Eric03 committed on
Commit 9357a7c · verified · 1 parent: 44c2175

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. 2004.04312/main_diagram/main_diagram.drawio +0 -0
  2. 2004.04312/paper_text/intro_method.md +36 -0
  3. 2004.12935/main_diagram/main_diagram.drawio +1 -0
  4. 2004.12935/main_diagram/main_diagram.pdf +0 -0
  5. 2004.12935/paper_text/intro_method.md +178 -0
  6. 2005.09812/main_diagram/main_diagram.pdf +0 -0
  7. 2010.13685/main_diagram/main_diagram.drawio +1 -0
  8. 2010.13685/main_diagram/main_diagram.pdf +0 -0
  9. 2010.13685/paper_text/intro_method.md +221 -0
  10. 2110.03618/main_diagram/main_diagram.drawio +1 -0
  11. 2110.03618/paper_text/intro_method.md +7 -0
  12. 2111.14792/main_diagram/main_diagram.drawio +1 -0
  13. 2111.14792/paper_text/intro_method.md +54 -0
  14. 2112.02889/main_diagram/main_diagram.drawio +0 -0
  15. 2112.02889/paper_text/intro_method.md +37 -0
  16. 2203.06063/main_diagram/main_diagram.drawio +1 -0
  17. 2203.06063/main_diagram/main_diagram.pdf +0 -0
  18. 2203.06063/paper_text/intro_method.md +86 -0
  19. 2203.16910/main_diagram/main_diagram.drawio +0 -0
  20. 2203.16910/paper_text/intro_method.md +169 -0
  21. 2207.04174/main_diagram/main_diagram.drawio +0 -0
  22. 2207.04174/paper_text/intro_method.md +120 -0
  23. 2207.10883/main_diagram/main_diagram.drawio +1 -0
  24. 2207.10883/main_diagram/main_diagram.pdf +0 -0
  25. 2207.10883/paper_text/intro_method.md +46 -0
  26. 2207.11761/main_diagram/main_diagram.drawio +1 -0
  27. 2207.11761/main_diagram/main_diagram.pdf +0 -0
  28. 2207.11761/paper_text/intro_method.md +106 -0
  29. 2209.06941/main_diagram/main_diagram.drawio +1 -0
  30. 2209.06941/main_diagram/main_diagram.pdf +0 -0
  31. 2209.06941/paper_text/intro_method.md +77 -0
  32. 2210.16834/main_diagram/main_diagram.drawio +1 -0
  33. 2210.16834/main_diagram/main_diagram.pdf +0 -0
  34. 2210.16834/paper_text/intro_method.md +59 -0
  35. 2301.02311/main_diagram/main_diagram.drawio +0 -0
  36. 2301.02311/paper_text/intro_method.md +80 -0
  37. 2302.09170/main_diagram/main_diagram.drawio +0 -0
  38. 2302.09170/paper_text/intro_method.md +58 -0
  39. 2303.09032/main_diagram/main_diagram.drawio +0 -0
  40. 2303.09032/main_diagram/main_diagram.pdf +0 -0
  41. 2303.09032/paper_text/intro_method.md +55 -0
  42. 2303.15493/main_diagram/main_diagram.drawio +1 -0
  43. 2303.15493/main_diagram/main_diagram.pdf +0 -0
  44. 2303.15493/paper_text/intro_method.md +107 -0
  45. 2304.06306/main_diagram/main_diagram.drawio +0 -0
  46. 2304.06306/paper_text/intro_method.md +55 -0
  47. 2305.19926/main_diagram/main_diagram.drawio +0 -0
  48. 2305.19926/paper_text/intro_method.md +101 -0
  49. 2305.20062/main_diagram/main_diagram.drawio +0 -0
  50. 2305.20062/paper_text/intro_method.md +44 -0
2004.04312/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render.
 
2004.04312/paper_text/intro_method.md ADDED
@@ -0,0 +1,36 @@
# Introduction

Learning a good language representation is a fundamental component of addressing a vision-language task, such as phrase grounding [22,34] or visual question answering [3,17]. Many recent methods have demonstrated that learning text representations aligned to images can boost performance across many vision-language tasks over traditional text-only trained representations [8,19,29,37,38]. This is often accomplished by using auxiliary vision-language tasks when learning the language representation (such as image-sentence retrieval, as shown in Figure 1(a)). However, these methods often only support a single language. Although some work has addressed a multilingual scenario (e.g., [16,23,41]), these

Project page: <http://ai.bu.edu/smalr>

![](_page_1_Figure_2.jpeg)

- (a) Multilingual image-sentence retrieval
- (b) MSCOCO multilingual retrieval

Fig. 1: (a) presents multilingual bidirectional retrieval. We embed sentences in ten languages with SMALR, which is used to compute the highest-scoring image. (b) shows the effect of the number of training languages on performance for prior work MULE [23] and LIWE [41]. LIWE is the original model, hereafter referred to as S-LIWE. The plot contains two points: L-LIWE, [41] trained with a larger embedding (120-D vs. 24-D) for fair comparison, in orange, and SMALR, in yellow. The points are scaled to the number of parameters, P; specifically, their area is $(\frac{P}{10^6})^{\frac{3}{2}}$. SMALR is able to outperform all prior work with few parameters.

methods do not scale well to support many languages in terms of memory or performance (see Figure 1(b)). As the number of languages grows, methods like LIWE [41] that use character-based recognition systems can save memory but suffer from performance degradation. In contrast, methods that learn to align word embeddings across languages can maintain (or even improve) performance as languages are added (e.g., [16,23]), but require additional parameters for the word embeddings that represent each new language's vocabulary.

This becomes a challenge when scaling to support many languages, as an increasing majority of trainable parameters are required for representing each language (e.g., $\sim 93\%$ of the parameters of [23] with ten languages). While pretrained word embeddings could be used without fine-tuning, e.g., Multilingual BERT [13] or MUSE [11], this comes at a significant cost in downstream task performance [8,23].

To address this trade-off between multilingual capacity and performance, we propose a Scalable Multilingual Aligned Language Representation (SMALR) model, which we demonstrate achieves strong task performance while also being highly compact compared to state-of-the-art word embedding methods [13,24,26]. As seen in Figure 1, LIWE drops over 10% in performance going from supporting one to ten languages. MULE slightly increases performance with more languages, but requires 6x more parameters compared to its single-language model. Our approach, SMALR, outperforms both with only 1/5th the parameters of MULE. We learn to efficiently represent each language by separating our language embedding into language-specific and language-agnostic token representations. As language follows a long-tailed distribution, only a few words occur often, with large portions of tokens occurring very rarely. For example, in the MSCOCO dataset [28] there are 25,126 unique tokens, but 61% of them occur fewer than 4 times. This suggests that having unique representations for every token in the vocabulary is unnecessary, as only a subset would affect downstream task performance significantly. Thus, we use a Hybrid Embedding Model (HEM) that contains language-specific embeddings for the common tokens, thereby providing a good representation for each language, and a compact language-agnostic representation for rare and uncommon words. This results in a model that needs far fewer unique embeddings than prior work without sacrificing performance.
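The vocabulary split behind this idea can be sketched with a bare frequency cutoff. This is an illustrative reconstruction, not the paper's code: the function name and `min_count` threshold are our assumptions, and the actual routing of rare tokens to shared embeddings uses the FastText-based assignment described below.

```python
from collections import Counter

def split_vocab(corpus_tokens, min_count=4):
    """Split one language's vocabulary into frequent tokens (kept
    language-specific) and rare tokens (routed to a compact, shared
    language-agnostic embedding). Threshold is illustrative."""
    counts = Counter(corpus_tokens)
    specific = {t for t, c in counts.items() if c >= min_count}
    shared = set(counts) - specific
    return specific, shared

tokens = ["dog", "dog", "dog", "dog", "cat", "cat", "double-decker"]
specific, shared = split_vocab(tokens, min_count=2)
# Frequent tokens keep language-specific embeddings; the rare
# "double-decker" is routed to the shared embedding table.
```

Under such a split, only the frequent subset pays the per-language parameter cost, which is what keeps the model compact as languages are added.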

We learn how to assign tokens to the language-agnostic representation in a pretraining step, which uses monolingual FastText embeddings [7] to map similar words to the same token, e.g., mapping "double-decker" in English and "impériale" in French to the same shared token. Once we obtain our language embeddings, our goal is to align them so that semantically similar words, even those from other languages, are embedded nearby. To accomplish this, we use a multilingual masked language model, where we randomly mask words and then predict them based on context. Unlike similar masking approaches used to train models such as BERT [13], we mask words of sentences from any two languages, say German and Chinese, that are semantically similar sentences referring to the same image, and use the context from each to predict both masked tokens. To further encourage cross-language alignment, we also use an adversarial language classifier and neighborhood constraints that have been used in prior work [23]. These universal language embeddings are provided as input to a multimodal model that learns to relate them to images. Finally, we use a cross-lingual consistency module that uses machine translations to reason about image-sentence similarity across multiple languages, which we show significantly boosts performance. Figure 2 contains an overview of our model.
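The cross-language masking step can be sketched as follows. This is a toy illustration with names of our own choosing, operating on whitespace tokens rather than the model's actual vocabulary; a model would then predict both masked targets from the combined bilingual context.

```python
import random

def mask_pair(sent_a, sent_b, mask_token="[MASK]", seed=0):
    """Given two sentences in different languages describing the same
    image, mask one random token in each and return the masked
    sentences plus the two prediction targets. Illustrative sketch."""
    rng = random.Random(seed)
    a, b = sent_a.split(), sent_b.split()
    ia, ib = rng.randrange(len(a)), rng.randrange(len(b))
    targets = (a[ia], b[ib])
    a[ia] = b[ib] = mask_token
    return " ".join(a), " ".join(b), targets

ma, mb, (ta, tb) = mask_pair("a red double-decker bus",
                             "un bus à impériale rouge")
```

Because the two sentences refer to the same image, predicting each masked token from both contexts pushes the two languages toward a shared embedding space.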

We use bidirectional image-sentence retrieval as the primary evaluation of our multilingual language representation. In this task, the goal is to retrieve a relevant sentence from a database given an image, or to retrieve a relevant image from a database given a sentence. We augment the current multilingual datasets Multi30K [6,14,15,43] and MSCOCO [27,28,31] using machine translations so that every image has at least five sentences across ten diverse languages: English (En), German (De), French (Fr), Czech (Cs), Chinese (Cn), Japanese (Ja), Arabic (Ar), Afrikaans (Af), Korean (Ko), and Russian (Ru). See the supplementary for details on our data augmentation procedure. This constitutes the highest number of languages used in multilingual learning for vision-language tasks to date, supporting more than double the number of visually-semantically aligned languages compared to prior work [5,11,16,23,36,41].

We list the contributions of our work below:

- SMALR, a scalable multilingual model for training visually-semantically aligned word embeddings that outperforms the state-of-the-art on multilingual image-sentence retrieval while also requiring few model parameters.
- A comparison to four types of vocabulary reduction methods that serve as baselines to complement our evaluation against prior work.

![](_page_3_Figure_1.jpeg)

Fig. 2: The contributions of SMALR are in blue: a Hybrid Embedding Model (HEM), a Masked Cross-Language Model (MCLM), and a Cross-Lingual Consistency stage (CLC). HEM embeds input sentences as a mixture of language-specific and language-agnostic representations using a hard attention mechanism. The MCLM component provides an additional loss to enforce language alignment, while also augmenting the original dataset with masked sentences.

- A Masked Cross-Language Modeling (MCLM) procedure that further aligns the multilingual embedding, stabilizing the variance in performance over all languages, and serves as an additional data augmentation technique.
- A Cross-Lingual Consistency (CLC) module, the first of its kind, that learns how to aggregate an ensemble of predictions across languages made with machine translations, which, combined with our SMALR architecture, results in a total improvement over the state-of-the-art of 3-4%.
2004.12935/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
+ <mxfile host="www.draw.io" modified="2020-03-12T18:07:16.909Z" agent="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.132 Safari/537.36" version="12.8.1" etag="-lRNJKHgjML5f1kHmXvW" type="google"><diagram id="jhzURAJYaulJyjThm1uh">7VxZd6u2Fv41fowXICY/ZjhpH067snrae9tHArKtFpALOMP59ZVAAoSELSYnaeMHLxBCiL2/PWqLFbhNXn7IgsP+JxzBeGUZ0csK3K0sy/N98k8bXqsGB7CGXYaiqsloGr6h77BqNHnrEUUwZ21VU4FxXKCD2BjiNIVhIbQFWYafxW5bHEdCwyHYQanhWxjEcuv/UVTsq1bf8pr2HyHa7fmTTXdTXUkC3pm9Sb4PIvzcagJfVuA2w7iojpKXWxhT2ol0ue+5Wk8sg2mhc4NV3fAUxEf2bmxexSt/2V2GjwfWDWYFfFGROHjk3Q15Cmb9YgQQECewyF5JFzaQy+7gWPDs6vy5oazLGb9vUdUC/treND/Aph4w5u7qJzXvTw4YCdTkAOfJQaiRRpD2N1bg5nmPCvjtEIT06jMBO2nbFwkZ/84kh3mR4b/gLY5xVt4N3NCHj1tyZYvT4j5IUExf+1eUEDhbxs/wmfz/gpMgZV0Y9k1Az1Ect4aKAuhvQ9IeBfm+nFDzRI5K2oLJ7FBBHwPojHs52eaYdZJjpmsLLHMlhtm2KzPMBMZ0DtkSQ2BEZJOd4qzY4x1Og/hL03rTsKykR93nK8YH1vgnLIpXRuzgWOAuG4OsuKaKgzSkOIW87R7RiZZIgGnU6UFaWtclKBjlT8kyDXDAF1T8zrrT4z/oY9YOO7t74bOiJ6/spAsoMkF2CvgLtc4rQlPqnoYJYQY+ZiHr5aiRk8E4KNCTOJQKB+zWB4zIQ1o6QlQSptGBEpn7Dhbsrg6a6mloAcw5rwKo8CBiDa5jtCO8uEtQFJVAC1hDSCgEM4o10nUbl6DYllCQhfo8q1tAFJVPORabVg+QRJVxX/5OgnEG9WB3NLpvrh1JQ5iWQqUDb7qCcN8d/4YybY8z9J30CjjX4+ARxg84RwXCwvT4i3ztdKhfqAUdlaXqwGNb/i4ND4X5UILDnsF6eBI4bnGSHFNqHy3jIcOEIijdrcD1ik4LmNU7qw87qFIRWAJUL/TasFDb8UFIMxVQnYN3zkbUw64pMw8AmXnuDMzzR5j+EcaekCJ7/b190rKt9LQxruUZt64DnYQwDvIchYKfoHQZ37ufIKq3cV4DUwGVFW+J6lRHghA7eG11OFAHIdf3M5gbe6/rlmzMDqCrCYz1RTbnbdl5ERgCKI4WYwBaxsqLbgTUspUKS7qwO+vJwDQNNTLbKtBQ2K+6caIvbDueCDrfEYeo5i/5wtJAjuELA12ZQMupHipUtnNSSs73d/1ZpYonE1piFRwOGT5kKCggxUSG/6Rpo1trdW1cxuSP1u+Lm3xrA0Ru2L5s8m1rGX/NNM9rwCo/paLEwPxKv6IMspB1tE5SVScZ1kPt3lSK0hfegLUzA3VV2T83Js+4eSQHO3rwwGTBMoh+hVsUoiBDlJhVP/KIuqvEGZEjQ1Nl0/yebmjjhzCkztaOul1c4Hjy1VCIaYEPCow0SDBNXfkye/KfF2KyRk6TEOJAD49JfB0W+FRo+YiLAic8KL0Jwr92JZtVYaSapC2e42NBwi3CdJ6mXyBR1gHCl2v3xnXHqso+VtahraAqLXdtbVzfk7Mflr3mkZSgMjdr1zPrGyax3b4022vK/ufY7pu6bHe4RC/FdY3s5aewz8N1U5vrywu7K3F5yKqIMTFRYqy9Otj7o33xQ2VKLhFO8tBDiCd7PELtMFEbJd5boOSDMJs
nPupkRw3kE4mPBQDiTwSDOqQGHRNlUp0kDqKbNZCG4k7P6KzBhMU605fs3c9fH65gug/SkKCWPC5NcRGU9qw/PNFYyR9ufKAZOdA7Cd0FwnVgiT6o56kW31T1FPYchmiWjOW8OqfRKG21Y4pq5xJL9hdbfu/Yl6kqRZf5dU3Tp31ZiNeXXg4xPTET651O3Ha7m8aJ5RD57o367nOGRR7IUs76nKUbYX4sOT2ZH7Mn+EoBQYBEZCEoYIzKrDJKgh1kEvVuk8gqq/QmRSRdNKgyozzHMXfW+W2MVhNOrQato71PBfhe69PqeuGWhXSX8bnVa8lLqCE5xfv3kaiQopo+6Ui1yBOCz/mn9hmlfbyN0oteSv+ocrfVekuEngT+uX8faal4SbWrvCTbNelggsNLSQl+na/RoDTMYJCXoVG91pNNGrK1FFTOTmydOuFDvf70IaZbsaycbJCUslT+k5bH4+sVSvue3pFKgtyi6+a24c7UfFs2WJO+7KokXxbFoTXqQhnbBEGsixvk1WZbEbyCGYJXHjxd1uzPHkzIOvhS4YVkAq1OqsjTSxWdG2ejZ0mHRjs9s9WcVKf75CoVnUJmOYHR9ifrlKbpCUnNtWGcqegiJw8wI1gsVUjVNnCXhbApZQTGx3l3fbn0lvZwFNrDmacy68rsRLC+HlAVdYXiOJq7HaYC3jBOzqrT3T9XG3mi93TpkCu5uWlF3KpmMCcYLUqwPSPC/8b2ogGWV9+ikgei76z4h8oMoz8Z17lZOXcrrXXU0bIybz0S53m3wtU5sUWjAwTeBW+3OZwc48iZ/rew1v+uIP1SWUquH4018Pgq5LvOXNontJUOwM8mNzUXzc6/l6VlYcYInGopp6NhUbrFWTJSr47YSdWr6AZE9jy45oxgcGx5CEAR1gNrjlo8vgFWTNSkRZ2paSLciow5DhG+gkSP44SoA8uIgiJYhraSmdONOGdgiu2KNcY8oSUkWxSVU3NsKAZyDl9InnVZ8tvD/94BH+Ygut9ZKfYVOS53Kapb57WLVIt8VxJdVXh8KQXEmZNV1Chvae/uL3/zsMfhqv0ymgm8hTf1Pr2i5ZcuxnqRuh5YZ8mD8XbWooCeNQ9TtK7OyAAYWOMi4BF+DpcphSaiTF3pZoMVqfarCB5o2GlQMckICNKIbi6WTArekj4xJiFlFbOVqqrRadU0Rqk1a6TNmcO8eJ1tap5iBVe1T24W86Kqg+5Yk3m4HFb7x0NWYtbDNI3NMrMwcga+AV7DdWqL92J800h6Tv8e0WkCbIy10fqZHRhvbOVXh4TdcIoN8LY9A3XkpFd+JGoFpeXrNhsX//VbFkHPkv1ZLsyBUVUmahHdgpJDDBPy0h9IufRxps4hr1vCs/E7MYgpB34bxdaJWVTN5i083taeif/clokxHxcY5/FyRTnrNotRecjut26YfzzXMogtJ5Qubi+lDeLEQnotE+koKmeUunmW7a62nM7JYH7AaV6VQtWFGZRISuPJdv0bQw4+mpHt5C2GlU7ZGh/AWMr48g8NLG98I5hTvnwIo9vHEf691Y6AAkWiT1lUMwfDPhNLFzSXY7/FM87MKqys3bNndlkrO/ULM3zftN5GBelpY8s8bFs9jQWyXDz8/NyHNVYMh2SD5/Na55EnuZSnKwBgJIQtp7NyZ9ozQZicNp/Brro33xIHX/4B</diagram></mxfile>
2004.12935/main_diagram/main_diagram.pdf ADDED
Binary file (39.7 kB).
 
2004.12935/paper_text/intro_method.md ADDED
@@ -0,0 +1,178 @@
# Introduction

While NLP has not yet been applied to the field of SD, in recent years there have been notable applications of Artificial Intelligence (AI) in this area. This is evidenced by the rise of young research fields that seek to help meet the SDGs, such as *Computational Sustainability* [@gomes2019computational] and *AI for Social Good* [@hager2019artificial; @DBLP:journals/corr/abs-2001-01818].

In this context, Machine Learning, in particular in the field of Computer Vision [@de2018machine], has been applied to contexts ranging from conservation biology [@kwok2019ai], to poverty [@blumenstock2015predicting] and slavery mapping [@DBLP:journals/remotesensing/FoodyLBLW19], to deforestation and water quality monitoring [@DBLP:journals/remotesensing/HollowayM18].

Despite its positive impact, it is important to recognise that some AI techniques can act both as an enhancer and an inhibitor of sustainability. As recently shown by , AI might inhibit meeting a considerable number of targets across the SDGs and may result in inequalities within and across countries due to application biases. Understanding the implications of AI and its related fields for SD, or Social Good more generally, is particularly important for countries where action on SDGs is being focused and where issues are most acute [@unescoai; @unescolearningweek].

Various works highlight the importance of understanding the local context and engaging with local stakeholders, including beneficiaries, to achieve project sustainability. Where such information is not available, projects are designed and delivered based on the judgment of other actors (e.g. project funders, developers or domain experts [@risal2014mismatch; @axinn1988international; @harman2014international]). Their judgment, in turn, is subject to biases [@kahneman2011thinking] that are shaped by past experiences, beliefs, preferences and worldviews: such biases can include, for example, preferences towards a specific sector (e.g. energy or water), technology (e.g. solar, hydro) or gender group (e.g. solutions which benefit one gender disproportionately), which are pushed without considering local needs.

NLP has the potential to increase the availability of community-specific data to key decision makers and ensure project design is properly informed and appropriately targeted. However, careful attention needs to be paid to the potential for bias in data collection resulting from the interviewers [@bryman2016social], as well as the potential to introduce new bias through NLP.

<figure id="fig:proj_dev_schema" data-latex-placement="t!">
<figure id="fig:upv_wheel">
<img src="wheel_smaller.png" style="width:4.2cm" />
<figcaption>User-Perceived Value wheel.</figcaption>
</figure>
<figure id="fig:proj_dev_schema">
<img src="Figure_1_Flowchart.png" style="width:10.2cm" />
<figcaption>Flowchart of the intersection between NLP (purple square) and the delivery of SD projects.</figcaption>
</figure>
<figcaption>Using UPVs (<a href="#fig:upv_wheel" data-reference-type="ref" data-reference="fig:upv_wheel">1</a>) to build sustainable projects: note the role of NLP (purple square in <a href="#fig:proj_dev_schema" data-reference-type="ref" data-reference="fig:proj_dev_schema">3</a>).</figcaption>
</figure>

# Method

As a means to obtain qualitative data with the characteristics mentioned above, we adapt the User-Perceived Values (UPV) framework [@hirmerthesis]. The UPV framework builds on value theory, which is widely used in marketing and product design in the developed world [@sheth1991we; @woo1992cognition; @solomon2002value; @boztepe2007user]. Value theory assumes that a deep connection exists between what consumers perceive as important and their inclination to adopt a new product or service [@nurkka2009capturing].

In the context of developing countries, our UPV framework identifies a set of 58 UPVs which can be used to frame the wide range of perspectives on what is of greatest concern to project beneficiaries [@HIRMER2016UPVmethod]. UPVs (or *tier 3* (T3) values) can be clustered into 17 *tier 2* (T2) value groups, each one embracing a set of similar T3 values; in turn, T2 values can be categorized into 6 *tier 1* (T1) high-level value pillars, as follows [@HIRMER2014145]:

1. *Emotional*: contains the T2 values *Conscience*, *Contentment*, *Human Welfare* (tot. 9 T3 values)

2. *Epistemic*: contains the T2 values *Information* and *Knowledge* (tot. 2 T3 values)

3. *Functional*: contains the T2 values *Convenience*, *Cost Economy*, *Income Economy* and *Quality and Performance* (tot. 21 T3 values)

4. *Indigenous*: contains the T2 values *Social Norm* and *Religion* (tot. 5 T3 values)

5. *Intrinsic Human*: contains the T2 values *Health*, *Physiological* and *Quality of Life* (tot. 11 T3 values)

6. *Social Significance*: contains the T2 values *Identity*, *Status* and *Social Interaction* (tot. 11 T3 values)

The interplay between T1, T2 and T3 values is graphically depicted in the *UPV Wheel* (Figure [1](#fig:upv_wheel){reference-type="ref" reference="fig:upv_wheel"}).

See Appendix A for the full set of UPV definitions.

The UPV approach offers a theoretical framework to place communities at the centre of project design (Figure [3](#fig:proj_dev_schema){reference-type="ref" reference="fig:proj_dev_schema"}). Notably, it allows one to (a) facilitate more responsible and beneficial project planning [@gallarza2006value]; and (b) enable effective communication with rural dwellers. The latter allows project benefits to be communicated in a way that resonates with the beneficiaries' own understanding of benefits, as discussed by . This results in higher end-user acceptance, because the initiative is perceived to have personal value to the beneficiaries: as a consequence, community commitment is increased, eventually enhancing the project success rate and leading to more sustainable results [@hirmerthesis].

<figure id="fig:map" data-latex-placement="t!">
<figure id="fig:cards">
<img src="cuadraditos.png" style="height:4.2cm" />
<figcaption aria-hidden="true"></figcaption>
</figure>
<figure id="fig:game">
<img src="Group_UPV.JPG" style="height:4.2cm" />
<figcaption aria-hidden="true"></figcaption>
</figure>
<figure id="fig:map">
<img src="map.png" style="height:4.1cm" />
<figcaption aria-hidden="true"></figcaption>
</figure>
<figcaption>Playing the UPV game in Uganda. From left to right: <a href="#fig:cards" data-reference-type="ref" data-reference="fig:cards">4</a>) Cards for the items <em>generator</em>, <em>cow</em>, <em>flush toilet</em> and <em>newspapers</em> (adapted to the Ugandan context with the support of international experts and academics from the U. of Cambridge); <a href="#fig:game" data-reference-type="ref" data-reference="fig:game">5</a>) Women playing the UPV game in village <span class="smallcaps">(1)</span>; <a href="#fig:map" data-reference-type="ref" data-reference="fig:map">7</a>) Map of case-study villages.</figcaption>
</figure>

Data conveying the beneficiaries' perspective is seldom considered in practical applications, mainly because it comes in the form of unstructured qualitative interviews. As introduced above, data needs to be *structured* in order to be useful [@OECD; @unstats]. This makes the entire process very long and costly, and thus almost prohibitive in practice for most small-scale projects. In this context, AI, and more specifically NLP, presents a yet unexplored opportunity. Implementing successful NLP systems to automatically perform the annotation process on interviews (Figure [3](#fig:proj_dev_schema){reference-type="ref" reference="fig:proj_dev_schema"}, purple square), which constitutes the major bottleneck in the project planning pipeline (Section [4.1](#sec:corpus){reference-type="ref" reference="sec:corpus"}), would dramatically speed up the entire project life-cycle and drastically reduce its costs.

In this context, we introduce the task of *Automatic UPV classification*, which consists of annotating each sentence of a given input interview with the appropriate UPV labels which are (implicitly) conveyed by the interviewee.

To enable research in UPV classification, we release S2I, a corpus of labelled reports from 7 rural villages in Uganda (Figure [7](#fig:map){reference-type="ref" reference="fig:map"}). In this Section, we report on the corpus collection and annotation procedures and outline the challenges they pose for NLP.

**The UPV game.** As widely recognised in marketing practice [@van2005consumer], consumers are usually unable to articulate their own values and needs [@ulwick2002turn]. This requires the use of methods that elicit what is important, such as laddering [@reynolds2001laddering] or the Zaltman Metaphor Elicitation Technique (ZMET) [@coulter2001interpreting]. To avoid direct inquiry [@pinegar2006customers], developed an approach to identify perceived values in low-income settings by means of a game (hereafter referred to as the *UPV game*). Expanding on the items proposed by , the UPV game makes reference to 46 everyday-use items in rural areas[^4], which are graphically depicted (Figure [4](#fig:cards){reference-type="ref" reference="fig:cards"}). The decision to represent items graphically stems from the high level of illiteracy across developing countries [@unesco2013adult].

Building on the techniques proposed by Coulter *et al.* and Reynolds *et al.*, the UPV game is framed in the form of semi-structured interviews:\
[(1)]{.smallcaps} participants are asked to select 20 items, based on what is most important to them (*Select stimuli*);\
[(2)]{.smallcaps} to rank them in order of importance; and finally,\
[(3)]{.smallcaps} they have to give reasons as to why an item was important to them. *Why-probing* was used to encourage discussion (*Storytelling*).

**Case-Study Villages.** 7 rural villages were studied: 3 in the West Nile Region (Northern Uganda); 1 in Mount Elgon (Eastern Uganda); 2 in the Ruwenzori Mountains (Western Uganda); and 1 in South Western Uganda. All villages are located in remote areas far from the main roads (Figure [7](#fig:map){reference-type="ref" reference="fig:map"}). A total of 7 languages are spoken across the villages[^5].

<figure id="fig:stats_aggregated" data-latex-placement="t!">
<img src="all_codes.png" style="width:90.0%" />
<figcaption>UPV frequencies from the S2I corpus (see Appendix A for UPV definitions).</figcaption>
</figure>

**Data Collection Setting and Guidelines for Interviewers.** For each village, 3 native-speaker interviewers guided the UPV game. To ensure consistency and data quality, a two-day training workshop was held at Makerere University (Kampala, Uganda), and a local research assistant oversaw the entire data collection process in the field.

**Data Collection.** 12 people per village were interviewed, consisting of an equal split between men and women with varying backgrounds and ages. In order to gather complete insight into the underlying decision-making process -- which might be influenced by the context [@barry2008determining] -- interviews were conducted both individually and in groups of 6 people following standard focus group methods [@silverman2013doing; @bryman2016social]. Each interview lasted around 90 minutes. The data collection process took place over a period of 3 months and resulted in a total of 119 interviews.

**Ethical Considerations.** Participants received compensation in the amount of 1 day of labour. An informed consent form was read out loud by the interviewer prior to the UPV game, to cater for the high level of illiteracy amongst participants. To ensure integrity, a risk assessment following the University of Cambridge's *Policy on the Ethics of Research Involving Human Participants and Personal Data* was completed. To protect the participants' identity, locations and proper names were anonymized.

**Data Annotation.** The interviews were translated[^6] into English, analysed and annotated by domain experts[^7] using the computer-assisted qualitative data analysis software *HyperResearch* [@hesse1991hyperresearch]. To ensure consistency across interviews, they were annotated following , using cross-sectional indexing [@mason2002organizing]. Due to the considerable size of the collected data, the annotation process took around 6 months.

We obtain a final corpus of 5102 annotated utterances from the interviews. Samples have an average length of 20 tokens. The average number of samples per T3 label is 169.1, with an extremely skewed distribution: the most frequent T3, *Economic Opportunity*, occurs 957 times, while the least common, *Preservation of the Environment*, occurs only 7 times (Figure [8](#fig:stats_aggregated){reference-type="ref" reference="fig:stats_aggregated"}).

58.8% of the samples are associated with more than 1 UPV, and 22.3% with more than 2 UPVs (refer to Appendix B for further details on UPV correlation). Such characteristics make UPV classification highly challenging to model: the task is an extreme multi-class multi-label problem with high class imbalance. Imbalanced label distributions pose a challenge for many NLP applications -- such as sentiment analysis [@li2011imbalanced], sarcasm detection [@liusarcasm], and NER [@tomanek2009reducing] -- but are not uncommon in user-generated data [@imran2016twitter]. The following interview excerpt illustrates the multi-class multi-label characteristics of the problem:

1. *If I have a flush toilet in my house I can be a king of all kings because I can't go out on those squatting latrines* \[Reputation\]\[Aspiration\]

2. *And recently I was almost rapped* (sic.) *when I escorted my son to the latrine* \[Security\]

3. *That \[\...\] we have so many cases in our village of kids that fall into pit latrine* \[Safety\]\[Caring\]

<figure id="fig:augm">
<div class="minipage">
<img src="architecture_correct.png" style="width:95.0%" />
</div>
<div class="minipage">
<img src="examples_aug.png" />
</div>
<figcaption>Examples of negative samples generated through data augmentation.</figcaption>
</figure>

Further challenges for NLP are introduced by the frequent use of non-standard grammar and poor sentence structure, which often occur in oral production [@cole1995challenge]. Moreover, manual transcription of interviews may lead to spelling errors, thus increasing out-of-vocabulary tokens (OOVs). This is illustrated in the excerpts below (spelling errors are underlined):

- *Also men like phone [there]{.underline} are so jealous for their women for example like in the morning my husband called me and asked that are you in church; so that's why they picked a phone.*

- *A house keeps secrecy for example \[\...\] I can be bitten by a snake if I had sex outside \[\...\] you see, me I cannot because [may]{.underline} child is looking for [mangoes]{.underline} in the bush and finds me there, how do I explain, can you imagine!!*

As outlined above, given an input interview, the task consists of annotating each sentence with the appropriate UPV(s). The extreme multi-class multi-label nature of the task (Section [4.2](#sec:corpus_nlp){reference-type="ref" reference="sec:corpus_nlp"}) makes it impractical to tackle as a standard *multi-class classification* problem---where, given an input sample $x$, a system is trained to predict its label from a tagset $T=\{l_1, l_2, l_3\}$ as $x\rightarrow l_2$ (i.e. \[0,1,0\]). Instead, we model the task as a *binary classification* problem: given $x$, the system learns to predict its *relatedness* to each of the possible labels, i.e. $(x, l_1) \rightarrow 0$, $(x, l_2) \rightarrow 1$ and $(x, l_3) \rightarrow 0$ [^8].
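This reformulation can be sketched in a few lines; the snippet below is an illustrative expansion of a multi-label annotation into scored (text, label) pairs, with names of our own choosing.

```python
def binarize(sample_text, gold_labels, tagset):
    """Recast multi-label classification as binary relatedness:
    one (text, label, score) triple per label in the tagset,
    scored 1 if the label is gold, else 0."""
    return [(sample_text, label, 1 if label in gold_labels else 0)
            for label in tagset]

pairs = binarize("x", {"l2"}, ["l1", "l2", "l3"])
# → [("x", "l1", 0), ("x", "l2", 1), ("x", "l3", 0)]
```

A sample with several gold UPVs simply yields several positive pairs, which is how the multi-label aspect is absorbed into the binary setup.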
120
+
121
+ We consider the samples from the S2I corpus as *positive instances*. Then, we generate three kinds of *negative instances* by pairing the sample text with random labels. To illustrate, consider the three T2 classes *Convenience*, *Identity* and *Status*, which contain the following T3 values:
122
+
123
+ - *Contentment*$_{T2}$ = {*Aesthetic*$_{T3}$, *Comfort*$_{T3}$, \...}
124
+
125
+ - *Identity*$_{T2}$ = {*Appearance*$_{T3}$, *Dignity*$_{T3}$\...}
126
+
127
+ - *Status*$_{T2}$ = {*Aspiration*$_{T3}$, *Reputation*$_{T3}$, \...}
128
+
129
+ Moreover, *Contentment*$_{T2}$ $\in$ *Emotional*$_{T1}$ and {*Identity*$_{T2}$, *Status*$_{T2}$} $\in$ *SocialSignificance*$_{T1}$. Given a sample $x$ and its gold label *Aspiration*$_{T3}$, we can generate the following training samples:
130
+
131
+ - $(x, \text{\textit{Aspiration}}_{T3})$ is a *positive sample*;
132
+
133
+ - $(x, \text{\textit{Reputation}}_{T3})$ is a *mildly negative sample*, as $x$ is linked with a wrong T3 with the same T2;
134
+
135
+ - $(x, \text{\textit{Dignity}}_{T3})$ is a *negative sample*, as $x$ is associated with a wrong T3 from a different T2 class, but both T2 classes belong to the same T1; and
136
+
137
+ - $(x, \text{\textit{Aesthetic}}_{T3})$ is a *strictly negative sample*, as $x$ is associated with a wrong label from a T2 class in a different T1.
138
+
139
+ In this way, during training the system is exposed to positive (real) samples and negative (randomly generated) samples.
140
+
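The grading of negatives by hierarchical distance can be sketched as follows (a minimal Python illustration on the T1/T2/T3 fragment above; the dictionary layout and helper names are hypothetical, not the paper's implementation):

```python
# Sketch of the negative-sampling scheme above on an illustrative fragment of
# the UPV hierarchy (T1 -> T2 -> T3); helper names are hypothetical.
HIERARCHY = {
    "Emotional": {"Contentment": ["Aesthetic", "Comfort"]},
    "SocialSignificance": {
        "Identity": ["Appearance", "Dignity"],
        "Status": ["Aspiration", "Reputation"],
    },
}

def parents(t3):
    """Return the (T1, T2) ancestors of a T3 label."""
    for t1, t2s in HIERARCHY.items():
        for t2, t3s in t2s.items():
            if t3 in t3s:
                return t1, t2
    raise KeyError(t3)

def negative_kind(gold_t3, wrong_t3):
    """Grade a wrong label by its hierarchical distance from the gold label."""
    g1, g2 = parents(gold_t3)
    w1, w2 = parents(wrong_t3)
    if g2 == w2:
        return "mildly negative"    # same T2 as the gold label
    if g1 == w1:
        return "negative"           # different T2, same T1
    return "strictly negative"      # different T1
```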
141
+ A UPV classification system should satisfy the following desiderata: (1) it should be relatively lightweight, given that it will be used in the context of developing countries, which may suffer from access bias[^9]; and (2) its goal is not to replace the work of human SD experts entirely, but rather to reduce the time needed for interview annotation. In this context, false positives are quick to notice and delete, while false negatives are more difficult to spot and correct. Moreover, when assessing a community's needs and values, missing a relevant UPV is worse than including one which was not originally present. For these reasons, recall is particularly important for a UPV classifier.
142
+
143
+ In the next Section, we provide a set of strong baselines for future reference.
144
+
145
+ *Embedding Layer.* The system receives an input sample $(x, T3)$, where $x$ is the sample text as a sequence of word embeddings $(e_1, ..., e_n)$ and $T3$ is the T3 label as the sequence of its token embeddings $(e'_1, ..., e'_m)$, with $e_i$ the word embedding representation of the token at position $i$. We obtain a T3 embedding $e_{T3}$ for each T3 label using a max-pool operation over its word embeddings: given the short length of T3 codes, this proved to work well, and is similar to findings in relation extraction and targeted sentiment analysis [@tang2015effective]. We replicate $e_{T3}$ $n$ times and concatenate it to the text's word embeddings $x$ (Figure [\[fig:architecture\]](#fig:architecture){reference-type="ref" reference="fig:architecture"}).
146
+
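The pool-replicate-concatenate step can be sketched in numpy (dimensions are illustrative assumptions, not the paper's actual embedding sizes):

```python
# Numpy sketch of the embedding layer described above: max-pool the label's
# token embeddings into e_T3, replicate it n times, and concatenate it to the
# text's token embeddings. Dimensions are illustrative.
import numpy as np

def build_input(text_emb, label_emb):
    """text_emb: (n, d); label_emb: (m, d); returns (n, 2d)."""
    e_t3 = label_emb.max(axis=0)                    # max-pool over label tokens
    tiled = np.tile(e_t3, (text_emb.shape[0], 1))   # replicate e_T3 n times
    return np.concatenate([text_emb, tiled], axis=1)
```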
147
+ *Encoding Layer.* We encode the concatenated input with a forward LSTM [@gers1999learning]. We then apply attention to capture the key parts of the input text w.r.t. the given T3. In detail, given the output matrix of the LSTM layer $H = [h_1, ..., h_n]$, we produce a hidden representation $h_{text}$ as follows:
148
+
149
+ $$\begin{aligned}
+ M &= \tanh\left(\begin{bmatrix} W_h H \\ W_v e_{T3} \otimes e_N \end{bmatrix}\right)\\
+ \alpha_{text} &= \mathrm{softmax}(w^{\top} M)\\
+ h_{text} &= H\alpha_{text}^{\top}
+ \end{aligned}$$
+
+ where $e_N$ is a vector of $n$ ones, so that $W_v e_{T3} \otimes e_N$ replicates the projected T3 embedding across token positions.
160
+
161
+ This is similar in principle to attention-based LSTM architectures from prior work, and proved to work better than classic attention over $H$ on our data.
162
+
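A numpy sketch of this attention step, under assumed dimensions ($H$ is the $d \times n$ matrix of LSTM outputs; the weight matrices here are placeholders, not trained parameters):

```python
# Illustrative numpy version of the attention layer sketched above: project the
# hidden states and the replicated label embedding, score each position, and
# return the attention-weighted sum of hidden states.
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def attend(H, e_t3, W_h, W_v, w):
    """H: (d, n) LSTM outputs; e_t3: (k,) label embedding; returns h_text: (d,)."""
    n = H.shape[1]
    # [W_h H ; W_v e_t3 (x) 1_n] stacked row-wise, then squashed with tanh
    M = np.tanh(np.vstack([W_h @ H, np.outer(W_v @ e_t3, np.ones(n))]))
    alpha = softmax(w @ M)   # (n,) attention weights over token positions
    return H @ alpha         # weighted combination of hidden states
```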
163
+ *Decoding Layer.* We predict $\hat{y} \in [0,1]$ with a dense layer followed by a sigmoidal activation.
164
+
165
+ Each T3 comes with a short description, which was written by domain experts and used during manual labelling (the complete list is in the Appendix A). We integrate information from such descriptions into our model as follows: given the ordered word embeddings from the UPV description $(e_1, ..., e_d)$, we obtain a description representation $h_{descr}$ following the same steps as for the sample text.
166
+
167
+ In line with previous studies on siamese networks [@yan2018few], we observe better results when sharing the weights between the two LSTMs. We keep two separate attention layers for sample texts and descriptions. We concatenate $h_{text}$ and $h_{descr}$ and feed the resulting vector to the output layer.
168
+
169
+ A clear hierarchy exists between T3, T2 and T1 values (Section [3](#sec:upv_theory){reference-type="ref" reference="sec:upv_theory"}). We integrate such information using multi-task learning [@caruana1997multitask; @DBLP:journals/corr/Ruder17a]. Given an input sample, we predict its relatedness not only w.r.t. a T3 label, but also with its corresponding T2 and T1 labels[^10]. In practice, given the hidden representation $h = h_{text} \oplus h_{descr}$, we first feed it into a dense layer $dense_{T1}$ to obtain $h_{T1}$, and predict $\hat{y}_{T1}$ with a sigmoidal function. We then concatenate $h_{T1}$ with the previously obtained $h$, and we predict $\hat{y}_{T2}$ with a T2-specific dense layer $\sigma(dense_{T2}(h \oplus h_{T1}))$. Finally, $\hat{y}_{T3}$ is predicted as $\sigma(dense_{T3}(h \oplus h_{T2}))$.
170
+
171
+ In this way, the prediction $\hat{y}_i$ is based on both the original $h$ and the hidden representation computed in the previous stage of the hierarchy, $h_{i-1}$ (Figure [\[fig:architecture\]](#fig:architecture){reference-type="ref" reference="fig:architecture"}).
172
+
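The cascade can be sketched in numpy as follows (layer sizes, the tanh dense layers, and the random placeholder weights are illustrative assumptions, not the paper's exact parametrization):

```python
# Illustrative numpy sketch of the hierarchical decoding cascade: each level
# concatenates the shared representation h with the previous level's hidden
# vector before its own dense layer and sigmoidal readout.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def hierarchical_decode(h, levels):
    """levels: per-level (W, b, u, c) parameters; returns [y_T1, y_T2, y_T3]."""
    prev = h
    preds = []
    for W, b, u, c in levels:
        hidden = np.tanh(W @ prev + b)          # level-specific dense layer
        preds.append(sigmoid(u @ hidden + c))   # predicted relatedness y_Ti
        prev = np.concatenate([h, hidden])      # h (+) h_Ti feeds the next level
    return preds
```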
173
+ |       | text | +att | +descr | +att+descr |
+ |-------|------|------|--------|------------|
+ | P     | 77.5 | 78.1 | **80.4** | 78.9 |
+ | R     | 65.5 | **71.0** | 66.5 | 70.6 |
+ | $F_1$ | 71.0 | 74.2 | 72.8 | **74.4** |
2005.09812/main_diagram/main_diagram.pdf ADDED
Binary file (17.5 kB). View file
 
2010.13685/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
2010.13685/main_diagram/main_diagram.pdf ADDED
Binary file (27.2 kB). View file
 
2010.13685/paper_text/intro_method.md ADDED
@@ -0,0 +1,221 @@
1
+ # Introduction
2
+
3
+ Credit assignment, i.e. determining how to correctly associate delayed rewards with states or state-action pairs, is a crucial problem for reinforcement learning (RL) agents (Sutton and Barto, 2018). Model-based RL (MBRL) agents gradually learn a model of the rewards and transition dynamics through interaction with their environments and use the estimated model to find better policies or predictions (e.g., Sutton, 1990a; Peng and Williams, 1993; Moore and Atkeson, 1993; McMahan and Gordon, 2005; Sutton et al., 2012; Farahmand et al., 2017; Farahmand, 2018; Abachi et al., 2020; Wan et al., 2019; Silver et al., 2017; Schrittwieser et al., 2019; Deisenroth et al., 2015; Hester and Stone, 2013; Ha and Schmidhuber, 2018; Oh et al., 2017). However, the efficiency of MBRL depends on learning a model that is useful for its purpose. In this paper, we focus specifically on the use of models for value credit assignment.
4
+
5
+ Broadly, we refer to *planning* as any internal processing that an agent can perform without additional experience to improve prediction and/or performance. Within this nomenclature, we define *models* as knowledge about the internal workings of the environment, which can be routinely re-used through planning. One way of using models is by *forethought* or *trying things in your head* (Sutton and Barto, 1981), which requires learning to predict aspects of the future and planning *forward*, or *in anticipation* to achieve goals. Dyna-style planning (Sutton, 1990a) chooses a (possibly hypothetical) state and action and predicts the resulting reward and next state, which are then used for credit assignment.
6
+
7
+ The origins of the chosen state and action are referred to as *search control strategies*. Lin (1992) proposed to use states actually experienced, and introduced the idea of replaying prior experience (Lin, 1992; Mnih et al., 2015). Combinations of these two approaches result in *prioritized sweeping* variants (Moore and Atkeson, 1993; Peng and Williams, 1993; McMahan and Gordon, 2005), which generalize the idea of replaying experience in backward order (Lin, 1992) by prioritizing states based on the potential improvement in the value function estimate upon re-evaluation. From high-priority states, forward models are used to perform additional value function updates, increasing computational efficiency (e.g., van Seijen and Sutton, 2013). An investigation into *search-control strategies* (Pan et al., 2018) reveals the utility of additional prioritization for guiding learning towards relevant, causal or interesting states.
8
+
9
+ In this paper, we work to understand how different phenomena caused by the structure of an environment favor the use of forward or backward planning mechanisms for credit assignment. We define *backward models* as learning to predict potential predecessors of observed states, either through explicit predictors of the environment or via planner-aware models, where the latter account for how the planner performs credit assignment. Backward models are interesting from two standpoints: (i) they can be used to causally change predictions or behaviour in hindsight, thereby naturally prioritizing states where predictions need to be (re)-evaluated; (ii) modeling errors in backward models can sometimes be less detrimental, because updating misspecified imaginary states with real experience may be less problematic than the reverse (van Hasselt et al., 2019; Jafferjee et al., 2020). We hope that additional understanding of the mechanisms of backward planning paves the way for new, principled algorithms that use models to seamlessly integrate both forethought and hindsight (as had been the case in traditional planning methods – LaValle, 2006).
10
+
11
+ The estimation and usage of models can be done in many ways (van Hasselt et al., 2019; Van Seijen and Sutton, 2015; Parr et al., 2008; Wan et al., 2019). The conventional approach is to learn explicit predictors of the environment which, if accurate enough, lead to good policies. However, no model is perfect. Model error depends on the choice of predictors and on whether the true environment dynamics can be represented. *Planner-aware model learning* suggests learning instead only those aspects of the environment relevant to the way in which the model is going to be used by the planner. This class of methods (Farahmand et al., 2017; Farahmand, 2018; Joseph et al., 2013; Silver et al., 2017; Oh et al., 2017; Farquhar et al., 2017; Luo et al., 2019; D'Oro et al., 2019; Schrittwieser et al., 2019; Abachi et al., 2020; Ayoub et al., 2020) incorporates knowledge about the value function or policy when learning the model. We describe a spectrum of methods for model estimation. On one end, we have environment predictors that rely on maximum likelihood estimation based on supervised learning. Towards the opposite end, constraints can be progressively relaxed by accounting for how planners use the models, ultimately leading to fully abstract models – i.e. learnable black boxes (Xu et al., 2020; Oh et al., 2020).
12
+
13
+ **Contributions** We investigate the emergent properties of planning forward and backward with learned models of the world. We justify the use of backward models for identifying relevant states from which to recompute prediction errors, and lay out the design choices available with respect to what the model represents, how it is estimated, and how it is parametrized. We review these in the broader context of model estimation. Finally, we conduct an empirical study on illustrative prediction and control tasks, which builds intuition and provides evidence for our findings.
14
+
15
+ We consider the usual RL setting (Sutton and Barto, 2018) in which an agent interacts with an environment, modelled as a (discounted) Markov decision process (MDP) (Puterman, 1994) $(\mathcal{S}, \mathcal{A}, \mathcal{P}^*, r^*, \gamma)$, with state space $\mathcal{S}$, action space $\mathcal{A}$, state-transition distribution $\mathcal{P}^*: \mathcal{S} \times \mathcal{A} \to \mathcal{P}(\mathcal{S})$ (where $\mathcal{P}(\mathcal{S})$ is the set of probability distributions on the state space and $\mathcal{P}^*(s'|s,a)$ denotes the probability of transitioning to state $s'$ from $s$ by choosing action $a$), reward function $r^*: \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to \mathbb{R}$, and discount factor $\gamma \in [0,1)$. The agent acts according to a policy $\pi: \mathcal{S} \to \mathcal{P}(\mathcal{A})$, where $\pi(a|s)$ denotes the probability of choosing action $a$ in state $s$. In the prediction setting, the goal is to estimate the value function $v_{\pi}(s) = \mathbb{E}_{\pi}\left[\sum_{t=0}^{\infty} \gamma^t R_{t+1} \mid S_0 = s\right]$, the expected discounted sum of rewards obtained when following $\pi$ from state $s$.
16
+
17
+ In general, the agent does not directly observe the true state of the environment $s$, and instead observes or constructs a feature vector $\mathbf{x}(s)$. The value function can then be approximated using a parametrized function $v_{\mathbf{w}}(s) \approx v_{\pi}(s)$, with $\mathbf{w} \in \mathbb{R}^d$ and $d$ the size of the feature representation.
18
+
19
+ This estimate can be linear: $v_{\mathbf{w}}(s) = \mathbf{w}^{\top} \mathbf{x}(s)$ , or a non-linear arbitrary function in the general case. As a shorthand, we use $\mathbf{x}_t = \mathbf{x}(s_t)$ .
20
+
21
+ Usually, the true model $\mathcal{P}^*$ and reward function $r^*$ are not known to the agent. Instead, the agent interacts with the environment to collect samples and updates value prediction estimates:
22
+
23
+ $$\mathbf{w}_{t+1} = \mathbf{w}_t + \underbrace{\alpha \left[ Y_t - v_{\mathbf{w}_t}(S_t) \right] \nabla_{\mathbf{w}_t} v_{\mathbf{w}_t}(S_t)}_{\equiv \Delta \mathbf{w}_t}, \tag{TD update}$$
24
+
25
+ where $Y_t$ is an *update target*. For instance, we could use Monte Carlo returns $Y_t = G_t$ , or *temporal difference (TD) errors* (Sutton, 1988) $Y_t - v_{\mathbf{w}_t}(S_t) = \delta_t \equiv R_{t+1} + \gamma v_{\mathbf{w}_t}(S_{t+1}) - v_{\mathbf{w}_t}(S_t)$ .
26
+
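For concreteness, the linear special case of the TD update above, where $v_{\mathbf{w}}(s) = \mathbf{w}^{\top}\mathbf{x}(s)$ and the gradient is simply $\mathbf{x}(s)$, can be sketched as (step size and discount are placeholder values):

```python
# Minimal sketch of the semi-gradient TD(0) update on linear features.
import numpy as np

def td_update(w, x_t, r_next, x_next, alpha=0.1, gamma=0.9):
    """One TD(0) step: w += alpha * delta_t * x_t."""
    delta = r_next + gamma * w @ x_next - w @ x_t   # TD error delta_t
    return w + alpha * delta * x_t
```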
27
+ For control, an optimal action-value function $q_{\mathbf{w}}$ can be learned using Q-learning (Watkins and Dayan, 1992) updates: $\mathbf{w}_{t+1} = \mathbf{w}_t + \alpha \left[ R_{t+1} + \gamma \max_a q_{\mathbf{w}}(S_{t+1}, a) - q_{\mathbf{w}}(S_t, A_t) \right] \nabla_{\mathbf{w}_t} q_{\mathbf{w}_t}(S_t)$ .
28
+
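In the tabular special case of this Q-learning update, the gradient term reduces to updating the single visited $(s, a)$ entry; a minimal sketch (states, actions, and step sizes are illustrative):

```python
# Tabular sketch of the Q-learning update above: move Q(s, a) toward
# r + gamma * max_b Q(s', b).
def q_learning_step(Q, s, a, r, s_next, actions, alpha=0.5, gamma=0.9):
    target = r + gamma * max(Q[(s_next, b)] for b in actions)
    Q[(s, a)] += alpha * (target - Q[(s, a)])
```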
29
+ An MBRL agent learns an estimate $\mathcal{P}$ of the true model $\mathcal{P}^*$ and an estimate $r$ of the reward function $r^*$, a process called *model learning*. The agent can then employ a planning algorithm, which uses additional computation, but no additional experience, to improve its predictions and/or behaviour. Usually, Planner uses models that look forward in time and anticipate a future state and reward conditioned on their input. A different option is a retrospective Planner, which uses models that look backward in time and predict a predecessor state and corresponding reward.
30
+
31
+ Conventional approaches to model learning focus on learning good predictors of the environment, and ignore how Planner uses the model. Limited capacity and sampling noise can lead to model approximation errors, and the model may devote capacity to aspects of the environment that are irrelevant to Planner (e.g., trying to predict a noisy TV). To mitigate this, value-aware model learning methods (Farahmand et al., 2017; Farahmand, 2018; Silver et al., 2017; Ayoub et al., 2020) attempt to find a model such that performing value-based planning with it has an effect similar to applying the true environment model. Policy-aware model learning methods (Abachi et al., 2020) similarly look at the effect of planning on the policy, rather than values. In both cases, this means the model can focus on the aspects most important for the associated planning algorithm.
32
+
33
+ As a **thought experiment**, consider a simple model that looks forward or backward for one time-step to predict the next or the previous state. An agent takes action a in s and transitions to s', experiencing a TD-error that changes the value prediction for s. To propagate this information backward to a predecessor state $\tilde{s}$ of s, forward models can face difficulties, because finding a good predecessor is nontrivial, and model misspecifications can cause a damaging update, pushing the value prediction estimate of a real state further away from its true value.
34
+
35
+ Dyna-style planning methods (Sutton, 1990a) perform credit assignment by planning forward from previously visited states (or hypothetical states). This requires additional search-control and prioritization mechanisms. Otherwise: (i) the sampled state might be unrelated to the current state whose estimate has recently been updated; (ii) if the model is poor, planning steps can update the value of a real state with an erroneous imagined transition.
+
+ Algorithm 1: Backward Planning
+
+ ```
+ 1: Input: policy \pi, number of planning steps N
+ 2: s \sim \text{env}()
+ 3: for each interaction \{1, 2 \dots T\} do
+ 4:     a \sim \pi(s)
+ 5:     r, \gamma, s' \sim \text{env}(a)
+ 6:     \overleftarrow{\mathcal{P}}, \overleftarrow{r} \leftarrow \text{model\_learning\_update}(s, a, s')
+ 7:     v \leftarrow \text{learning\_update}(s, a, r, \gamma, s')
+ 8:     for each planning step \{1, 2 \dots N\} do
+ 9:         \widetilde{s} \sim \overleftarrow{\mathcal{P}}(s), \widetilde{r} \sim \overleftarrow{r}(\widetilde{s}, s)
+ 10:        v \leftarrow \text{Planner}(\widetilde{s}, \widetilde{r}, \gamma, s)
+ 11:    s \leftarrow s'
+ ```
64
+
65
+ Backward models naturally sidestep these issues: (i) they can directly predict predecessor states $\tilde{s}$ of a newly updated state s; (ii) if the planning update of the imagined state $\tilde{s}$ solely uses s as target for the update, a poor model will only damage the value prediction estimate of a *fictitious* state $\tilde{s}$ .
66
+
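The contrast can be made concrete with a toy tabular sketch of Algorithm 1 on a 3-state chain, where the backward model is assumed known rather than learned (states, rewards, and step sizes are illustrative):

```python
# Toy tabular sketch of backward planning on a chain 0 -> 1 -> 2: after a real
# transition updates v(s), the (here: given) backward model proposes
# predecessors of s, and planning performs TD updates from them toward s.
V = {0: 0.0, 1: 0.0, 2: 0.0}
GAMMA, ALPHA = 0.9, 0.5
PREDECESSORS = {1: [0], 2: [1]}          # backward transition model
REWARD = {(0, 1): 0.0, (1, 2): 1.0}      # backward reward model r(s_prev, s)

def plan_backward(s):
    """Propagate the freshly updated value of s to its predecessors."""
    for s_prev in PREDECESSORS.get(s, []):
        target = REWARD[(s_prev, s)] + GAMMA * V[s]
        V[s_prev] += ALPHA * (target - V[s_prev])

# Real experience: transition 1 -> 2 with reward 1 (learning update) ...
V[1] += ALPHA * (REWARD[(1, 2)] + GAMMA * V[2] - V[1])
# ... then hindsight planning immediately sends credit to state 0.
plan_backward(1)
```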
67
+ We start our analysis with a treatment of backward planning which, in contrast to forward planning, operates using models running *backward in time*. We may write $\mathcal{P}_{\pi}^{\star}(s_{t+1}, a_t|s_t)$ in place of $\mathcal{P}_{\pi}^{\star}(S_{t+1} = s_{t+1}, A_t = a_t|S_t = s_t)$ in the interest of space.
68
+
69
+ **Assumptions** Throughout the paper, we make an ergodicity assumption: for any policy $\pi$, the Markov chain induced by $\pi$, $\mathcal{P}_{\pi}^{\star}(s_{t+1}|s_t) = \sum_{a \in \mathcal{A}} \mathcal{P}^{\star}(s_{t+1}|s_t, a)\pi(a|s_t)$, is irreducible and aperiodic. We denote by $d_{\pi,t}(s)$ the probability of observing state $s$ at time $t$ when following $\pi$. Under this assumption, each policy $\pi$ induces a unique stationary distribution of observed states $d_{\pi}(s) = \lim_{t \to \infty} d_{\pi,t}(s)$, as well as a stationary joint state-action distribution $d_{\pi}(s, a) = \pi(a|s)d_{\pi}(s)$.
70
+
71
+ **Backward models** A *backward transition model* identifies predecessor states of its input state. In formalizing *backward models*, we highlight some interesting properties (details are deferred to appendix A). To begin, backward models are tethered to a policy. Formally, we use $\bar{\mathcal{P}}_{\pi,t}^{\star}$ to refer to the dynamics of the time-reversed Markov chain induced by a policy $\pi$ at time-step $t$:
74
+
75
+ $$\bar{\mathcal{P}}_{\pi,t}^{\star}(s_t, a_t|s_{t+1}) = d_{\pi,t+1}(s_{t+1})^{-1}\, d_{\pi,t}(s_t)\, \pi(a_t|s_t)\, \mathcal{P}^{\star}(s_{t+1}|s_t, a_t) \tag{2}$$
77
+
78
+ and define $\bar{\mathcal{P}}_{\pi}^{\star}(s_t, a_t|s_{t+1}) \equiv \lim_{t\to\infty} \bar{\mathcal{P}}_{\pi,t}^{\star}(s_t, a_t|s_{t+1})$. One might hope that action-conditioning would relieve this policy dependence. Alas, it does not. An action-conditioned backward model for policy $\pi$ is defined as:
79
+
80
+ $$\bar{\mathcal{P}}_{\pi}^{\star}(s_t|s_{t+1}, a_t) = \frac{\pi(a_t|s_t)}{\bar{\pi}(a_t|s_{t+1})} \frac{d_{\pi}(s_t)}{d_{\pi}(s_{t+1})} \mathcal{P}^{\star}(s_{t+1}|s_t, a_t), \tag{3}$$
81
+
82
+ where $\bar{\pi}(a_t|s_{t+1})$ is the marginal probability of an action given the future state.
83
+
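Numerically, the stationary, action-marginalized form of eq. (2) amounts to a Bayes inversion of the forward chain; a toy 2-state sketch (the chain and the power-iteration shortcut are illustrative):

```python
# Toy numerical sketch of the time-reversed chain: with forward transition
# matrix P (rows: s_t, cols: s_{t+1}) and stationary distribution d, the
# backward probabilities are P_bwd[j, i] = d[i] * P[i, j] / d[j].
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

d = np.ones(2) / 2
for _ in range(1000):      # power iteration to the stationary distribution
    d = d @ P

P_bwd = (P * d[:, None]).T / d[:, None]
```

Each row of `P_bwd` is a proper distribution over predecessors, since $\sum_i d(i)\mathcal{P}(j|i) = d(j)$ at stationarity.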
84
+ **Time-extended backward models** Policy-conditioned models hold many shapes and have the potential to be useful in reasoning over larger timescales. Specifically, given a backward transition model $\tilde{\mathcal{P}}_{\pi}^{\star}$, we define $\tilde{\mathcal{P}}_{\pi}^{\star(n)}(\cdot|s) = (\tilde{\mathcal{P}}_{\pi}^{\star})^n(\cdot|s)$ as the distribution over predecessor states from which following policy $\pi$ for $n$ steps arrives at state $s$. Similarly, we denote the associated $n$-step reward model as: $\tilde{r}_{\pi}^{\star(n)}(\tilde{s},s) = \mathbb{E}\left[\sum_{t=0}^{n-1} \gamma^t R_{t+1} \mid S_0 = \tilde{s}, S_n = s, A_t \sim \pi(\cdot|S_t)\right]$. Other time-extended variants exist, such as $\lambda$-models (Sutton, 1995) or option models (Precup et al., 1998; Sutton et al., 1999), and could be learned counter-factually (Sutton et al., 2011) using an excursion formulation (Mahmood et al., 2015; Sutton et al., 2016; Zhang et al., 2020a,b; Gelada and Bellemare, 2019; Hallak and Mannor, 2017). We defer investigation of the off-policy regime to future work.
85
+
86
+ We focus primarily on the prediction setting, in which the goal is to evaluate a given $\pi$, and simplify notation by dropping the policy subscript from models and value functions.
87
+
88
+ Backward models are a pair of transition and reward models $(\bar{\mathcal{P}}, \bar{r})$ (single- or multi-step, and policy-dependent). The reward model takes two endpoint states and outputs the estimated reward. Depending on its class, the transition model outputs a distribution over predecessor states, a sampled predecessor, or an expectation over the predecessor states of its input.
89
+
90
+ **Backward planning** The hindsight planning mechanism we consider uses a *backward model* to identify the predecessor states $\tilde{s}$ of a particular state s. Planner projects backward in time, and from the projected states, performs forward-looking TD updates that end back in s. These corrections are used to re-evaluate the value predictions at states $\tilde{s}$ . Such updates attempt to do credit assignment counter-factually by making parameter corrections in hindsight, given the new information gathered at the current step (the TD error of the transition $s \stackrel{a}{\rightarrow} s'$ ). Forward view corrections can be reformulated as backward corrections under the backward Markov chain (appendix B). For instance, an n-step learning update from any state s can be formulated as:
91
+
92
+ $$\tilde{\Delta}(s) = \mathbb{E}\left[\left(Y^{(n)}(S_{t-n}, S_t) - v_{\mathbf{w}}(S_{t-n})\right) \nabla_{\mathbf{w}} v_{\mathbf{w}}(S_{t-n}) | S_t = s, S_{t-n} \sim \tilde{\mathcal{P}}^{(n)}(\cdot | S_t = s)\right], \quad (4)$$
93
+
94
+ where $Y^{(n)}(S_{t-n}, S_t) = \tilde{r}^{(n)}(S_{t-n}, S_t) + \gamma^n v_{\mathbf{w}}(S_t)$ is the $n$-step update target. For simplicity, in the following we use single-step models, and henceforth drop the horizon reference from the notation. Algorithm 1 sketches the generic steps of hindsight planning with backward models in a full framework of simultaneous learning and planning.
95
+
96
+ **Model estimation** The choice of model estimation method instantiates the above algorithmic template. The most explicit way of computing $\tilde{\Delta}$ is by learning *full probability distributions* – i.e. estimating the backward distribution $\tilde{\mathcal{P}}(\cdot|s)$. Then, one can either (i) sample the model $\tilde{s} \sim \tilde{\mathcal{P}}(\cdot|s)$ and do a stochastic update (or many): $\mathbf{w} = \mathbf{w} + \alpha \left( \bar{r}(\tilde{s},s) + \gamma v_{\mathbf{w}}(s) - v_{\mathbf{w}}(\tilde{s}) \right) \nabla_{\mathbf{w}} v_{\mathbf{w}}(\tilde{s})$, or (ii) perform a *distributional* backward planning update $\forall \tilde{s} \in \mathcal{S}$, in proportion to the probability given by the backward distribution model: $\mathbf{w} = \mathbf{w} + \alpha \tilde{\mathcal{P}}(\tilde{s}|s) \left( \bar{r}(\tilde{s},s) + \gamma v_{\mathbf{w}}(s) - v_{\mathbf{w}}(\tilde{s}) \right) \nabla_{\mathbf{w}} v_{\mathbf{w}}(\tilde{s})$. In the general case, learning a full distribution model over the feature space is intractable. Alternatively, one can learn a backward *generative* model, sample predecessor features $\tilde{\mathbf{x}} \sim \tilde{\mathcal{P}}(\cdot|\mathbf{x})$ and do one or more *sample* backward planning updates. One might hope that, in the *linear* setting, where the gradient has the special form $\nabla_{\mathbf{w}} v_{\mathbf{w}}(\tilde{\mathbf{x}}) = \tilde{\mathbf{x}}$, one could get away with learning backward *expectation* models over features, and then perform an *expected* backward planning update. We find, however, that a direct counterpart of the forward expectation models is not a valid update, as it involves a product of two (possibly) dependent random variables (the TD error and the gradient of the value function evaluated at the predecessor features given by the model). However, learning an unusual type of model still results in valid parameter corrections:
99
+
100
+ $$\mathbf{w} = \mathbf{w} + \alpha \left( \tilde{r}_{\mathbf{x}}(\mathbf{x}) + \left( \gamma\, \tilde{\mathcal{P}}_{\mathbf{x}}(\mathbf{x})\, \mathbf{x}^{\top} - \tilde{\mathcal{P}}_{\mathbf{x}^{2}}(\mathbf{x}) \right) \mathbf{w} \right), \tag{5}$$
101
+
102
+ where $\tilde{\mathcal{P}}_{\mathbf{x}}(\mathbf{x}) = \mathbb{E}\left[\tilde{\mathbf{x}}|\mathbf{x}\right]$ is a backward expectation model, $\tilde{\mathcal{P}}_{\mathbf{x}^2}(\mathbf{x}) = \mathbb{E}\left[\tilde{\mathbf{x}}\tilde{\mathbf{x}}^\top|\mathbf{x}\right]$ is the (uncentered) second-moment matrix of the predecessor features, and $\tilde{r}_{\mathbf{x}}(\mathbf{x}) = \mathbb{E}\left[\tilde{\mathbf{x}}\tilde{\mathbf{x}}^\top|\mathbf{x}\right]\Theta_r\mathbf{x}$ is a vector reward model with parameters $\Theta_r$ (appendix C). Note that this model requires estimating three quantities.
103
+
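Option (i) above, the sampled backward planning update in the linear case, can be sketched as follows (the backward transition and reward models are passed in as stand-in callables; step size and discount are placeholders):

```python
# Sketch of a sampled backward planning update on linear features: draw a
# predecessor x_prev from a backward model and TD-update its value toward
# r + gamma * v(x). Only the fictitious predecessor's value is touched.
import numpy as np

def sampled_backward_update(w, x, sample_pred, r_bwd, alpha=0.1, gamma=0.9):
    x_prev = sample_pred(x)                          # x_prev ~ backward model
    delta = r_bwd(x_prev, x) + gamma * w @ x - w @ x_prev
    return w + alpha * delta * x_prev                # gradient is x_prev (linear case)
```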
104
+ There are several approaches to estimating $\bar{\mathcal{P}}$, which can be characterized by the constraints that they impose on the model. The standard approach is Maximum Likelihood Estimation (MLE) (appendix C): $\bar{\mathcal{P}} \leftarrow \operatorname{argmin}_{\bar{\mathcal{P}}^{\dagger} \in \mathcal{P}} -\frac{1}{n} \sum_{S_i \in \mathcal{D}_n} \log \bar{\mathcal{P}}^{\dagger}(S_i)$, where we use $\mathcal{P}$ to denote the model space and $\mathcal{D}_n = \{(S_i, A_i, R_i, S_i')\}_{i=1}^n$ to represent data collected from interaction. Learning $\bar{\mathcal{P}}$ by minimizing a negative-log loss or other probabilistic losses leads to a model that tries to estimate all aspects of the environment. Estimating the reward model $\bar{r}$ defaults to a regression problem.
105
+
106
+ If however the true model does not belong to the model estimator's space and approximation errors exist, a planner-aware method can choose a model with minimum error with respect to Planner's objective. Both forward and backward planning objectives for value-based methods try to find an approximation $v$ to $v_{\pi}$ by applying one step of a semi-gradient model-based TD update. A planner-aware model-learning objective is less constrained than the MLE objective in that it only tries to ensure that replacing the true dynamics with the model is inconsequential for the internal mechanism of Planner. In the extreme case, we note that one can potentially directly parametrize and estimate the expected parameter corrections or updates, thus learning a fully abstract model. Learning of this kind shadows the internal arrow of time of the model. The ultimate unconstrained objective could meta-learn the model, such that, after a model-learning update, the model would be useful for planning. We offer some directions for planner-aware model learning in appendix C and defer an in-depth investigation of such methods to future work.
107
+
108
+ Our empirical studies aim to uncover the distinctions between planning in anticipation and in retrospect. To understand the underlying properties of these approaches, we ask the following questions:
109
+
110
+ (i) How are the two planning algorithms distinct? When does it matter?
111
+
112
+ ![](_page_4_Figure_8.jpeg)
113
+
114
+ Figure 1: (Left, Center-Left) Complementary properties of forward and backward planning: Backward models work well in *channeling* structures, with large fan-in and small fan-out, while forward models are better suited for *broadcasting* state formations. The y-axis shows the RMSVE: $\sqrt{\|v_{\pi}-v\|_2^2}$; (Right, Center-Right) Inflection point: As we shift from channeling patterns (left) to broadcasting ones (right), the gain from one type of planning switches to the other, for both true and learned models. The y-axis shows the area under the curve (AUC) of the RMSVE (results are normalized by zero-centering and re-scaling by the max − min).
115
+
116
+ To understand which structural attributes of the environment are important for the two mechanisms, and how such aspects interact with planning in online prediction settings, we use the following experimental setup.
117
+
118
+ **Experimental setup** We explore the first question in a prediction setting using Markov Reward Processes where the states are organised as bipartite graphs with one (or more) sets of states (or levels)
119
+
120
+ $\{x_i\}_{i\in[0:n_x]}$ and $\{y_j\}_{j\in[0:n_y]}$ (Fig. 2), where we vary $n_x$ and $n_y$ in our experiments. We additionally experiment with adding intermediary levels: $\{z_k^l\}_{k\in[0:n_z^l],l\in[1:L]}$ , where L is the number of levels and $n_z^l$ is the size of level l. The states from a particular level transition only to states in the next level, thus establishing a particular flow and stationary structure of the Markov Chain under study.
121
+
122
+ We refer to the number of predecessors/successors a state might have in the state space as fan-in/fan-out. The experiments are ablation studies of the effects of varying the *fan-in* $(n_x)$, the *fan-out* $(n_y)$ and the number of levels $L$ with their corresponding sizes $n_z^l$.
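A minimal sketch of such a layered random chain as a transition matrix (the construction details, e.g. looping the last level back to the first to keep the chain stationary, are our own illustration; the paper's generator may differ):

```python
import numpy as np

def layered_chain(sizes, rng=None):
    """Random Markov chain whose states are organised in levels; states in
    level l transition only to states in level l+1 (the last level loops
    back to the first). `sizes` is e.g. [n_x, n_z_1, ..., n_y]."""
    rng = np.random.default_rng(rng)
    n = sum(sizes)
    offsets = np.cumsum([0] + list(sizes))
    P = np.zeros((n, n))
    for l in range(len(sizes)):
        src = slice(offsets[l], offsets[l + 1])
        nxt = (l + 1) % len(sizes)
        dst = slice(offsets[nxt], offsets[nxt + 1])
        # Random positive block, normalized so each row is a distribution.
        block = rng.random((sizes[l], sizes[nxt]))
        P[src, dst] = block / block.sum(axis=1, keepdims=True)
    return P
```

With `sizes=[500, 50, 5]` this gives the channeling pattern and with `sizes=[5, 50, 500]` the broadcasting one.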
123
+
124
+ For this investigation we performed two types of experiments: (i) on 3-level bipartite graphs as illustrated in figure 1-Left, Center-Left (ii) on 2-level bipartite graphs as shown in figure 1-Center-Right, Right. For (i), the thumbnails depict the phenomenon of transitioning from a larger number of predecessors that funnel into a smaller number of successors, and
125
+
126
127
+
128
+ Figure 2: **Random Chain**: Illustration of the Markov Reward Process used in the prediction experiments. The chain flows from left to right.
129
+
130
+ vice-versa. The channelling pattern has the attributes: $L=1, n_x=500, n_z^1=50, n_y=5$ , which are opposite from the broadcasting version: $L=1, n_x=5, n_z^1=50, n_y=500$ . For (ii) the results reported are for $(n_x, n_y) \in \{(500, 5), (50, 5), (5, 5), (5, 50), (5, 500)\}$ , where we labeled the x-axis with the simplified ratio.
131
+
132
+ **Algorithms** For the purposes of this experiment we use backward planning for value prediction. We include complete pseudo-code for the backward planning algorithm used in this experiment in Algorithm 4, appendix D. For any transition $s \stackrel{a}{\rightarrow} s'$, we use the following reference frame to plan from: for backward models, the current state $s'$ of a transition; for forward models, the previous state $s$ of a transition. The exact definition of reference frames is given under the next question we explore, corresponding to the next experiment.
133
+
134
+ **Context & observations** Our studies identify an interesting phenomenon: the gain of the two planning mechanisms is reliant on two state attributes, which we call fan-in and fan-out. We illustrate this observation in the prediction setting presented above, depicted in the diagrams of Fig 1 and detailed in appendix D. We observe that large fan-in and small fan-out is better suited for backward planning, since backward planning updates many prior states at once (due to the large fan-in) and in these settings, due to small fan-out, backward models propagate lower-variance updates (see Fig. 1-Left). Intuitively, when many trajectories end up with the same outcome, all prior states' values can be updated with the new information available at the current state. This pattern, which we call *channeling*, also abates in part the vulnerability of backward models in updating states in a sample-based manner (i.e. states from which we correct predictions use a single sample, instead of the whole expectation as is the case for forward models). In contrast, forward models are a better fit for a broadcasting formation (Fig. 1-Center-Left). A backward model in this regime would be closer to factual TD and less efficient: its predicted past states would need updates from many different successor states to construct accurate predictions. As we shift from the pattern of large fan-in/small fan-out to the opposite end, we notice a shift in the performance of the two planning mechanisms (Fig. 1-Right, Center-Right).
135
+
136
+ **Implications** These results highlight one aspect of the problem central to the success of planning: the breadth of backward and forward projections; namely, we find anticipation to be sensible when the future is wide-ranging and predictable, and hindsight to work well when new discoveries affect many prior beliefs with certainty and to a great extent. Concurrently, these insights lay the groundwork for the development of new planning algorithms that dynamically choose where to plan *to* and *from*, seamlessly blending forethought and hindsight.
137
+
138
+ (ii) Does it matter where the agent plans from? What is the effect of shifting the frame of reference used in planning?
139
+
140
+ **Experimental setup** In this experiment, as well as the following one, we perform ablation studies on the discrete navigation task from (Sutton and Barto, 2018) illustrated in Fig. 4 (details in appendix D).
141
+
142
+ **Algorithms** We operate in the control setting, for which we describe the algorithms we use, *Online Forward-Dyna* and *Online Backward-Dyna* (similar in nature to Sutton, 1990b; van Hasselt et al.,
143
+
144
+ 2019) in algorithms 2 and 3, respectively (details in appendix D). In brief, both algorithms interlace additional steps of model learning and planning in-between steps of model-free Q-learning <sup>1</sup>.
145
+
146
+ **Context & observations** We now ask whether the frame of reference (the input state of the forward and backward Planner, respectively), from which the agent starts planning, matters and if so, why. More precisely, consider a transition $s \stackrel{a}{\to} s'$ and note that we could use either $s$ or $s'$ as input to each planning algorithm. To show the effects of changing this frame of reference, we consider the control setting described at the beginning of the section and compare the action-value function variants that employ each of the planning mechanisms proposed, namely *Online Forward-Dyna* and *Online Backward-Dyna* (appendix D for details).
147
+
148
+ **Algorithm 2:** Online Forward-Dyna: Learning, Acting & Forward Planning
149
+
150
+ ```
+ 1: Input: policy \pi, n
+ 2: s \sim \text{env}()
+ 3: for each interaction \{1, 2 \dots T\} do
+ 4:   a \leftarrow \operatorname{argmax}_a q(s, a)
+ 5:   r, \gamma, s' \sim \text{env}(a)
+ 6:   \mathcal{P}, \overline{r}, \overline{\gamma} \leftarrow \text{model\_learning\_update}(s, a, s')
+ 7:   q \leftarrow \text{learning\_update}(s, a, r, \gamma, s')
+ 8:   s_{\text{ref}} \leftarrow \text{planning\_reference\_state}(s, s')
+ 9:   for each a \in \mathcal{A} do
+ 10:    for each s' \in \mathcal{S} do
+ 11:      y = \overline{r}(s') + \overline{\gamma}(s') \max_{a'} q(s', a')
+ 12:      \Delta(s_{\text{ref}}, a) \leftarrow \Delta(s_{\text{ref}}, a) + \mathcal{P}(s'|s_{\text{ref}}, a) (y - q(s_{\text{ref}}, a))
+ 13:    q(s_{\text{ref}}, a) \leftarrow q(s_{\text{ref}}, a) + \alpha\Delta(s_{\text{ref}}, a)
+ ```
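For the tabular setting, the inner planning sweep of Algorithm 2 (lines 9-13) might look as follows in Python; `forward_planning_step` and its arguments are illustrative names, not from the paper:

```python
import numpy as np

def forward_planning_step(q, P, r_bar, gamma_bar, s_ref, alpha=0.1):
    """One sweep of forward planning from s_ref: an expected backup
    through the learned forward model.

    q         : (S, A) action values, updated in place
    P         : (S, A, S) learned forward model P(s'|s,a)
    r_bar     : (S,) learned reward model
    gamma_bar : (S,) learned discount/termination model
    """
    n_states, n_actions = q.shape
    for a in range(n_actions):
        delta = 0.0
        for s2 in range(n_states):
            # Bootstrap target at each imagined successor, weighted by the model.
            y = r_bar[s2] + gamma_bar[s2] * q[s2].max()
            delta += P[s_ref, a, s2] * (y - q[s_ref, a])
        q[s_ref, a] += alpha * delta
    return q
```

The accumulated `delta` is the expected TD error under the model, applied once per action after the sweep over successors.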
171
+
172
+ **Algorithm 3:** Online Backward-Dyna: Learning, Acting & Backward Planning
173
+
174
+ ```
+ 1: Input: policy \pi, n
+ 2: s \sim \text{env}()
+ 3: for each interaction \{1, 2 \dots T\} do
+ 4:   a \leftarrow \operatorname{argmax}_a q(s, a)
+ 5:   r, \gamma, s' \sim \text{env}(a)
+ 6:   \overleftarrow{\mathcal{P}}, \overleftarrow{r} \leftarrow \text{model\_learning\_update}(s, a, s')
+ 7:   q \leftarrow \text{learning\_update}(s, a, r, \gamma, s')
+ 8:   s_{\text{ref}} \leftarrow \text{planning\_reference\_state}(s, s')
+ 9:   for each \tilde{s} \in \mathcal{S}, \tilde{a} \in \mathcal{A} do
+ 10:    y = \overleftarrow{r}(s_{\text{ref}}) + \gamma \max_{\bar{a}} q(s_{\text{ref}}, \bar{a})
+ 11:    \overleftarrow{\Delta}(\tilde{s}, \tilde{a}) = \overleftarrow{\mathcal{P}}(\tilde{s}, \tilde{a}|s_{\text{ref}}) (y - q(\tilde{s}, \tilde{a}))
+ 12:    q(\tilde{s}, \tilde{a}) \leftarrow q(\tilde{s}, \tilde{a}) + \alpha \overleftarrow{\Delta}(\tilde{s}, \tilde{a})
+ ```
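The backward sweep of Algorithm 3 (lines 9-12) can be sketched analogously; here a single bootstrap target at `s_ref` is distributed to all predecessor state-action pairs in proportion to the backward model. Again the names are illustrative:

```python
import numpy as np

def backward_planning_step(q, P_back, r_back, s_ref, gamma=0.9, alpha=0.1):
    """One sweep of backward planning from s_ref.

    q      : (S, A) action values, updated in place
    P_back : (S, A, S) learned backward model P(s_pred, a_pred | s')
    r_back : (S,) learned backward reward model
    """
    n_states, n_actions = q.shape
    # One bootstrap target, computed at the reference state.
    y = r_back[s_ref] + gamma * q[s_ref].max()
    for s_pred in range(n_states):
        for a_pred in range(n_actions):
            delta = P_back[s_pred, a_pred, s_ref] * (y - q[s_pred, a_pred])
            q[s_pred, a_pred] += alpha * delta
    return q
```

This makes the channeling intuition concrete: a single new target at `s_ref` updates every predecessor with nonzero backward probability in one sweep.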
196
+
197
+ We compare the pure model-based setting and the full learning framework (learning & planning). In the full learning framework shown in Fig. 3-Left, planning backward from s implies the use of new knowledge about the current prediction, as we bootstrap on the value at s that has recently been updated: $\max_{\bar{a}} q(s,\bar{a})$ ; in contrast, applying planning from s' achieves a somewhat different effect: it complements model-free learning (as it bootstraps on $q(s',\cdot)$ , which has not changed, but may still benefit from the reward information r(s')).
198
+
199
+ Contrary to backward planning, the forward counterpart gains from being as anticipatory as possible, by planning from the current state $s'$. The effects are reversed in the pure planning setting (Fig 3-Right). Specifically, the backward model cannot rely on model-free learning to re-evaluate predecessor predictions with values at $s$, since these are no longer changed by the learning process; it thus assumes that role. Simultaneously, backward planning is more efficient at state $s'$ since it benefits from the current additional transition $s \stackrel{a}{\rightarrow} s'$. Likewise, forward planning is more reliable in $s$ by the same argument, and also assumes the role of learning.
200
+
201
+ ![](_page_6_Figure_8.jpeg)
202
+
203
+ Figure 3: **Planning frame of reference**: (**Left**) In the full learning setting (learning and planning), the agent is more effective by planning backward from $s$ and planning forward from $s'$. (**Right**) In the pure planning setting, both planning mechanisms assume the role of learning and gain more by processing exactly the opposite states of the full learning case (Left), remaining in antithesis.
204
+
205
+ **Implications** These results emphasize that both planning mechanisms work best when they complement model-free learning, if it is used, and both can take on its role, if it is not.
206
+
207
+ <sup>&</sup>lt;sup>1</sup>N.B. despite the tabular setting, learning is online and planning uses parametric models only in reference to the current transition. This is because we are interested in insights that transfer to more complex environments.
208
+
209
+ *(iii) How is planning influential on behaviour?*
210
+
211
+ **Experimental setup** This experiment is done in the same control setting described above<sup>2</sup>.
212
+
213
+ **Context & observations** Our results provide evidence for the following observations: (i) errors in the backward model, caused by stochastic transitions, are less damaging for credit assignment (Fig. 4 Top-Right); (ii) backward planning accelerates the search for the optimal policy in the presence of stochastic rewards (Fig. 4-Bottom-Left and Bottom-Right); (iii) for extremely stochastic rewards, even backward models fail to capture the dynamics accurately enough (Fig. 4 Bottom-Right); (iv) model misspecification affects forward planning to a deeper extent.
214
+
215
+ ![](_page_7_Figure_3.jpeg)
216
+
217
+ Figure 4: Information propagation in stochastic settings: backward models can propagate new information faster in stochastic reward settings and are more robust to randomness in the dynamics. Planning with the true forward model emphasizes the issues with forward planning (planning with the true backward model is omitted, as it depends on the constantly changing policy).
218
+
219
+ **Implications** These results emphasize the potential impact, in environments with high stochasticity, of a different pattern of reasoning, related to counterfactual learning. Particularly, an agent can *project back* to a potential causal state and *rethink* its decisions after each new experience. More investigation of this idea would be useful.
2110.03618/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2021-05-16T18:00:36.800Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.0.3 Safari/605.1.15" version="14.6.13" etag="uzxzlnHi625XFS8qPDjf" type="google"><diagram id="bXdjpW2gE3ygbjm1Zuun">7VpLc6M4EP41rpo5xCWEwfhoiD27VbOPqhx25iiDjKkB5OURO/vrVwIJJIQDcRw7k118ALWkVqsfX7fAE9NLjl8ytN/9RgIcTyAIjhPzfgKhYc/n9MYoT5wyh4uaEmZRwGkt4SH6B3Mi4NQyCnCuDCwIiYtorxJ9kqbYLxQayjJyUIdtSayuukchXxG0hAcfxVgb9lcUFLua6sB5S/8FR+FOrGzYfH8JEoM543yHAnKQSOZqYnoZIUX9lBw9HDPtCb3U89YnehvBMpwWYybAesIjiku+twm0YzrV3RLKgQpYPPFd23+XRHTc5ZVNlnQAtPbUsG7bT59Cfq8YbV7FxRNs6B42XdaUVospyFBZC9K9UpegDfewiwr8sEc+6zlQt6S0XZHEtGUwaaI49khMsmqeubXYj9LzIiM/sNRjVxeXnzsmtJq1H3FW4ONJWxiNhWlsYJLgInuiQ8QEg3uJCAvh7ofWxyxusJ3kXhYfhrhXhw3n1vD0gdu+3w/MHj/oKjMNlix2aMuPUZ5HvqpDfIyKb9Lzd/oMphZv3bNNAtF4Eo2UyvlNbkizWLOdVrXEvFo4HIRqQOakzHyseHaBshALRY22iKxxcFrjGY5RET2qQvSZga/wJ4mqoOIGhzZUDN7gm2BR74fPkqO4y8hRGRmzDqNaCxqjyiuabY9ylNkQYPhNsLShbILqkknDkGDYA8CS71H6Ikag7qjHs56UZAmK+3EHlTliG0+wv0NplCcS6NQrC2nAdqREvQuDsdv+NPHgxKUphtoI/H62Ek5qsxwF1Cf5rGT9lH1QrWptiKwC+8BmudMxAbNw8wnUShK3z4PWH22EzycF76IlxaavaINjFSRRHIUpQ1CKQJhK7LKMEdHKYsk7kigIGA83w1QctKn4Mcjbs+CtwtlyJ9Y9t4yUmhzAfiI1cUlMNVUZLHXFTCwX+T/CjJRpIFikJMXPJTJeTHGR5NqohVPz2QR3B6bQtucKVN0Zl4FSq8N1pnIg222OX4t91nsvluQY/ODFkuk4NyuWbM0PmmzxFaMsjdJQ024VajjgwTykYSliHV3ja8dbeZ6mcR7AcvgbF9D0naPGlrHQNd3QZFV3i5BzVD3XVL1MGV6+qbr71arawJ3fW+v1FdQNLeN66nZ+rmNAF4kcH/t+HxKtlrbbIJFE99bWylqePlC8m6MCtfi04xfnHhbM+SCryx0XFq9OmVUppmU6WnKDc6tuPfmy3mfr0RcvAU6XlX25HyUMktJNzm5fqJuw1by6fN1nOIh8pqvVgJSDGf46Nen5tefz0XlmTercrCY1O1zhG9SkInRHQjZX9PvA6wBhZ+tXo4J1xLZYjcop/BR6jqnIfJjRB/IA2HZVFv2EYD6/HJh3WV0OzA2jx9cugeZnvczQuIS3BXFQdwv0XnXR2/uPoXcnHM9EbwP0R+oV4Nu+BnzrH2B+TfdloVmf6q5QDT94SuGk8Q7Sd0xSD1LyQR5qh58zEvgLvo6YnZfloOfAb+qYDS9wLDL0zyN/lMX/Zup7L7MwpsBctJd9O6vp3yo+UGwNw+tLXqaB6Uyy2WKmGs3WX0G8mdH0l6wfKdQuaTUDWtPFTaxGm+1fFOqs1/7Tw1z9Cw==</diagram></mxfile>
2110.03618/paper_text/intro_method.md ADDED
@@ -0,0 +1,7 @@
 
 
 
 
 
 
 
 
1
+ # Method
2
+
3
+ For the meta study of SSL, we covered all relevant papers cited by the review on NLP SSL by @sogaard2013semi, though we were not limited to them. We also went through the leaderboards of many NLP tasks and covered the SSL papers listed on them. The papers covered by our meta study are available on our GitHub.
4
+
5
+ For supervised DA, we searched papers with the keyword domain adaptation and task names from a wide range of tasks that use supervised DA.
6
+
7
+ Note that for fair comparison, we do not consider papers without a comparable supervised baseline corresponding to the SSL, or a comparable unadapted baseline corresponding to the DA. We do not consider MT DA which tackles the out-of-vocabulary (OOV) problem because $P(E|C)$ may be different for OOV [@habash2008four; @daume2011domain].
2111.14792/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2022-03-06T17:23:54.402Z" agent="5.0 (X11)" etag="jJ3yt2uD0hEmi9CDu-qT" version="16.6.6" type="device"><diagram id="j5cLgJ53NZgyZqXsqwoV" name="Page-1">7L3ZkqNY0i38NGX2n4tKYx4uhRAgxCQJieHmmJhnEDM8/b+3pMgxMiurM6u7v89ORFekxLgH9+Vr+XboP/BtOYvtrUnUOgiLPzAkmP/A+T8wjGIJ8BduWJ4bcJx8bojbNHhuQj9tOKdr+OXGIQ3C7otNfV0Xfdp8udGvqyr0+y+23dq2nr48LKqLL2/a3OLwmw1n/1Z8u9VKgz55bmUQ5NN2KUzj5O3O9GtHeXs79rWhS25BPX22Cd/9gW/buu6fn8p5GxZw5N5G5Xme8J29H9vVhlX/MyfcfcdViwr3DXKbm8z/TU8k+if7vMp4K4ZXf1+N7Ze3AWjroQpCeBH0D5ybkrQPz83Nh3snMOFgW9KXxWt3VFf9awpp8PVWpHEFPhdh1MO9aVHo4Ny0h7aAI2BT17d1Hm7rom7Bpqquwtdxb5v+wHCW3W4F4e3qr5Yh3/b/NSRj2Pbh/Nmm13iIYV2GfbuAQ157cRZ/njJ9mlkCoZ/bks9nlXoN1O1lTfHHa30acfDhNejvT4Aor+1uU41cUWmH8k9edXHuTwL7OzOA/MQMfDZ2r+H8cozBgDII/P3+dPlgSENwLAeHMgW+sHnt6Gt4tw5OYRWD739Sr4t8dnHq8QO2x8Wt616N/mj8jy952PvJ60sTtikYyrA9f7wq3BzcuuSj0cEvxq0HB1WPLRiCfmUN73jD37YG4s2nX0CF4R9IYHuffrBvjAXFyHeMBUX+IWPB2W9sIwwAWr2+1m2f1HFd3Yrdp63cl9aTDWXzmnEwFNynU5Qazu1juLOw75fXQbehr780sHBOexte7QPLvr46j68o9babn193e3xZ3r5UYAzsz788zyPfvn467fHt7byvjPR75hZWwQbiPbRfaHmpbyZp9dwhpMXH5ldvMYZ4zzWQx8/HPW+Q/9Yz481aX237rtl19dD64U84fn9r4/B1biL3HjacMe58y+w+LULR+b9/Ui8TgzP9QzNuw+LWp+OXces9G3ydatQpaPRH8ydR4gvzJ9APb9b9dpFnp17nfbJlMOy35bPDGnhA94M7YV/dCfnKM54X/OQnHzv5CziL/De4zv8CD/jY/I9+T/7Q67/rIX9t+MS7hv+lb2GSsR32wbbaZX0YJHq0dsyfKP5v8RAW/cAgGPLxB/spb/kXLPrd4SF/PRj8d5jw91jKjwzxJy39s1D1ZayiEfrHsepfxfp3p+pnLZ5k/j1QTxPoF4aMIshPme73gf53G/cbeftduuQvma1X931dQp/4qE9o5FspIiAC9ZAiv4FvUij5gaG/oZzET7JMhPpAUL9ONN+VhcxvHv6/loWfjTFP7phHaPpZufgrc/HZMOPMP6X83h3jb4VfH3bA3YQGiCBMQIFEoz88Pn818qBL/d+07jINgif+h1263ryXggbq6+nB4Lok9wfJw2sByO+eU4X+jD5/2/Rphsnf4x4E/qVrkO+5BoW85xm/PmNUxxF775QS8W4COrZXza79k0K/mYt/R8R9BbG3zz9Duv7VKP1TIe5da34nxL07hm/psX86xKHkl+ZDAjVDfablmZ8KeN9e9g0OPlHAL3MEX172ORy/LJhQ9MswQdJf2fKvCab3jf0dfPoDowoIPd3gfeEG1H2o+xcI/PkEjw1M0mDN/DCqt/3gUwz/BTYAGoV8+PDh+eHtuqChj0s/j/ptt6vev/7bVq/9pav/rlZ+r5G/FHe/jKxRFGG+/x7XDiiPIt+SeS8oQrE/vkyyYb8B1XGU/Ip6Eh9w8jMHor7F+HfYDzCe3xGY37d8/BvLH/+9lv/bbpf/P8t/
WT7jh+9bvseQBIn8BywfB1yfRD77eYfdvG/5+A+i4a9ZPvnXvP9b6f29BMEntuF8vu871OOTViewz7U6+gFB/iKv/He0+l/yjs+Gn3yHXL5t+1UFjn1AqM9gD/sSFEn8A4EzOPO2m/7X2ApGUV9eF6c/4J/Tla8yVr+JrjBf9eZFJb7fSvqr49HfSm/aUpaQCDs59+lebmyFGPXbn+8Y9n87lf/Sn5C/8KefovI/pOh/zeXxf4TL/212zH65bvclO/7LwynkFw7/0ji/PfkrVyDpDz+3pPK7zBz/n2fm/4Bi/UUzp/43mDlB/Jfa7Q8n5wsm8mSJQTr+NNv8LtWEB8OrvFUbvR2H/oDlNrfqp25MvXdjHpATvwek8zOu+7jgNxT42bRvNj+6/eXWnxwJ+kcD8Xkj/j/11gHFgJz+3Gradxr6f77XpO+S9Z+oWSluXlhwNz+PH6d9P7X4IOY/oPDfIf2/gb1/nYJ5K8X4nK3j39JF/Cvm9ttKQf4n8heG/SgC4Jl/IoAD43+B759NPvPHN0vf8OexHPMT5UI/vToObv8vLf39kCP9dZRh/yuiDPaVnaPoD+LG7yrO+F/PUn62mu1TWcb3DP3fYYn/zCr037ZEDPnKEsnfaok/HKRf4B3Id7Jc+vb0SPgJ4K/e3nxwmS/yf/+BaPrtiuoXqeDvJI9/RzT9ehmF/fBO+uutRO6L9Bf7zyV+Uex/PRD9/bToLwAO9ZOAQ/135BG+TtD+SDH9NryhfovOeQ9vrmk33IrfoiC+e49d5dcBLNr+vQsKf4GGXXJr4Memrf0Qlpj/FSJ6H8FQH/oircI/3ituAFbPfU9JkPD3HS3yTpjG8W9A4TcgJoZjH740Twx4O/Ht0vPnguSdehaC+ueWD9C3ut7/ISWJf3tx9GdUxr8A5r8AsczPQiz2j0DstxhKfLW8S/xqZeF37oMw/wFy+F5Z3G9ISpnh3H8G1RBEf2P65/17/gJy//fBNk9x71PW92D7LavwHmz/I0DNoh8Q7L8KqN9yDf+6ITN/EcF/qiDgexfZPHd/ZlYfiwLOQxyHXQ9A+He4xvd7gdyqbgrbRytwOJz/hFD7Krf1VVXsd2s436m+/C2S7CtExYkP6Of8gvnGbEnqHX2GfqB/Q87z3aJw4j9DLz4WJtBvYdz5bNf7If2zqYW4VNUKFOWfzftfPfyw2wBMo75gN//A+u8PHzj7y8cV0H9mYezb526IL8sYCObDmyn83YoIkvoyqUR/RU9+UwUESX+Z3yB/vMZMMsgPDv9lMvO+M1H/WWfCft6ZfsMzR7/ohP8GZ/rpAqV/NglCUl+Sl7+ojvjFwxnm+4b+jpd8CQMk+gH42WcB6ucw4bc50H80WfhVNEJ+0oH+Xk3eVw70vedOv8/Hf8FviP9hfvMV4r89UvezkP/V8X/TFVCM+sD83CMNv838yf+s+aMo/kWKBUHYn/SBXwoif98Hfu+jpD/tF/8QOfv78eSHdv7bj6d+JaL8s270fs70LTP337/m9PcWsd+p1oBB+jsPdH+VuAnRgAzp99yKpWj89iNd9Fcx7adc7/2pQt9Z0Pp3Psf9d32Ppr+u5fjBc2PfnE1944n0B+QHz7r9rASjmS8jJQN8Dv32AaC/UGS/zf/QX16BY97LiZ7C+LelqL65uKoYfy9v++5V/nZK6yceu/m+v34NDd9z+M/fDfX5G6S+67d/44F7AO9fP30GtiA/fPoMJz+89xTOP7aG9r//IeN/YxwBRsUExHt2yWAe/ovSnnyngPx9Kvbvrm/4LvZSFPHh80fOvnwNCAEENfPt3r8L8BTDfCCoT1fBvrwJ9sOb/D64/9Fj6j96wO1tmSotH2+G/JiHfySPjLpL+7T+4o0h36m3ei4pg58/vvcevVvXPN9YGaUzdAjuccvN21bkbQv4HNz6G4Dy59fXKyK26ZXTTxNyEON6A3608yXZXeLNhtM78PVw2G4c8C/XkdofGMdtNrwT7Yrd8Xoi
Kn0NQnu+sB4wTYEJQle2uyWf8e0uzlxMCXfKUdidwv4s3K43i0LRm4fewaFWdbVaG920GI71vQb2WHRkYbeyv+JCMMnWwiU721I2x/154WlXY0tw7wg5ibvddu/GkxXQxryoxsIX1VrNpZxGSkjTEc1EU8jPa2yrx93p029FK0epP/Zsibj79NOvM8ro5vzZll7WHRf7tGEXj7UHxufjpc55gXKOvHw6BMm7uuXUT4eEeSFoN/nTRTNkF7ftZ83ZhXEpBJ/ft8V2U1Z/OsAVwmMpBp8OSPYKJU5m/umQRg6OlhTtzsB7OXY2VpIhc1j5v5I0O6wNWhmPMkGuh3PtjnDdmfNWHK/GIOyptqWpGxxVo+rwx4FCNIDv2PXWW50djXhJsRVpNHc/QPWGJIi4KlcQxARjfB7O0hONw89RGDJLZJBs9NzR2uCCXAL+u5XNveRZeFISRJGxw/WclaQNLd1MFLdtLAT7Ho8rrOAP6AFdUYBggM83pwiQxMCBmhodPCAI52K/rpQzLHs9g25EN8AJMAF9tNs1cNhBAZ2njRuEYZq/ei8ai9oeqP5ArsP5Ru9wo3Neo7pNuw3DZ4zHwVNtZSRnlmF2Il5lTC7vzYmRQOQArcNo1psjlhxhD/dOcSbd4wU1zBruNGGfM0CKuVkNpZzcg4kdz+BrcXSsXQJnYU2AIwsVA35oI3K7KEKZC3Zby4iffH3GQjoMOhrOU9ajzSJvEnl7KXI3dVINHdznGIVjtFIozYJ5ufWGDR0JsDIuJ4XkaRLUsevGlIo1Nl3c4yDxCb69tfYZ7T1RJllmquFos1sD9slYKDKacdy+2AGLolc69Cn0TIXIKbZxFykEebxeXeuUa/yE5UFIP4ylpZpBiQm7wlHUIlgXgzMieaMp2ZF/tBtqAI2cnc7u8Ba9tYe1mx82WS0GHrKbsCaFp/Wa1Ka+opMiF8u0JWB7FAXcIFzWlUrv91IplsbyA5Yk8VJlpJgZx+h4BlqEVnJUM/0lNNyeiydvJupu9EcDPmsiGAGNU7aAkkR8AvYm74CVL2flOmAXQT5POfu0iUKzWpTWBBSMyRXFcd8qCpQ4TIrbY3j2GG0ujOQVjpMEn9HgIhCyRD8c+n6V84XbHLbxnQA/dLofFGJ+2j04buNvTpcr/KIGz42bcO48kWE1WZHXRQyfV2cLFqODSZtgyYDvEyR0rK1ig7/L9gGnNLYecdgn/WF0mELpSb05UMAP65uYNAR1s+x0j2lkQNI0ZitFivuRYWgjcBqMTnwFeDwJ3crEtVW1n62hruCPsjJY0dzOO0pvWkVpB/UO80/tOcJYq4VHGOcHqszwzrD7vKqBUMEZwKIk0zQvS6An1gaelY4tFTZOZ/nKusK7QXMfcVo05XwrRb5/DE0ZYcRJRzE4nOGzHSRiwAOXIzm4JdktLmBBXMWCSQ7hEWtjwjFYjBhsAh8QzFxpUhbAnJpXNLABQCn5uWdHDVeKdYgMXaeFW3vh6VR53UG/jAJ3vKhidtwrNI3DgQ99nTbDC223KGxpujwn4ywJcxcF82sKJWPZsIaLPO68nKE1Qz9DOQZAK4NXb/bxwELDMGGHWxL9aAOJH0VjnAjAELjNcadKXmfpYN4mQuRnkrHurGJfkcCEo8Vq4mi3d6tvHfyCCqdhCC/WpdDMjZehaDG6I/Ckh0gWDGSwlRIAc4Wo4npT9Wc7GOBf3Np5stNbiOtZcw9htHi2Bnv+Q0SCU0rPzyPuB5qmJftQChGdbi6DzYzArVDNuubu5rjhsrlek77vh/1CDHyZRvo4dh1EyMUHfwIdgPot3ZcyEcjQaQVgMVxm42xwPNWeuOavOxHc0oO+namhaYGnBiscNcv2IRIWxaRPAXSEMBrPmbQKDZKaZ6SXiWe3MNw3Yz/jOjyrxnF0IwcT0s2FJsWGVRxMTcZoVL3rHb2dbG5S5cD348q2cYYd8Ggks+dUECuz38Zt2gdByBAee0J0HI4bRcTLMAz6AbFh
mKoQpAAseDwagzIE8EQ8u8/+4CpVBUNZCNp1hrMyROYZRB4EuQTyGRkz5AJcLtyBKImPBgV6iwE86my1WpkoZex0dOm3mClVsP96D10KaU6Xk8hzE2Sfawa6px3y8y1Ch931Fd1N0xGLzjPOwxa3dT30H6NF+hueZhjVyMCorKsZkDAu3sTmvk/VLGaU7bEgSdK96NC4qrKEBpAG7SBnNT34u5N6ATd42RM64F5pw3k1gJGulN9bo+1Do2fZR1w/wqG6L65ahVQ4m24NpkjI5sokJZV5zvE6z74Nzjbvl/shrqWqrfem22Hq1lEtY7+qS94eFhVejUZbcDmzIF3cgojqDbgPosV4aflkIqbD7HZzTAfpBNzWZij4YIMwDQUS1uw6sDjDOLDVpDyva7rrFQdPiVKGDgCiMiEdyI2UFcU1IJ5WNvosG1Xwi/Qgp77rRxJ9lk7rBdXOYG65rbn4Z0nC6Tq6IZ449ySj7iTMI1l90cRsu5lVV5KkaugWmNuY4ThxiZCit1kHoZWuuq4bnCfrSBzrgTkvzE9DiYdoInqtv3r9RGzECRMfAORPJEAlKQS0HmOB7TzjgTC/qBMeQeTuCJZFl7SmNBCQCIAUYJjF6goNSSBCg+fgKRUIP4yrWfWldEuX1pYH+xuHAXYa2BYDy5u41XzgzAMnS5NLNtWI0br4uC+RaGto3yiKTAY+dqBl7Yzw1RKWhzSs6mAmmyPIoqo6KYIv3ecezJEJDUB1sGA9w/jAHDsYozc8vBej6qPVWsYA7ukHwfWq8UeE0V643FwfyI8hE65S1YV89n/UIzDBngclBqInOe4lc+FaF3jz8GwaBkQgPxhAnANUi6Z5HXjv/pDEkzvdXa2kVykwnFvltwvpj8AAfQOOyuPO+uGBUoplYcx864/1Ltmfb+LcyVu/4gi+JEt38XF1LhlcLddoLPSIXZ54zk0DhuMM2fcsLmpB4B8RUudb7eHMt+12oWtm6LGrKCR7OHDoK84K8VGVzsBMJ+wwM3v/FZPES2CbwjxDSzalK6pBgyHXKpiG6/Jgmp6HSyCoOJOnAVIAu3+bPVg1xQFGDoh1H3nzm4io/JhjgYUowhaEc6tOtQyglHy07IWxPI+mGykokGBjHhFX2PIrTewVrVxvBxBtMLCfco+Er1c2r0+OKsHbg9iOqUvgLsI4ercbDamjMBEXGTjq1dpI9HNQWDbrrmndHiamZoIZEHkbBOTiPLQtRVcAWAdHl4S0YJiHXZNMNN7WapL6vQ94GsuqZ2RQerratjiNHtDbYKsPgRf1ug4w9w7Y1V6nGAYBEfPU7Kyivkk1dOU79YwOYh3xR+taNHa42zHcY3Ygwvr8PbAlz3Wvp+v1KtysvPFtbnlEcoJCDNp3sVOOaADs4xcX3BmHhRlgx7hU4rlqS3SWoh4nHrKLLWQIzLQnAp3OgL+PBClrxv1QNvtz3qTnMIq4g7mExyAP/Cs3Ef6kF8gO8PEpMhtkztkwO61kNyUqvz0uarJvBafdwtCiDZjnm9KM91LG4VcsCIxzXh96lqQOK7Cv0twDAhXmlgnUHEK3UQNs1ACclK1WOZ4NQ9nvp00up96y8obBaIAF4mAUBTkWB9hfkhLS2XngKtQVpwwndtNBFlIiVVJKLxrn+rJKNZIhfeEi26QZgG2l2eMKjKDEdGSdAXcAgjnahgJW5ljnPNlvN/HBAf6/XChXJQKAyJSF9c0NONZRX8nyWlyuod3QuH6sdb707LYAPrSFXuCE1bkJ6FMQbMINvAdFHTSoq49l0x7GKAoXhgp0QtJtSjNFEEvEHdB7p1PZkZXxUrT+aSbJ5HBVClqOp+N+GA1uW1UAlrzD0nXZfhGhVORGC+mV4+QwjurxnneADfAu4RZ747VgEg8iTzKhDCWxbikFcFcI9MmSDiOpeJ4XGzGM8qQHvDLrC+UWKV4brqEhpKvorJDtu+K1UC1Ibg2RAtrU9ZUEWEbkuHwd8oD+3KHYHIYLuDII
/A/nhTjacRLrsqwmSBuNOAWWAaLODtMa74wU+6eh6xvea6tCSACxopEE66WmBLqq8EtTxn0HrRFXhRiRH49jXN1IKpDrqt+xgXG7D+1A7TDcrBb5bFbh6FyuEyCkklryIg98t4uAk+O0nM0TOZbSloJqlk2dUt6fkS47LIPCsiSdzQgdDNyk8zUzM+lpP4mTtV12Se2eH06GdS2AWQ2vHfdmFc3GyCCoQ1UMkd2I120yixmXcs4p3kDeuWF5C41dRnBdtTSV14F1MML50SVj8LMkX5O7AKKsTAOhBFixD8OHgbOiyT4CcRAEHJB0lzKo7sPawjvmzda8ATN0xmoidTwa2I17PM2s5OJej8X2JksQetDP5zNBe9D77Y0HPWMijpsdN/k8lLiAZh95mtpleg5IUMUXYKaf2hKqd+hWT9bfBUXXHCwQ4MAYYDuHG+lnNxrgDVkBjxyrShHKhZMN+QSauhtg9KeyE1S7AyMDYlDiazX2tQmGPD0fC9kV7YIJdckcGBvpAamQwljP6huOIxD4KG/cOUBXrRuPd3gbh4+BHrbM2FBWkUANBK9cVdWg0/rcWUXvWV6xyEf+wWLhMB98RmbUOMvIbVjVtERQYtnUN7Cfuq0rTWcvB3vivFBiVhjRgdIGvXkye0mKEtSoVopd2EhfPLLiuXyZjd6XOCQQEybi6SMubXKVP5yh9SHj6r70/imP9wH6SvaMLuwlUGuLUo4KQT8Imh9BdaHDseZkbKBxzxzh/Dgzou3MZqWhP0yrj2tQVUkK9GGKPx11A9vQ4llvo2serXFk4WBKThbQ1QQNKMQVyE+qcLpxbFJNXMPF8UEYnlx5Zu+Ef9afHVYvoyqP2irXaDr3w7DfAS2xnJauPewYcvZ43XDAkX7ABzAPMvZoqnvqiFvGSGLZ3ufPe11qopmlq3tIV67KNhdhc4TMjVMHSzmrtE4OgY6H55LsFa8Eympdp0kS2r19yzs4vVGBklQKCQRDzOwTE91CPuWOCNUvKQloxbSneT2hVAGUoi6KF5bG0Vf2oXNZPKhY4BM5sNRpVYQBy7TVt/f3QwqkCoKw7mDvi3XFcQkoXj9aGXJlq7me3Fa4o2mKen2zjSrkkfAALAtIqayqxkEPRb8K8KqLeoO9NuBeORvVmD4umNGtgDU5Hn8rTXcUy+zaWgpKaSFggGeV0OXUVbEw7AZqNKYbjfsEiqIr75IsOzcEG8L8wdMozjCV56EWjPM9+M8yox4hMWvNSBZ6dwYZ13ga7kJrL+orgScvdSwCCoSf5mQPWN0ul7dHn4X+WAC/tE6VO6vA8/ZB7Wl39HyPo8iEJBOIbn2dRxeTLORcQL0ZVO5RGjaMokPDclZFgQ4NrG32n8izuaV9CwHueK1tfLirSDSyLHzJ8gzbHMIpTC+Ih9IsSSAtP2A0GA204FzLLzvLM4jVgiM7sVcbTDKHaAZCwwS/sHnEtenYsIZsmlEw71TD8E+CgOJ5ZC5VSFBxveXGHRH2Fh1QEgSv0+7qBVSY7msRsGML0K9E12EbJhL3hIyDHxkmbiCKMHQVPhgQ4LLjHkuy+MZXEWIlkHcA4HhlEjb02ZiVFuBX5+q40/lGa1CBeHIhmGH0WYWK6FQjIRBxkfSWyTohV2G2yoDATwzjX+xH/olZbpozACy6HwDfY3JUzE69BELFfbC9rlagpKpIN4iaFsg42H3mkQ7JqeAAcxfGWhnTI2XTwI4o6aXgrDTiqwGBTJF6XmW77lxeJiti1CuVbAP+QA4e7Cq30pRx3gxASXmXAoXZYzDIhqquR1QzhWptppVKZ5bVx+P6UMAxIDUq3gZZRWrSM3PCkZFA6Al6DSgvipAXjh1ONK+lIFpTOMYOOfGR2wvzDUWgJet7e8Se/OPKQE3piqfc160r5FuHh7LGNNhles7wdVnOUYd7AzSJsaVUuh2TY2d3JWD1GU1RW/2Q1g3QB2uHt5iHaYkz14iXMNd0fz7midFS4UCR5EyJkL3C
ffqKj4hpzVL1ANZAOSNtBmb1kcRIIAC3qGG0dY2AWdqXQARN282kNbeypkgvB5y73vUb365wHBOB7KCxKBiqy6YKaOwBVATMuT4yQBBXR5gdc6mA5mse6PmQNgxpt1tBz7aaH2iCQOp2EZgZHBG5AwRpXsOwLAo2xbxT97T9iYmkeTSAel3CnSMjbLPczPQGGB62d7isx+n0WB/2QHyz/jbkB02qsD4YaH955G+vAcxhY9VFF7PzHjIuvGpb5ebqVdj0gXSgIHpUl/tVk83dnKMUGeASME8AsfKoNA3JpuLGUYSFYGizpi5o4S5uP455YV0TD8RHaCSZg2m09oQrmAUEmlm9gTjWHy8c/9AYjxUISAk531YxZbcV2SwpK7OprsTaFo9UkAJL/YCFQo8MBq9d59OgZARbkv7iDwUxPsbwtAukkNCRy1o+wmaSU0N7h5YqGhEyp879gOMlUFr+jY7YO3ILkk6Div3ySNG5m3icZk4zdlzsbHccfwTCpLldNjbQONw17opMo/AbzCbfX0xzrxQgApwqYVYvwElcNNQRXeT5MqNX5oW3ytk4et2i3IHNLPf7vQZek9XMHvIe7thZGx3LbmqrLM91HJLtm3Z7hGGAGTMa9B+/V0tRguhJBwEmp3s4aOuTdAg8ELIZ5gHtMYMRtE65a/CPVDhlxfvNRuEc5XF4ocw55nEJiE9LArNL8mWwAqkNaZzGBwbo6yuYg1AIoMHROkQ3FqZGjAzsCOwOM4DEoWja9Mab3JvnS69A4tS2LeNPPEc+WyPtMI+l1YCmV+nGZ4jlBEMENDGZ08FCg/B5eXC8LDsa/c1BEZJjtg/fBpIZzmk+lTvsoZVPLjaQ+GNWGxyHz4pw9nM1LstW+pBscnNRcc0NrPs1mEF8xCXwUVhhoDVR7XSRKhxIkJsq+saTyelviH0EqMRtgD7iNqvbTRvHMIEimGm6faS+omKmtG2i5Og6DmEYPvJrSwQDHEFCvqfoVUMdJ0+rFzGJvTGs08pJYvGVrwPsdDTU+rCM2VuufL8FuhfooSg40uG9o23Jvj5bje0Sx57Roh+BgBa23c2QXklC8D/1+c+Oy92zxcV76clyIHBoGCCDuNcRim3bd9izsSFgE/WN9FhtOGEhpCRCVD2yS8SRATfYbFLG3gIG7T1wyCdglRpnliVcZeDooNHJ9UKVIfuxCT3QGgcBJid3p/15yhGAG88Y+8hHzhafAWKkcnBNhfa2mYf4gIm3O7rHaAxEdQ1aJVRvCgc0p/zQ2ChGqHPYr0I2j3wcT8wSGlwcQD8M1xvHxfRqIAyMWB6JN+zMvIKI1Ei1JxFovlMMF917ePA2ozFbdO4IZjEgm3bf2Qq4NdVfGFbj+FXu8JXxr2mejIz34ilauUbrGLJsNgO80oQqgKwHNwhGFAGSj6XneDPZ9CPTn0wTl2TZDSaWhdUkHDVhb8vJVJBuqqPnXYH2p4ezaa5EovHnywqC/h1IycWngRdv4nhbmM2YUgYnNZwLV4VLEJ9aoLkQ+nof0eRA9S458kS4eaxb5UpS7iaPJRi4hrS6NbpRQdTxh6p3ytcaeyYJ2i1oMoD+GwMclk7O0YDrWfwzNsNc40sb4K0tzfopvmFpXR+2cXyAfRQVL/KP+axmO+jIQ8Rvt0s+PhYUHz05LJ0SHx56Mp6cg9nGXf22tD7er023/XwtHt/FIjLtP1ufR/pv1/i5YbkFR+aVkOJbIPgfSO21+sdVepPa16NAxLmynbqZVw27cTLkMGnYara8Kib1TeVyKhv4LTUAOAW3MjhUPO/57E6wO5gdPPbnKBXpFjtJS9WiMC2pmRd0haTiuLMWYhNwi3neT54qeNd8Oe0BAScZXLltAGrCeBS4XMggqT96DZYYM39F6X1LJTcoHx+3aM/sNTBh4DCw4AifUhWuUnS+w8gQX4AzD9IcKlP0eeXE6GObbdh/XmRB6cQV/eIYx9zqyacD8iu5/W+prpgGpd0MJtL6aTsJ
xiV9LGQlj78Z/Cs8/l7NPX/NZOjqi8H0dDQyWbniPXmfyx3ijCkZfrrskWq87cCj5clmyjAd0WBs0ETUMQ5vdjGNpfsAW2MSY1DJrERc3c3N/Sk/o/UIaaTWzFHhmFFyW4/5Z+NmsRvnikR+Tt4FRVMfU4YNVtL4rNQFmraQMISGPLT3U2Qhg9WPPl6iWnuhblY1YEas6yIrVe4KaMUj1yJM1M1ctA1txIRBdPKEbrrDRy9Q2Iw5UEZ50s/pafZtAfQElwhKulIWYTDyBO7Q3j5CKxpDGV3PrMCe4IqTeTllBGoLg9Vgw46g25tNS5nhgWA3IcG80cucmPc9h+QxP99xpEycT7PlJ45WKsC73H2hCBSDaQEUaPtTImrEdE8DktjFDsZT9xbiS+6EgPROspdNuTSurR758yyzNHaTeYeqTibKmlYNA7x8eySmtiMvo/GFCbJ2vjIBS7uJye2TMkdk7tOYFxsnwNKT0zu9mJ5YXNOgyIMBi2vP0jVxVs7w4JzZ6UYrI+lBgW5Zk8NR0S77KVBpGjJ5E+08SSdvtk7eYY5wwK780clkfbuxWR2NbVXn2gtxNvjjjZ3JOddbEQ6pMwv17W1Ckn27uvGUtY0xrbeGkSjrsuw4IaFpiWBitUeJwozde0Tf8Quvz/k11Nb9dFV3yYXCOSxUcsKxXIJhpXi7LmbaU/6+SiqkClIey1IKqfiMunDY/i0/eVIpdcrup3wFkZYtdqlSVEdkt2nYfp3aO9f61Ag8NYIFdBAnbqm5sGG1pQd1e9KZ/bToiSMNa+4SGuew2P6RP0FSm8AkfsKkCzfQSujsIsk6Iwfucn1TV5DCXuMKiXvH3DhWNI2QbmcofTbboBo0kTsR0VaBjqPl9+nOoOKYtTplK3SzMWYUgZE0SwYlUFCz86OMCnLpo3VVhx3s1SY81Qcuu0yUGENig89tYjthetwZQrnmKilydsvPMNWjHcqJ7O5ejIXJxuN3rsk1ZC7nTXeZjTTdUS1QSu7p+EKWHWdaYpBKR7pj8hhR2W0NPcSI6n4O1zXMoae02QXWB4lZRfRXcKup00rI19QtP8/LKM24E6wp08rKua22Fn+pea6UVX/Zr6XV7ITkVsTZW5/wi5tUIKpYZy+7bTOne5CPjkqdyL6UaHK7dJWAcxdLui05ZatwfzfdZX6DxmGxV1VSzkrzuHQtAjT166Jq7F6KuNr3e9mittctEgl+KOaOdyRseyEbUuni4rBsmaFuxbGMCfiKUhAFM2Ocevfu5Cd+lw02RnkSIPkUqwivVavcJEUYA85ARVrMrJ6JJrFAK4LpQPO9hXC4LzThBNo53uJCsYTmuM35dodnZKClpbjzhIQqpvat99VVBvFCXpJ4im/eVmvUHdY2MAybu2sPc+PdAWN0arW7+2KO9R6aRxzD5+yECpZOqIq7cRdPUs09WRyraSe0hgzjuKV8qrw7T2Z+zkvkLHA2QA8pHspTY1yowcy8rl22ROegKHewLqjpLWYHxKU/OmdK9BPvekRUEi7MwfSrfF8qgUXM51iIJyPXwNwle37KkI4EjlOnKroTCeNEL/za7pU0ybzLrpnDK9pqM5nKF5qH2a1SlxK7l/AjpvGAQVJv0W6Q+e0AorJ0dxYTyoGzj1wMfogkX9iQV40+DN1+hXMHIly4eFfmCgHbvTJeYATZPt+3CVt5Yy6qjp4yXIfkDf421huiAvFcPHXYPIUcZ5vcCNiFJiIan8pSYItBKbi3aVR0Lb4e9fFVt1ay4Rokm5NTus5lIpZc1PZnZ8owlA66t5GAUP5UXhzghZC2bKeRsEgH8j5HXx29YEeeHdErYDNIRte0pdkbF+1AVK6smEw0LjHSPVmGkqqrjRCLyRSRQ3X8yLg3R7VsBRBWNiXQ+TAJE5xy0KsA1eStt4MtlWHtV2iQOVds2FygA6BmnvxudHs9BJqgv4fbccGdI/DRQ6YfHTf1pDxTdllXHQPhRfxcCebhDI4w9sNm9oiB
4+NfT2aeXE24gzM1A720TjZ8z3VUOflW1iVSS/wbteAM3Ovrzg1LQQMDEUVfXpdsiCeSLnsfKOUBDhEaS8tGSUZgL5n63pNGRTBRN3pH5NlOKcE5F6N8QEinUy3UKODYMVLRjb2+rgJf6nDlAZmgT2XjI2OhGbi+zGwFd90k1Jl44/nQupeQ8qHX+7T7Bbf0okdU1KXBr4/raarqK1bvj9IM45rQwBnBlnxp8Kr/CHySfg6+ZLszrPHGHzUwLQ9oOi47c4b84Kbil22gtTkyyzKLWw+H0dxlNnC8H4qKDQT/lhEbtUrYGiMSdmOI+evgWyUBGwoKOc4YIZYOYSuxyvXmTqgJq52wwR0qLNYbBejDphIX2EyJ1T6tCp460azTfwjPhN6jnkLjBnSfUXTJIO3BVOqhJtChxmKJ6GNlUKnpbVYAlkhJn4oZf7PQhbPog4FnWGBlz/Q9yd4uDld2ChST1vWGau/grDEyiwBngkyf416t7HRAGUN0fHly4NvUw5bvgovux6Sc7QIuLsZ5OLB8lIvARUrVg0oYX4S68JYxbsrLqvFMAJBVy5DGCXTPM5TwOc+X8T33cPrbX3YRr9C72Gxfnj5+JPtRpMpdvkWP2rqf/W5zWBpeXLIvfxSmVRGQcczCvM0rtguDqlpjA2HlXtA3KaLgLMgNn5xenvtKMSpeHXm4P14qQ6wV6WweloazJj9nlBsxEdDw3WDoawkhCU9baLCoUrolrvC1FhfDBbqmsVyZzIbWur3VkpqJjHg5HGAW+9G83ItL9WbkJLYgX12b7rDdrOrIUmVzC7thO2gcyi/i2lCBWtEBmLydMh9lBNq1TS/ukxbUZyTbq8ubyQtrFi6o4Gr0MqDRbHq3EcbQvFO+uHVmwUopxVw8c3lzYJI92WCp5+J+/CgdjOqeAwbZwAf696FmJOaYmtv362Ewx1iill1TWRkSaTq4+TF6kNoKV3jeCSCgF4NF3pd5G45qP/lcUUdaCR0ZPXsx9nBACB2p9jFBh5rzq4pTHbj9qIka+mG7MjZ0JuM4/XGk89NVwcNQRfh6+djN2uqp547b5faCfuHbsxjgDfa2Y7NMdD+otI+TMNQPlAhctAuXLj2Mqgnrqzyu/sFebr799JZ4H8UndblSlMywCo14DBfVgSye+we7FbKQ40C7AnUIYtxc6FHxP+u7YgRsIAR4mD7ZZUYUgHecg80f0ueoCaeKdPcRMOa1Wx8dLDzBuGnj/NvmBxmUOhUJIGsbTJWfpyPxOTSeyg1Yuc2EPgH50fgomrGvQOiRSAG3Shf1THEyCEAbwTsD9H/bLY7QnWkuZ7OvmfyuuTSBPKAXBIHOpQALVILZ7uO9Kh6aM4e6X0uD+Vr41VK53JmprRsPb2P1F2ADAtdVbAGnrCKWvgCDm/tDhbg3lnuhmunA/EbFvm0OnNf0zTEBNm+YjEo9kpy1ZVBP0s61nnwxSqrbcOw+yD5fOAru8aWr6rdh86QgVOfVuqCK8Foa5OY/X7frk11lDrqDKK9I58aJ9MZqvSEcveyzTeoTidPP+jS/baJ6A3JzR7JcLnUAeLDIPKV95ToRZhrZ70bKg7V39JV0NKYcRr3aHqP06VacEwd7fBAXl0SfGFE20weBehqfHofQiIerRfvlKUm+GkxDZcTJZSIPLU3akmmTi/pYmMEBf+p45KHw81W8Ljwtxp7xFK5v1/7g6PEIByPBtV3JH4e6I2Oq47HT+HnWp2mWIeiAkgTMyUp36riP76srRP59v72/PPp3TrlDSCAFDhOn70JXXaxNKTd97zQqJI1ehTZzrPRdwPAjB+fgea8UXp/OjpcIdoQIBfN0DKkgMnzCMxO68gponrMaTviyIFWX7OisIYVt8d66S9X18/QUXszagQZg0bbci1ITrmbpmwn6fHBDxT53R/Ndzc5XEiPMQWqERbs6ieH5ymOMBqMOVyZG3TCR4pik7gujPoaXOIUJenZe//ol94qoBrmxXwQzER1B
dEUtOn1KiioWBD1LYIF6X5Ka6QGnlE0YgPRJyMFXwOMJYSA8QK3wi/LFurXzYEPrjLAGq8PUV9928tu+llzF6TO9qCs4Oy2iuoD3PWZPTQ3u/RzPUw1nHaNR51pIVDj+glGaxL7WBA3o6BGht+PHre1wohsNaQojskx2hA4DyC2aPuz7hlHT3sFsFjL5mHeyi6aojbR6h4hb7IEMkUM0C+tP0OezDmIPUKr6aYcQvFgHmE2E9a/k+wNcO93aB4nLR0K5I2YrkxdrIoA5cvkxcx+lBfYV9WYwetH39OblK1NZ3DLl53EF3qzM/74cD1P2CX9jhCynFOLoUUAog0CXoh5cFwrlOqkOYNf18dy2M0J4t27sDcsMoQIiecEU8/n7TuSte6DjVo/MkeD3jj4ei2ptgJtONri77jYyUNXe4ldNzf68xFN0zaB643B6XZRRyex0mm3hmHSCN+PAEeRmRIs/4bUZSD/fcoGlezcOwImfzzjoBxpNvrqjwXiDAwPpmR627ymvsbzSenFcx98gmtKOFkMeBzgyFsybnURbvn2gVl5VYSec5w2YrdeDh9P+VLPWTEB/RiQ546twVsBQ+trv099dJ6hejRWB9ZyBGiMz8ot6sDPk1nAsmwOnpqWKeUdSZ/I6V/OOwFE0sXBVlJzcybfo76uxzyB0+kFdbhBGxOmMuXAyk7e5/M/JEtgO7jm0U9FU76HbH2GLwYmAlRVDDLi82gcvWKrwwZ2lQSHezRsCi7efbbq2yn14RFs8iR8T7Zl8IAE33NvX/cZ1fVTBZCcrqJUPMWJxR1UHQXaAZVE9Hs+4L6DpgPiauzu9UvYAiUpAEh+/l3ZbsJVxh5KHiZACMHVARBBR6yKviUZR5I2FugS7U8pisYNH3pNPv63RlsIQvsLdP+uiMKvHrcHxEzdmr4Hl1BEWHZ4HEEkJS0GI2a+64JLq2DDjKjjkJeVBRLxT2/X3UdR7+4h2OAoq1qR89hWv7t4hBZ0gYEU2mAcswQ4tlH0f+d+z0T0sQKYBe9pMOKGLnADQ3YAYKjz/Xmk/S+Ea5/lSXjLBPLHc90/leHLGhHAwdYcNZucNQJTDYPLISOC2pL9z7BVGZXvAsqp2E2GdbAyYms6zK9Nv7RqipFz/Pv3t3oOnt2Uh9WEJlD9irgF0SOrbt/0BqxGB0ZP4ubmXlt57g9efURVBLMy+Ipmsx4b3H10z3loXrfAP9PhARdS7QK8NUc/+0VeINe3Hb8bAg0PB6wQatniq9J3GkydkMlv3Q4yPKXrJP0/ay6biX+PhMisf2KPMGrtBe98CUfm8Wo5Xb+b1GRiPz29E/tG/NgCFYKYOYjQHcVox+Irg1EymbF4uCZKDH/IkmkSqlF8Z39zK+h04kq61+AUao+vPw7OOBUjJw5AUBWKh9ov1ug6hFxNxuayojHTlEW7P/PR9XGa+EDY6hQ6jXAWQlNLd+cTIHhr9nmSBNSurHxM8xd0vcm2bxfY6UGpC4wN8Z4aBVdwG5ysfYNfEidj9Oqy6Ebl/6zbu+ukJJwiREvdxHLfB+3UmLAlwgF2b7iOzpXiyjPT9Z5+u+xwpQYqf90vBnVYCGMDYBdDZnodoyma0f42/u5rKm6LPoqYxxDOVbGSKCqGDQ1Xr5ypy66+uvIgf8KnQHdzOhXPzkLZ1lVM+eQbmwAEjPD/eiXmzxAPzVuJXhG6pfRl38gUe8Ahf2/OCRFqqPO/OA8kM9gXIMOV+UqP5HJDqGjaSPgDsFYsO6d0KIOdfrVaGXr8Glvq5TFm+VFWaFc9nMPoq8zXOA/6acjsEhvBHh8HmyNL7ksIRhigzb2FjV4cJkyuaoSyddPTt8ff6bWiRIrmW52rtmlRMFlDNBvJ3gEY5gCX0cqOp3DrsY7CxKMAXwcQ+HQGAkGJrZGyaB+vn5Z+Ki7tBHcFbwi4rApNO6O8F0cp//tHaUehdwg353cfooq+/siRdn2/J+f23186N19+lWW+B+fM9/nVXVoWk
ydyAi3D/XD+adLa6fryDWCSXlhQ/VUdmfub/nScjwzyZgxhx6C9YcOrM8UFqK4QEORs8X63K+uyQA1TSEdKDupPL2K+/BqG/AlOCjr641cNrvQs9SpIpTOycpINICpxhNC0CCrTKJZSB2UY+U/f2L3LguXHvxx1Ye/cOTohlyj5+bO+IpKgC4CJN4H2wiL84xrzbNwy1fhWIcA0h89K8wbE4oG41DovT8cwmUCuM3AEQdUPFqbu21wCsqLfn71dd8NW3/+3kF0IX6kaYCdlfAk0kOyDclHI8NicW99HtwftboVZPn38+OzmwJjDETGyuF0epwGu30K/xAMT305I6OttyuTlQn60jz8ZoXi+m4wqKfd+Riah+peXh7VEdpbtnWzALYoLZd1/IIRTNqsdipcIbHPcTLHg4j1n9kFi/aDlE56VWjdP751Bw15YSeoAaHcU1EKBf8r1+IXv93HfvuvYCUO9f2RWW08nRhBFT5Fx/8dTKd9AHF081bu/yrUgT+qw3KlYAQxQggfazVLgzFyzDzjxJOGuYsCgGLIHrDD/XcRQYL4qJb7f6mSMAY4u/MaN58E+nkyThRAAgQ9pEOMJIVixc/2ROsVU8eXkOe4z2cOYvsF2JLxwlTNzXwMI6v89dQUdpiIFbPHTpIEeGISibBNqFAvv6c1RvXBVqzg74D8Cp8I0AMvwKBQpoT6SAQAMwXm0XOlC4F0AxIvcip2hSLgYbKFP0V1aY1g/sTDw+plgAO23ECw5tx1fHUGFLAAgtgNICerqcqtFt6cxILUC5AOt5tr9vFwmRkLpqhjc2uyY3hX8RHvR/ybcGiGl1xmJmaqJqM/lR8leXBWcWdxqboJzp3Jn+i8WRE7hdd3q5IR/uM4CdmPll/HxR4feR60cL2R4orASDrtnPpVcBSl0G4gEnP7HzfhyM9Xo+dQgvFWAZLi2J97+LzLV0TyskYAIN2V4oauKfrWNUlHzgKP+TS8PeHkBLUM5PPt0wAAwItQY91u/nnxdpwsc5nw9igwVlDCP3HDn62B0HpKwuIoCtd8awHRh/sIZDhHNMmfyA9XLs8pV54v+toVXKCNVbrRQt4cHTyKTb+dlmGCCKxRqWBhb8W9jzoZszQLJxQAGszN59W0h+TonC0ePny3ajBNB4a6/NVE9CAbbhpF83hXjiK40Bmh4SoDw47LLfwroAkPCnBwD2p1APAGwdHRU4K+avh2bOSOrpCg0BbK6fWH247/VlCcByDWtwF/ZxUJAFaMHLmP6irUd7wF34yQV0cMDPNOm0+olq+HGPcjWAulVK59UJVqnfKCoOVGDIPyM0YtkrAQ1cDnMRzjC6osYDjrpNRWX0YiOoNmGDbUIUD1WFrBhmD6Tifmus60rT7Q2LtbdD/dop2WzcGQ1RSmvsFQjS7cWclXfO/QI+3YM9pQdw1d/QilcX7gAz+YFENyhZ2Sbu96h8K7vIKkkD7AuTUBS1F3Dq9TP/foEuhhbNwJk0A6AbRu6i4ZndgNYdpMKEtKOSYfUXAFuAeb0Q2NwNIGR6ppz0ldJ/R+gQCbvmuFTD32YYIQst6E85K3Z2mFhanDFvqFXwiUrrQ/682efokfoMni1sGhypFY8OKenr5NMwViW828gjC/Awb9yGvRPOfB3GAyckvyfrKj4JqQBKSrDXYIyLFSjvOyAvguBPTPl4+5RxSxFKe21oRKQYA+yD7OjMRrwkZqNeErpCX7/DUDDGfMbwNpmVwqtojdvATwgp8+AcIkPL3ForV1da41XokGZHAWVoJwHvD/ij24VAV7TV2Z/TcAb0lUs7DUd+q08keNw4HG/6oVP3MHkmhjRYCaxzGBwFVqM8ihyeJuEqNTBRD7Xg9OjPG/nFi/vwsDrCA2c4H6A3q7TOjyj4NxFYgktmWD2/RjzMnso0BkUcsgaQ/tHYdZHM0i3S8Wg8dYEG1BNUTIBRUefIUgDAgD1sLcC3Py9EE2zwL1CUIIsXTDhYuF5gFe3ubq9v
lHPn4SOF5/Atmr5c9scB2e4EoE7kqe5Uat0CDj02wXxUwKFFpXhdPFiZ9PSx0lqUz9sZUkMt96dMbZLfIaVvfKcO0tlit23Or4nuAxn/bNtG2K810ORbXZgtbjE6QJ5h5HUjsC0nfjHhOXFX79omvzb72h7rVo9HxTm12TUElzU9eXtf7dOWGAWRAd21VXNw+mmQwTCMei6ccfQyJmAVDp+NBE7y+P4D/OIz+H/2ZtFgkQf0DJG/MkTpRxUCwAD+88UKce6EZNbUQ7aodaRhzCzasJaGHewPNGhAv2doWg1/2dqgc394RoKGSeucuQ6PPXiESolp7+A8GDUeTVyD4s2fHG5uePSDPOIxQWFnAsJSoQRphXfoXBVC2F8Us2UF8APhAxQv4aAAPtduLv/oZuEqgy17e9g2DtdAl5Q3oKItEP9K3okEmBHNMEjMVdBcgYgDECe0iv6zBQOo7oFGb/NJqAisXiDVAZgpHZhRvCQyr2JmPIaIH7FC4fqr4TzW9YQUgDttElWBq/F4KnETSaUe6T0V+jZkgDP7qNoTXD9+PI8nyoZf1TI8hpshbA+zpbTPFk2ftY2BxY/Ux3E7hG0MIcp0AeIoP//xqbfugSqTb8ORfZUJUCaw9crr/aLFF6m/W5jz5gUi+3q2KkdpbvMH8wgh6gYyUoktekMjJkJy6LrdQ2f+tk8kFU3gFYJNDIjS+7UzwrN2PxhvfHVnZRcOlkV89SgRpELFdIl/Ex4MQgArte7qevl9XBsykX/B7PiazL9NbjYAVImVPeRp6QCxU11sxAitG5khEw+Qn1B6LWCgIX8N1OuP/4F/RfBqmFwEh7tebhzMBNUhEfYGgp7D74mydT2BhbG+Ipn/Ous6B2xzCYqGY+vfI1ON+i9lYaUArElFACpntmhUfqxlKT4cigKA7QMYqEsLwCuoSjDZ7tTy1Je/xE65njgqYqy2dKPRU+Asr6DTiy7ywV6W0Tz0+7PTbNeNgRnWPn++2sOvzgvJzNgAvXuFiVDuC4m1ygKPjjl1WHdQegaXZLwSizRV+Mc+Euhhcf+oI+hcmEWnuvtXbDXqzPYSPNhWB9TjgcK4x6nPAHKhhMH5dVMJ5l0OQvSSSWd2DJSEljJ7FOhtWAGg80VAN2dVBEoNy3JmZBf3qLCDSfjlPsUdZh3jTdNIHuDEZW+lpx9kgLp5fI/auMQYCgkyQLbXzYG5u0d3YZQzKpkTo//X8zjweQCVZqtzdEOmP7QuysDRur2fMHEH5mczKY7jWCM7s2fJMvRSVSEsT2pi91dGWTf4eFJK2CtjpB6d+Q80Zqjk0cQkIzbCdqb1z5Qq/uWze3vDSvWwYMFoCDIAuPbzUa3fqgnOrSoXUCITmcTYaD7YgUUHuPtXpcSaSF+VElAteJt7/5Weawd5KVyR72jUV56BO4mV/Ztfgs4ARolAT106YK8BthSBhcMRmP2FkeZfPlflUPpDhZnIUJyoFGyhFbzL80JkAvlYFXLPO0cvw3Ff/jirXyPMfITezQ4FqvLt3EkcrAL7jMR+eCLa9Z9yK++jntew3vc5UN85NHPRRnKjhdXi/mokwVTcEfvy9+cUDVvCAjmINemrJsahtPqwVk1+t1jMvQDsTP9L5gvejWI0zWBg/eLelvLNkj8AzRWXMwnnu0XGoMDaXIDrBnxdX8uZSwh+MS7saQCr8jyJwOvriqm36HfleG5Yiw5g+Ux4B4DxcuYtAOAEiLy9Ryb/Xi+d+iH06/PG/mdZnbPckOU8X6ZpdSKb/WjmhCIIgpFkFMX78Lt04D7K1o8CHHQlQh9pFwLmAI0owM8PTHn9cFquBYc68vwcaDKx6hSz/SsdAo16K44pIvw6Va1ypkOkL/UXZatvBUoKDJtTZH8BlBHyuI6Mahgf6Hf5+saGM2sH/L1aXL7ydgJM+JwYLBy00PAHxIFFCJkRMQL+l35xMOizpRL9ClEgD3RhlnfqypDz4PSRAbN/Th8rYOvf+MQiZmqJ
p+qH03InNT6OTp4b/v06Jw5BayFNeLz4jzkGT9qA5UkGw8aYf3jO3zqQwk61udB+ZQwO7OMb2NfvPK7510/+egx9Pz7I39Mjeup4/dkGdr3dTfT3s/dH67ofv5Y/LewVhVGt97tfgnyoc3mDPqETdcCYX6gb54nrUZJerKBXP6gBWJB/prunDb3/8c3uPX8bRQqQfwdW7TyhBJ75WLAuEZ7RBr9cgpw88zjNK8wve91l5JJkcoRef32PjgdrwxxCt+vh2d5vACR+Xu8z+OTJilOFXgVOH6ZYl/l4qx9A+xlsjoW/9F34AZjEScFraFZpmQDWpOoN6DvAI10AlS3H84RKtdre6kSzzIP513vIvb3a93hwLKpP3wFcSXjVPh2wHhJm91a/cRl21UTHQ3/w9JW/pGvx6+m5XlPW+7XW1wv3uP/uznq7Pf9876q/hqv+8x8Jy91+KkKud46/K+vvaZCu5lv6+Tf9+uL0nxUDT4MIwS9G4V/3tfg5ffz1afLe9Vdrktw/TgYAZ8N9Vj6dyv4FL64vRe80UUjNm3U7xOggfr9/TezIkclfkM5KJvr65Tn6teD034gHy9604Fc98q3y3/PgXv9311UK5Gj7mrDgf74T53vZcoTHFKSShWRLPg8ZdHEQZ+cADA3PNCCA5k/c7dsJTTPdaWwzhK5hammtXH6e68YSaYLz82jfzCtk829w7dobM7sMOTSZBf6n9BD8+vYUh87g7gCLO1U9oiV0ez4z6XMRPkDDQc0NQPlCUcEfP8uVhk6U8I2ngI4y3uIfDMAzVPpY4pO1Ci3slFrFGT4dpz2HlZUTPmxnH/LHu9egjiw/T4Z+XHkYNXMoqfhZoycgqIAF5InOX1+BL9pAl8pni7echl4/4fTT6A3/espSAcOWW3Dx5R9Jcb6axj2BVbfvtyugJAbs7YvBBFv2CB7+7EMrCH0Huc8v1zxml2ce2HgNPYEE9CAco//UnjcXP0aFRPhryJeZptula7CfbTYAnobzGG6Om+HYcxfykTZxe8R4yR0qPMZr51o9qV8PxdddQhufzxuA68S38paW1zC5fModttjI/S+PJ53/k6+jTHN0a/cQyJOsb+sfZnxwEjdP/zMs++BEb00BJnYfTLaY5+4D4rkz2rsoyIp9rZHVwRhj3ZCMXikWMtkXOtKY7FMUdPd6sE/ib38SOxihxN06ZCIFX1xG75m9JvVPrdaZ2wzbouzpQld/lWxR2dFSOv2k7wrPKTwjC7/8kWdv6f9NOf0v9Z8r97v6AQcOxqx/FcudK3rbfPvOv62CvmEVl/3RhoB+/ve04T/tJHiw39hlwoq3n3ox8DrX+x8NeJVghdLv6/x3NSD5QgIXhdn0gKsYf5Agu2ndszk74oR8FKvPvzgg12l/r7I+/j9qZf6LdTUVYE3IfxdhffW2+/L/HU91sOavNebN//nTm79jnf7GVyx0GtkazL3AYfYQFo+P9nF//76FEMejp0u/odEv/3dth0s0b9brWd9C9Xcf/j+y2wqffC6k7lbvoFY7jIoGgLc2rtgTCC5H9bYCjfdsGds5HoWCpDMawaB2ogqf4/ljnz0uBtDP+bOarCt6VR/B/gQTohbQp0hd2nUVX6gPwwSLCuwgTGPrd+WFtELyc+YBq4/NFnGUWMPoWVOJ7YIEaJwCnF7P4ZFF4PNjq3kpJfzJ/DlzXLrLd41tVfz4ZfBDgbH1xd6C0U+T8Ryy8aEoijwaeYaEhjg7HvewA8KZ73B2BRjAKowib3YtmSqVN7zX1np2Krc9GvzYP5FFX2HXO/uspXbPvBU58roumMCD3EtO2ulcNOXHGTc229I9k83impzjYehkOPe9HGF9kilfoR8fxsgEYKYH6+gQisiOcVC59aHiND1OO3RD+eJPB5ryxOTwnt0APiP5C17h+jE+L9UTuuur55e2eD1wOrVlF7Y1uNB+HfszKUYTM6S+sgTbt3uM3AmtOYgU4qhje2NOjScAqDNAPNxJ
FaVQdgXLgW0c2kdxt4wGuk3Fr/xUgOefxjK09wNHUcFQ9kfxnGo8ZgywwPQibI5npUaxBhkODk0OPQQ2jK9pYmPJDyw3zlHC7AsH4F6TNGlAIsMHO/8oAHN8wtNDjGTa07UtYwv7enkedDHin89OI8H4vJ8+pmbBqcmH8U63u5U2zKGzG5jHKAYPzwAkk85xv2ka2J/5DKW9z2gHzAJkMsDCaIgczeM1/8qseD1y0iujWRLfvEL50yJO0SVAK1g/6QCUYT9hFBtVASY7q5XxaLqV4JmP51BhlKonFWDiQTjw7uBOfrnY4ImZnqDJ1BsAyQy/mWNThoUTymU/DzDO6RLQpyYWWl1A4saZMuxP12CeYciG3Gkkk5FTPGGkRx8TTvX7FtBJr+yLSnerp7pDuyzLXO6W8L5bLSzHxQYAKBkgYmRWWUTWw6w5GErtd+tMuiSnbGmgf2GyqBS2dFM1l92qdF5VSBVZf9IbG03hwYYdOaqvHAoPu3x+FJ0oXu60BjNVqNdlhIh3cumd2hoJPGjdALHxAd+EqIuZqbQHyFG6e7c3+7xDnAc+s6eyWLCv511qwtqtzmzR0c1mHPz5criR7Nx+bwcFmS1XYTIjb+DVauhScMjc+Kzl6R/nngJq1xTKxj0as6Mm8SsCa0Nqbx6/ckYE8CpTLBVZfCSDdqQ+UDG5295E/jZBrbEP6S6mZ6cQreZZ9sdDNENyW5gd8rEmn2IyCUDtUUo7rXBRsrJ+vczM58L3knVhVp66aBeeumsUT3H5p4EJMrjLa96BR1rc1wPe+L6fd0ueZw66JQ0fTlhvQl+/S77sgdLXsWXEFvYHFLqBLnG6xjNv1oRbC+uiYKat67Gv9kyKl56IwFUdmQ6X0ZJllyqc4qGCV2/S7Agch6Zpx3qfdX/wNe8dE7vPhLCV0UI6pZyEryJGzu5ezyRDNKWySESHC6Y02OyzFVfZp7xKgBO4T/I/cgIYGk9zLKqiKeojDQ7qnKHPIdJ0wGhaJmvOu7MD2MfM3iPYUYdss9yoq0WSyz0boC2e/xCudRRyzg3BwSTAJcc+AdKPLiIsrblHDdJ44IhEcwM9Bz1UQV20+edgGNdlGCb0RH5jZsweDcHNCF6HR7it8tMvC61W/g2jCcVyyo6e9rbn3oJZBpAkuKdT54rrDjzr/+id8BceN06d7xoS/nETJEVxnMTDEcvVnZ5UjWHohJRgYz0ejTs0QjCg3/33JRPhoi7DcDqrajwYH+LOaj44qQ9whmvlEzSmGueJM8tgsSbPBibj4XR3S+Chnwb9yTnwikHxq6H2OgV2T4hgRA5pkzzv35KEo6dbDZmsztkcR+BMHBZN5la+XtuJ7ipPyJz9mEmfvZu3SITOxcG/o5N43UXnk2/XR/vvY7IE97zpc924aFERMWcs9vNJ+3rJjkw22ECn1gWcS3ZWfUPiPqBMGscUVpktvlczhsXp5EVMNqPxG1j0FqoFQeo6uOLeT/x0yCT+jSxKvTtAWK6vJfc973L2L2kj8SOjWGF19xJgaAUS9w8g5YQDLKFbQW8tfmp7hbkwe6HzmLeDpzM7ClHGJAM8N0BiDSEOoIddWQc625nzPI+AAt6DQM+VWJmhhAR03HOf/yTHdAMLAWF9Lyyikr0hmpSasdZbXylK0x715Wd//KkJMehfIx6l6aRStfcp++vT2sLZXQGiuwaaCFjsDXZwQOHo4R5XP5vU4IgD2Df7g/eMdYCyCscj3rFXtxIJGTH7L/rjFVutEraaQiCErtCyU5LRgMqKDMvALSReBgM7FV0Vc8fq6zcS5gNhReqrNr3C9/8B71OLx1j7KqNNy7I5gYWgGgqzeAkrrrG56VtOiYHoxW6Pxx3+CBudpWFvlU+Jaqmru8+b66OyVb7Bd3oLoImVu8BCjFQBliwfXdZ96smeGe4iFRu9PKS4RTjpzMNM+/b+4gDj4KZGLrGoe+gSD0wKr7Bgr2rZ8cyy0wA79x7hjAOgMUWLD/cD
x3EqLjqHFdLlNx8HTu2MRuXGPR1uZn4iGAjProQT1lEtRSOv9OQR0kyPxXIlFsCQPQDswpLQwCl6vkW6pDS37fak3FfqxOLiDZoEBPHhClvaiPG0XoL7lz7TOf7a0D7rcsRNi1zonAaog8mdHixJE6IThSpXbsY0jaGe0Icx+f7glrXVnF0O8OY4q9TTaLjyx/26T9FxHJdBgfqqXq2vfg7YEE7995XnQVV4Dtpiz4MhGjsIoN+3K2xgb4TQqz52B1cA7Vn2RZkzbreUu6aOr1MzBUOUk3qhs8f0QCv+Wd4RGAPPc9gLLpr5lmMApyBG+QLQSAf26SIU7/IIP69ZLS8AA+1rilfXtxtV1tvTpIPJRnAEENj+aceLoSfnfWzk0dOq58OvCd0qg0QXT3jszJ6qig9oLWh2mrItQDKAZUyYDm4UW9XQATFd+Jij14fA/yQ7+TATpZIV615We9pEWIL6FU7v4QPAT6EPTg/hvtkKUNI0ncHq7yqB8oR/5TgIQKg9KZpqsNNcWdwRDUAPikGZZNqDkzQK9vu90XC90wIo18QV3lENsO79/qYNWMsLfZKy2GyX0SXpxPFV2HL5DFQsC1+MKkzOACfEhGTiRaZZX24bQdnKmbPRdgAUEhOahyKswBVyaKXwfd9phVu/eMtOyqUFh4oLLM6+wFkwVMpcjt3V7dkaXwru9oLAQ28Twny2gPEA0JxDmwurEFilr5Ak3gMZCHvIhcTCnBuAjcXoO1LS+QPxncxoCBLMTUAYsrqabgaYAcM8buek9OKrp6Yr+XnSQl0KvbIoipJHioId93OKipNBtVANdg9EVSB6/XJmDX6EdxtHM7ntCuAyJuRMNH3XT6NPkHkO2AM2AKIIIehSxwAi++jBG8auncYrpf1MLk+f7GZ5VDg6ME6cDDQ2gas2QPRqKulVsNjDEiQIAIOzXRGeT+QLD2SyfxSCBU4mNptBXTHT19z3GkgRZT9Xmum4LM8FFTAoCiiNGT85x5/Kc3DZqu4DddvPHZ0cNiBo7qw0u5AZkVzd24Djkm4QOTEgCvfjMc5+3AgschFfiHjKaXNqlktth8Md1rGUsz1REQaxbP8VGUEeAO8cbrW/bbglRhHMjvIKYNa3wMdTjVTWM+Kfz4fIazzANvPps2Uy3hwx+3bG1Y+3wwto5KoCvfgnh9jO6BuATJ0M8asi69c+VgDiqEzIsmCrqVbNs6ycgEQTLsxwgTFPB+3RbBlgtzQiNd9Px9WQRpJqAE96xzCW9xXWVzma17vBSBNEGMfTbZqYbS+s+TjaS6pTJkK6Z0GPwjDbUWAeDWuHZvgDXeg9eTITE2gcgF3Vs5OqT7rlNZHrM3s2P9VgNZ8QUqk/MO+5l4zmWNSVVtkihO86bAS9Ku+XrNvQ1//d7xYnUIohUtiTZBKhvnE92P8B/57I+y84h0iEYtl6WZYF0YsfuG2/6JyIkZdnCpu3JAxstPOihVdHDgq0IEUb8e6MC28c9z3f4IHEodtnbgA1WGAlim4YhnJ/v5C4ax1VIDsgQiQA5oDBQUb/pztPD+tr5iae+hlo7S0N2HqEs9gFPzwJ4pnD6cCsvhRGD6BGocdxKZcTMHkwpVWdFkEN3WRurE74EEl/fUj85zWqt+PM0Vuk5mAe7TBsz0vMkHRmtmtwv5ZkmuY549j2AU7ZVzc42CWFidYgeZRWLG4jTPOWryQsfgKAjIweADrkwzlml4UulHk/O2QTwPYPn08b0ztyX45nBLTi4Wz30j7bPYyn3+A6+mpdAJpXnf016oZJXkWxmf0gqM5tQtjrGrY42MOyt+HchlNfxfHJVI8OoxRAENd0gHmvWxpdktlV2GciCWvRq69S4SoFgsZxx0SwkGgUCPUuw2x7p/3ujAU9+9yOQaqSgf05zBcAUGA1XwVkwD55q1u2DMWw3YJLBN4Z62E8JcqL9WJ3MCtnm8xIqH1HV/kEd/FfrhJlkVZjPcy4gm1S06OjmH6H/d3qUr4+
gXaQXBEGNuGWwbwrBlbv7COuoTnggFkK80S7F4CC04fSAJbZ6UFxyu1RGBRJ7kwV1rGCMVMMs0eiiaR1uO3IlQ9LrZtROMlAtp4l+VAEDpdxcF4BithfGEZSNqwIW4KZJ4sn/77QvYDBfAagXR672GKAaGPks9R4hYk9BR7KxhokCTrzCHKK96oaPCXXC1gZGS0NRvnN0Ml2ScHM3EX/ys8VnDPaNqDxhMf0bG2/WhNokBK1cYpgTg6E4elcOXBMpwlgd58BaLETYQcj+o+uesCUXNXggzMLGthP6B78AMtCfEdQMBzf28+9lA9i3DeyR2gmNe19BMZaBdwZWuhfZQvo7bt2CLgiDEREeQzt3djAXDI4HTky7BYBpBMoJ3AiP29YV33WCxOPxmai/9Xed20njm3tPk1ftodyuCRnEEEScHOGkAQIlCPi6c+cC3Bhg1MVruref7u79zYyLKS1Zg7frFXHaW+unqJ6cy0nynM3Qcdgnq9Nk5OJ5GeYHo9SBp+GdOzoSnuX6pYl82HrDJzOzF3RFtJnYSMS5Ufwo3parzlnQWCapMcbUUlC7AzklwtaBFnGT6f7qAuOyQoOlbZqYARQVbC6ey2QluseyH995JNoDNgqEuhnR6RtQTbSFJ3jLcFLJfJJjE/fvU6ws2IEeqqHejLD4CW2lWKEWzz7XmjgINdOqLDn7ImFK4qivMbQGqoDkls9ko6SfFzjqUlzYvu17ThDPESVBIwUgkFHterA5bvdkSfFsVO/zBteyxofEvALCkmpF0DN0/1mPGifGvJCKxMNUHuuM10Oqm6fXYCnNfSOy2Eq80IkgH1Scn63XPfbRYnVxnoNCVVajKTpZhyHdM+dkdaqSqdeoXV25dLgn3ebiAaCEqfXB3up4NTu0NtNu/WVWW0da8tZNTguA+xUH53QOdRAe8ZsS/uksU2hsX8N/JmUAV5qCM6Y9Ap1wPrByBS9KuU11sIJGWLWyLbn+wIi4XHNIpBLce5EclurMRqTTwzaKbDgDMWt3Q2VbhKTeMQazZ5qvgJfGbFqV30q68ARbWtEXx5DVnQOXezU8E5VL9gTeKYoroYB1HgcLIauKFVITfpycUhoL0E42Z4PEiePSVSQn8079Y44onPsvtDziQr7afWZ0Qw0cBzxGMfM5xFtZiUoMOx8ID2t2J+w3Hk+2uTrUWuXMuBclEl0FLa8DdTBr1rHZUK14TsMi9TMLrSaEx+zw8RrLejWaJaIJTMup9OYY8x56+AE247aJh1Z6I/0GvszJolIkFGwVDBHZCqX9meU0Gzyth9FmCWQciCbH5imK2BNkXfcQ3WdtWtbpqVLe+e4ivhCtukltixp0blYEsgLLBsX0esn2AVFJJmaoRYzQHJRPJ6jtqk06ph3DvshCVjG5xqApo6zMroN7FjZjrFbik+IWCn2W+zlmtgsKKQRmd5bZY7NIqmR0TaWVoGHOY5PGecmUGKEFoWQAlmHAnFc5l17bpk4w8CZJhXmsJfsftPbARt4sQoW/UZBbQM2Qz2dkizFejEqsmJjAVMhWYOdwE90cF8C0/Jw7pXPisIU0Q9XwzraRamfAwH4pEMkD6heqwkHRbpLqvRquC9LuRNXWqrfK9P+OT+DENTuPGgLysjfMTEG2LRlWcShwcqN7YbRhLyFddryGjSNnHhmW008rY9YY6la2zmIUQTOXxsMVcMa1hYW2Pa0vOZbmTmaD1sYA1fZamF22ofp6TCDNbPn4x7rxogNB5wur+daGaqxBfp3gcUQ0/22M5U2qbE6Cujvrg0c2Z7PwuDoHnlnIynKYU3PV6CpgV7me63Wl7BOFO7dczBa6tqhgPVWtLg9oCs84ziUVrFhjTxSI4o9qNo6wQzLRMdeUEN3w4Mm5gXYcCfUAblrrxO3fgio6ak/HWmYRtyl5tqTNoWpRHuQexvdWGer9cDrr0FrTrXRTjki7nMTbLiY3SPoSaAUu6CNXephDnYsPgo4bTk8t5pKIFoP
pu7SzeGuom3y0TETSzyacE5HNbT03bVT1NQYO0QPK5Hn8+piVLcrTNmwMZFR1Hml262Zflj2WNvYFjLNDxGqaj9hWLMHFgiqDex3q9XqhVbd1PRJBmzDgKWcGrPJdlgN+7Q13WYDIXdZXu+1hD7YWa147ee6Ffu8IOWDWXuS7HtbjNzi/mnzUMic5chv7iJaYpWjJCso1+jlGDwbIWJULF5g0hPywUz0+ExbS30FtRAPhKN7iO0wJ3aGlR2Bns95PLrsk6Ctjvj5NYdeu3m/RpNaT23FSids9Xks0OmG97fOIuqZfvdASaNqFXtiw3QyxM0jtD3aJWKt3FGGsxwwpq6HfHw0D93TWcRRhLm0S22gn+XHEO8hxtwTnpCA0b0m4n2t5OFkNpM9cdKr1wTwjbJiobQjxFYOAuxkx07A0WmSbzVvrxKm1wB/L1CNOgdEScaVZc7OzoEzRC2mS5FkiU6RkV1HbXRrG0zxRRpywzKjMPMcW1ms6yAco3Kw7Qx10dS7gu2jYZtistTUWocslmylzYhpI2j0S1mZkF5qBAmL0/pYPWXuBttk6CRuV0X0BC6V5x2Ccox7LchrOHlNU4XloDep6IhAb0jlYrCOT09PU/skj+bEDggiZYcnU8BpD+v7cIroDX+RYS69euOgmiBIRHtQP3CSjrFWTSF2g+qS8TjLRctSwbOp+Ywdgo085bJ+oHlbjcUk3bKHAM8lbRyWKlplojHAwp5x2tcsPpXR3kn1dr26Z6xRrkc+mV6E2iQazlRNQYmDTiDOzujXsSiqattrI205W2NWE497v0ut27tnLaoEIHYdeopxKlIVL4JuoUXUkwIX7H0Ze6JQj0wmBy6Uu2Ct+LMJIss2q5QVj03b7tUri8IctYfV+ZxlXJKkGyNibxNsARRWoKJIMez0HBPs9AnypP5sturGVpB5iqJoA4zBIVYW9WttK5x4AzXlBhPg6rAKe7GdcfmsyUvelMb0q9oY+VxaGdIBnt9ySHJD8yWToo/uEB0PPsQiUuoHk+DIiGy/UkYkUycI57tBsEgk9jXuW1PZHtozXtL7wKV2PI+RPoDIaZIQ61VqEUZsI2M0H3mYBo6HI39GTMJAHQjoEwqSGIPxuWeWg0AbipvGyoQ7ojEjgSxUqRRSoMxoa1twlroHWxPrqHsnrsyKYlAgNbJIa8JSm8/D9gQbp8F2B1Uw3iOmOnATVa76fmsf4BO2kb+PxUYGkwXp9ITTSq8VMz2AjUuDp13yQTTaucvRLBDArd4H56Ov1bGzMRBABzPYcDEXMNoczyMZz9CwwbG2IsEyMtLkiBkH9LeiJo4ejMMwzPeIG3HS8WgAgAJAqYHxwqTQ3AmiT2NsMgidRaul7Ze9LaYb0K9eELA4RKrwMHKoupavp1rEGKjxMKMZIDW3SG4ZpKCgun2rFFoBKJ9JZ4lSxM7tDt/ecrYTMMP+arUTpuAp9XrjfbU69OvbCZ6GyId8b7rfIWwU+HjWqN2uJWlJgaXtMeBzJ0zXWZxATbqN0ozAym6bAV3fFiLJUFIXny7ycIqvjPEk9NmwOrGXd2jvCB4sWvGZD94P+k0EN6bNYj7GzrUM0VammGcl+fIjdsyPppIv5rmijEZiTNNuSosiyxoYrbPOPpseg9Ug9APSi1YWfVeQIlg3zPyAMkYkVxbinVUNfy8N6tVxrGUsHYK+OYwRD2DPt2ZDbzZso33aa3eUfNQpEkmoot7tOUk8PQLD1MA3ni2OA1mxw3FD8zETj9WY9nzLGW2TbR7Aj+iBUbGuDO066I8CDlTm7faus6jXO4ycr6SdWmGOjUVd6RdSt+aHgjUA6x4bbF3e1rRlx5QrLT02baVlpCWXrNh+T7GchdelxLTkF7mJMI/VXrd7nDBic0tqQIntj5SE/UOI9BGB57sExdClxgNFAfoW+1E2X+UR+syoUc6nJKSY3wrHi0HLxtlk5WFYXYB+EsQTUkfsapatzqtFUoG39Cqg65xd2cro
F9m5I8l309i8czyKYlwXZbmpDP2jvLZkmkBeikcyOChAD/1grY7Y4SD8dVU5JYM9Zg5c3mQLoNhgP+QFoWQRXWTl7egF6UfPEFNYes6TLBKZlxKQuJZLEEH9hjOBsw6CSArLjlPZDf0dvQL3XdF8kHHbgF1FBVgYy0UlKn3fD3ugbtYoxHYHUr+QiRGP+V9Nb8J5boUMc+o4MmoceWG0Aj7dJpSVHRaIOwSKUcApwi0Mttj+qUpXvXQiI1T4MVmRIXMoKzWSzfV9Ut9Jx4y/zs+zmwW/UVojY9Br1Ja9PQbSIjT+woA3haXLp5J7imna+drC8Pa8SoaJBOESTgY7bSxQpsc1+KrzvpmIiLdALyXORNTw8dJbeoi1Loq2nSXpUEehWo3mm0s9aZLOlKKuzsD3DevobE9AAytglYsWtZGZYjFWrEWlsepFwUoEBwXF+4x0kTen9YlRBGYmFquIy5qOXujwdWln2Ixrk1LMq3M3aHn8VA37Djfqx9VKul6EYNzQPSAV8Gw4a9tHXFOcaUK6ouVIbPZy6ZQJwZmbKk8J1vSwwxwGfcm8ge1ti0V1uSADnGdBkCWNmrwfsPoAS/aEyqLPVNZw1+02FYN+pkDvu3PEQc16xzTp1hbeBNFvrF6xMIdqPifR63Ub4/zNmMRrJAqzyTZvtqa1XmLSW2pTzJMtIxSbNjveT3oFohxyctHmBURU4esMTpF2GlNsbG2OMjTctdZ5GsLRV6hQmclrTrKHNGd2mgRBjYVdTrEIgfZFuVOZTsu6pTskx7zOd9uqDGJOZA2sMYxcMO6iyqrfwDxsvsAsBnK4yoxZZlTfLLrhoFJfltgtna9FQWCwikyfsHoFhOChsy/8ShV0n7kjWDVITnmgMwWjd7ejymqJeeLVYhbS4NGK4FnocrrRltGqyDaFarb1lrWhUiwmQfw2LBPfJvI+UUiJCOJcH6JoWBs3F+ILjl4SaQzu2miy5+3Zvhys2xMRqNjNDuxiUXE7DJXWJymz6jZP8xPdy0wQkKct0v1zwnXIGHW2qFTdPrgEGskZD6V1/zBVacUVsigiEXicUbsyJWXGpvMJV6lS2qIqb1olU7F8kq4ppXw0i4UyA6sePA/jAg7Jemq1ORGRYsRpA5vm/zrhWklSo33Oy0WukXperMeNPdBZvUWleKDwg4EGagg23nRyimlhG9UCxUC5acRbbtCqE4RZSTBHusJ7UwlryP86F2cz/Q1T8Sd1fbSqN52DFaN8JMVT84O8XuorT8gbhwbRxpnnkwospHFJonjRLbbqtKHRG4ruTq3WfrpUguomX1B9ouqOE+9Y3dGCNWzv89nhgFIjxlg79qiqYK9YM21TVOrgVCD9H1211sRYJNyHSSM6Ozvv9/tzjBOvwXNTTuH4eF9aigCayOSDYqH2eKERdN3GWjpW7E2vM6HrDtZfJScayJ0dRis13UEkVBFUAoOxira0O2RFYwUyIV+RCX/rde9YSPM+/BJn+7zBsEOGNcI28R4yfzjEc1DL5cDHiu+W7CAoAMK2hTgFEtTmeQ7WEmSQkFAsSxNaxVzpcEjvKuA0L+tU9cAox+Z46Q66OyRLnRc7djbvVFNqTOlJRZo3D2uXzxZ6qw6aMShNf8m6nN3chkJUcRyJHYrrimKvErF1CPkMjF53FeSbyXh4Xt9r+n5QJx0akTbNmlsOEaInKmmmqmIv0XTvdimLdBk055KEFLhCr63IYjtfJa36gTeX9fUC6xaSlrRf1BCC/IQDsNtycm9nskOWRIOTUWFnGy3NMlUjse5qe3EAb7sEOT5Gj8NSZoGKuVdJHDXHHQ1x15vHynQzAslgsVgjKMInQbxitGXvTmXb18FllW3sAbQibhANd4SmJliJEYNkcTIcKxxi53KWgblczlxLNRXEASftxvl6nDR60WbWJQPCQg2tPyHtuQ4LFL3ScKo5objqei2Ma6seerhN6dDGUxpuLjUltQqN6LTDdHM8nussm/UM50HN
vOZ2gfg01lDDyL/iHBYJyX0TDSypzoJe6pO9QfxIE8dSNXcFN7+Etq2+taL7Mk5ni9jO4WiBx10rknot1fsth2qdUDlkXW86i304mjns0brUG83BDHMMPeLomETnXZqgziyt9dr1SCbP93dObUGBW9VfYaqhygq7wF2bIpwPffQ6YNE/V6ZXGGmkz+cgHiJXmyuj9bN7h18Vr9a2Hhn56Jw/WitgC8uyntsilnDRWL83iNgdKxGvHtHriMdB5g6gjyHLS6NVnyTeHGzj3WR/nARg/E1jGsyFw8hKevXp4lJmlBStgnHZ3QjxTNSRkdtgt9GMH9E9z8B6vuGctZIYTYqO23VW0WmOujoEKTJld2cpzZHax5h2CtqQlwOcnNvVhFjpH9dTfru8VLz3wiMa+/EpSowfNHCadnNmYiCnSrBB7UzXtMN43mRMAm8zCyku6jjD6m53ZLwlz/NktkLJF7txo17NdvX6UUqG9WXO7o59mZclKfAwTbtU1seCoOaD7Ygpi0AYuFM1m/fTeW07TmluUKlilYdgASEVgVKX7XabFUhFbETyNBFWlmz5pXdIVqQrcTzyV0lU4ayRt6tsKm3hJMHA+81JdV3Ek/bvGUgtSbUp1IpI8Ow4WAz22POrIr5HOiUxYyHq8jw3Ds+lodGeoF4KCda5TJcdguAE0rdN5qBl8TRc5DY24WL/o+72J1N2nM9pwwQLNTqdg44RL3fe3m0pGtHlRIsRU2knmP7wSMBRd6wg9AbNTRz2gaXCYAsSZNAq+fbaO3bJmc/Q0O6pHiitVi0DYzXO17aNz4kRal7fr0SRn+xosGDGcCKlE/U2AZalNVdbjIrujdYyKRcBCOlMCPHwDMUvOdSN6ujo60tjoV86WlqFh8c+2fqWJwZmmfG0kPoRH7N9XuUFg0x+ysEaQJsK60tJPkxlzFETyALImgW7dDkzVsN5TBkYh9D9nlvzyDyncE8cRFdNSeahCyKJ+E0IuOPOsAqQtmfdvayDLhvIb+cJMepFKjDiiXMYx/Dg8vqvc99clvueV6SmKDmTziGQ7XgiyCNni9Zgr0als7niH9P5vlyOaPCiampq0WXJsFELqCuPi6KQtNY2iC5TXt/KFQohmIzibGJm82EdjjihsXCsNS7A090exhn49jqJ/HmzkIl2nWJwXAb0lBq16w6lGtWlj1lzon+M0wCNGXg8XLN2WAUYDyPVvIxKtTYZsiZOxir3ad9mepjCWBpgMqQEgWjZJ5WTInDM+LQ9CDFmY9HXjIQa5oiiksUnCbjlST4XLHAgobnvA41oCaN1VkNGVLvAqUFU23QGCa9qzQnplcYhwbjKjhOGGKcKqV61eoqD7k4gO9OcjEmdPHdgVZNuruwXExZPpS8BEfG5NGTcepYjtjSch7dD9MWp1QVJweyRo3o7Sp6vc9jJEVFHOfU8BYfgL7VUQgQrA/WarGWh8ax/ZR07+2gjnY050kUSb9M8bzaqlXGT9F9Um7UsjgSdlw+g49mI6Td3e8s8eurxR6Zs6wONe0eCs0dmbe8HW7c/l5NOv8/sVqnHANcwAtb0HaeZbZduZ1iNhqa/GjlcDM81JMgPOiPwE61vK/4O50UYA73X4JdY77+OCLKXDNYGNjXMbV0fKljJ7gq2F+KcjtIEUwFtbyBom+xC1pvXeAm9lCmOBXX1fr4UTr3g4Bsn/nTePZqygt46sRIJ2griJcpyVyFVHrKMw1RBg9kntCDit1YzsECFWbfO4jR3I2FsRXWHs66K03K93m7BYqISGKW/UoS5/zJT1tRPhhLq+65IVMVzJ95mwHCamE5SKwZCVWJ6pSBuRhMMfJeoQTDQ993ZfLFGMCKllJUpVW3r02a1MOctp0XzqnSKJ5zEDGoi/eybH9p1bC0a5Xo8J/QbTUM1OsxENMKzLMfQo3FA36y0MA2VzuN0BjLO1k5pqHFjOvf744gBjafM0PUGHsUq93jMrIabDQ78JYURaB2zUTMAVi8D
xNie0hV/ipk72sb0ESUMa1s10OhP5KU2Xmgh/26KfA7ik+VUnA1Wne04RuEVgud0jqEY42f5VhwJSncBtqJzLG3EukrJZFd70Tt096XRwxGV3RGiicPbpzVfbx9N8CNyMklWwuiOpD931vVry2QjxnFTU4u2FnZdazIQ49JHT0Od1Y1Zqc/4kVPZjKsXfHosOAG3hwW9cqamwbMreTgS+yUDi26alW3fYmWmG8/UM34fwZTc6H0tCNDWBtkTervtYZE3c6kS1sbUgNRgntYS2ZwGHVgrIw05GeXMnq+OhS76lmAnH/eYmNBCe80KodYr6WWynxFJ1pyDRhVlxfd9Jj9wiW4M9VTfaFVtD0I3niZrMq08BYOuj/HFfowlm6cZpt6sG67yo24csa4HQdWwOQD0sCALcur60yO1GszljQCOah/x5RBf9EDi00sMJrjRNKVaWJFdX8xafBb7MSP2/EJSKk77hNOmtc8+F6vh3CesyhPSgGAY9WJ9tYrDLlY4YIBr35onEgE93h2CTBObB3te2Y1EzCNYypEzzZy9RMnPldGwu3nEg6VL50deQgibXpHq4rLT28VJxvfdkrdj9PUIklEIBhroc9M0refcVm82tZaLdBpfmu0Q+2266ZkrLVParRavj/fNymZTTw9cQNB9qaKgwFEh8+S68hwj3RsdjKF4ciqWbq5zFBogwWfAIc2qjNFAghMYPZOOGwfy4tAKsDIiOmoRbaIQaclUOUOnU3lGZDpZo26MuFDL4MhzsrebhPscI/sWPN2CkAWfZZnnjQMq7Sf0xlW63qgbDrpg4JhqB/TBtGhXwMpuVpPVqNAUwS7HTdRR1eSQ+OClw1rW4FRjvEUMqOY8lDCS42DAUKHF6ragyK1gC0Cvkgx1fzrBOs0h+vcjFVssD4NJY7bcl6Far9DRpAvOsp039+D5NXuzxqmersZNSFYaK8Zcci498MikCLE4ERdaBeNoN5274KUMYatJpcNg1ymnUUn0AloJqtqtbdzuTBW1kCoX6by/HRN00jmtiSIaC4ExaFUrm2Xt0KtydjuA5acMg0LJJXY62iD5OseYWhPng0oqckg1xzqdKbqB8gpkjJRhtdUhoFaG7JVL7IWQR9l8lVDrNXwxy3N7oFmq1pLRzMDPg6vF69M9KeJN4QBG4bg1QvLvwykLwnhfrRHcSMroNkn0RQS/QOBDSlgSRHEBB9+ie1ouu2jLe6PVyAenJ58DIyNQVwoW8/EczWopYBda660hinRzqBhmhnMCkFr6lyb7NCcZx91ZC7qSWa+M+yACtJDUkWAdCPqSQ309arfYRdJosKy/3TnonIKhopYIxrJoDuDBahVSmzkAIhLS83RxSc0t6vCMIGNZKWybk4wJYvcG67ueBbmxOMfHK/U6FtOCrFhlMWgxu6prB4IsutsSi+XUNmGxGgM8qkZJsywXkj0a+evMR3SniAa7XeyC17meHY+HbUtnSOxthWDV0bB1tEaIgd2LW94pt6KfOB3/XdktnB1Aug5atFAlea6ZrjNSqOxQnBQoeN142GKA+0k2wTtaeT83zCmbycSeDc5Zfd/ZHlhaWtcxy3DK0iDrx/QeHs1jBDCJDVkedvtupG9JLXmnWA7A26VqdVLp3hdFLtiJHFdR+kOzXy+O2pHfH5qTBibNNG0oH4qNZIzaWp6wJFTIY4CNTYFBdZ89HmeIoN5EosNO2wGcbYdaK7SwDYzWVPIWFvhQ1HQNmnClqGwM3giDZ9zOc2U0Qj9ItWsCyThlO9KeMO+C3UtqWAm+3QxlU3GW3rv6x1H8WI26UwwXpOrRG7BereFNhr7kEgTtKb3O3RK8HXZ67b4gSHt9s6hSvLmfkpnGWCUWHUDPJLslgxMlTF/AyV8ovEjv5ZAGm419RkzN+4zct0zeVmqbU1VTfHw+8eYIRwr06pWjeq62U9w+sTzxebMzRGyEKJgU1nmwM+dQ2YNMcYqirlYqjfqyVtl3qWXHncpTytCG
olt01V6trGN30zBfdzpo9HRwliFaCaFaeEB/Aw8rUwy006zVJg6qlWIRYI8nqUPE/oaDStDqm8BxuWIG9O7A8aE0qlcxMoVY/14DxFgP83flsFVv1nTNXe7zjmCxCIWZgy/pk+prLIcea5U2RqfMQHOxn78YrcCIOKqhmswJ9pyieEdtiLUlwAFoJY78eTQdxKtMEmGXwQ7YM+W2ldSrm+PYqTZ2pOIaAxNgC4gzbHDCSoaByB4O27Q0E5yqvFRmlDTE4RpdtL5srUsJ1mqSs8dz5A75MJ1MCJxDbaMQrIFBhaYMd9Zlf0x4xYpwCxP75T7pN8qppYCIXLMn4xrLFtmN5haIQrovZ6EAlifB5kfeLc5BFFAlJCCPMPwHbo+4SrOAJ1lf15n0TS5uHBoz8iXVU6QrSAlCUr0ytNupaLCgXtIYK7vsjKHp8rjDOsR15lpuQgedZq08Wse8Jw47aYYcQpANwf6OIxY7AYguO1nfOT3dTTYM4m7FCLKGO+2OgHUlPpbD0zzMZL8kSAtxuN+B3gtG9e2CIErHbZVuTrpUALLi5Am7YjCagU5KZ95xmWPAButApF6jhgoxnLvJkpPtMuBGze2U9XkatBLKhpXknKze4V6v14BWzQa9X+qg2BEV0oqtOR2Vxq63RKjCuQUmI73GamEMLqBrP2/vJrHHZSOCK05m7W4M2GFxmZ50kyZg+jfCvrII3I95P2W3Y4J8cAhPUbU42NerBRdgozJOxqiaOHOUidAzMcxI7dmwKG+DadSnGN1eE7iZCcJrVwgqIonn0xGqSzLR+yofhnjje95kJOyB6nr0Dnx3pW7kzp6Q2q7eIrMu5tURotRNtZqTmGy3pFLwhwU8sZR0GqPWrtfrJ0t9OWBWHaz62m8J9h4Y1OkIq4+oxEHTDmHujjL4xtsDWqSqGxpeNJWSkVjqc+S6asXWB4UnUZO900/n43O7ZA21DE4GJ3NodlrKpDM7HFe02k75KYyaIvXtS2/lRun7u12eqY1tx+mz/UotwmxJMcD8drXDUGG8OaEUVrsTlW/E++5ms/mLrZN/q45nbOxqEFt2jBcYliI/8Je/sK6aCo3Y9lPyJ4Y+XcrtOLUPV5fYxl9szTu07MCzUywHps5/FU4fKM/vlZ7QbsArhWOl29NVWTxd2trOZnv+IlF8QhcSLxvJ6dLmeXH0eE5fCb94h5rtupc7IL8zlGOdPrPtpismm+Jgvd08dVy7tfh/f/PnhzDczD69jXq63EOSlu75IqzkhAm8qBZbJ7WnoWHiX4rYCOHaNvXgS+s0/GokoW3ifa+dg23hBdfZ+PDahH3DTa3ifjmm4VbOf/Acy8Kvqa4DP506R1xXxJeO69YCNzgfhMXbksXB9SQFPrKv/iIxK/DQHnREEvXEvTwnmXoSJfnq5+bMaOH5M9fHdnX54ccm3hzbX4zgpuddfHF2QpQFlz/8nZD9rcAbaCo8kD27/B1+25z//1cXoszn0/lxEVhJloGV3vxC55duexhc1oENdV6vjaKSPM8Hl1+RPVBN+jHNXxEuTb1H4Y8gUFl4khnq+Yd+Qaus9AT0evUj3CHWO5RKwce+iVJhR+6QquHhHpIzmDy//C1nQN54Xp05v74SJmci/Q1nJfNP8D/vyhX5iZLunBb9RDHfdVrsHX1wufH/9AHhMfaJ5a9YjL7HY79dIdDcfxrh1SoDO90avv1/RC2Ajci8oFOBAeFzQ5rsD6vzmjSlJ+HbRMo9E/M/kXJ9VLzwJMj/NBOTv3NE1sa+6M4gTrfBJvANt/HjajUOMt/CQ6kjxf94Tz8IwvPx7ew0Lc+HYGRp8PJw7YOTzq9+X+BST/z5Vf1wXpm8KC8vfHje+fWLq0/hyx8fI68un3tNDVeWgB/49j1CeHYHbd+qxHFQINm5RpI45mzr+Kc/NB338jhv0ksSZLFpv8c253NNjXhjp++d09nAwsN5l/5i2zVSJ7df3Mc9sjl/VAkcIu/PdMtLzBP/
kmxBlohXVCu9clJPt35e5gc1wr4h1OTz20J8Q/L2FwsXE/Ii2KTr1T58+1ksvvn2V275y7fDL6fb/cFHzzv6CyJRuKOsHyYOX5H1d6odXnxJEUTnfGAbUXfEGPVNMoyT/y0y7OrQhK/Lop+Tf98uwy5vvJZhQlLlOquJw20axQYE0mCWxH/TF0J5sBD7VVEjfU3U0LT4/cLjmUuvhAdfxXPi639AiiSwpONvqkGaBt6ZsM7X+vY6PS9/vjI58/y7ZPXz4kd6JX5uzd/fKn2YW2f65oCSrRGSEzFKQpy4V3s7NbcXuQB8eBFXSeDi91WTH3KCfMaOG7mNdH3a2DORw/I8oYmXdu5ZmsQ2rGKsyLoUEXqpkV69Du3YgV0gYfCTXa38uFJNfCOcBSduuHF27kZVzqLqbTIi0k0/n9SDAjC88MoUEJ9k+trO5m8IhLtDINy3EYh8QyA1I326IZIv+5vyR07Q1dkI5OdBGy69NKYk8Ym/NQEu617vsfzsrD58l9lbgTnI3P+hXea5J+lzu0x/2x4zN3vcLlcxfIahlNi2HDN1Av/Xt5zm/tCeCwzzRMsvtp291S4088Tyd7Zd+qZt5+8G10kMy3Lyz0TPpPBwGzmr9adXoTCy0ssA2a8sPkBUmtMyOGb+J1d56/5eEdgPW57+mM7eDvtcUaD0TtzoOrdAPYTohFcuNyM9UdJ1gudWg7HPbvo1CTLid5Ege2t0/jNdLMtIts+kgC8UIwVzxidXGIq+c84v5QlaDMSKeWms0D8VSfxZp+1NsvrQG7sIiw8jSpeI7Z92xgTxpfX2QSDn1dsF/hVtX/litx8WuadrtnqZquPp9wNdp62/CXTdfIv4al1ReuLZ27jvB/GzRzmSPP9v4dyf5ZV/Fsf/Bs79n2DcMxt8knGl9xhXeMW4/KtkyacZl5VeyRZwV7gf67K/l3E/Ez7+Eds7e/wvmOGKIW/5gr1P8B9mSy6s/YOdF9fc/AFr/+DmxUU8/IwafEX7X1B3H7PhfWa6sr34O9GDy7VfTL8I0kvvQ2Lf1R2fpezXy4rvL/vNlC3c+jMVHyQNXGa9Gxr/svf4biDzjuiX5Qc5j5idfxUZ4e/u87X/Lt2z478tWCncRkmet/42E/wv2nr+nRosRnziuSvtcSmh+IPHcBtIeT6GWyn/P3IMPPUk0dfHcOvV/uZjuNTI3AmshDgE4uOohfxGVRIjS7JEyi2fAxhkxUeVGn0YlLzY9+xfdxMDL+3X88VXlFGrUVSt9sIbIPGTrWERi4NYzNdZjAdQkMhzT9zbxZRg6FHvilPuHvkI31dJKdwW5G0vZ55kq8+QEM28QUPFlyvy3lqJ+rgSjtzsI8Jrrzym9XrNmOY9j8kSVgKPHtMmNizHfkF6zWatRkjvOijMvCJt+pZkm02JelRdL8+/dFhoSnx6kVa6pT72Xt6R/y7Su624exjp5f8TpCeZNiG9uwSGJHaPLBsVofo6GvwZ0pNl6ttID4O918XIFPfbSK/VPcaNip9XXX/Y8/6uD5Zs9W/xTnHC13IGb9LLwwivb5R2/DHx/Xza4/sf4dks/9JDvMlB1Jc5qFZr8M3mPS4RTMlere9wyeerdM/FInDp79e1UNdZvfPbZiTwyP31uuyEeQzHca9dhlvblGefROaecYrexa8zmrmbOvu1JTDOWNb1ZN4qB97dBrAHhoCYv+6FgF5SzCuj8y3vYZd54VVlCNzV+RV3S1aWYUvruzaBLNdqhODePNIPg56/J07DCfzTRQpf4p/0k3BtH7wSvp8N1XAi88WVHxetuUuDl56TP5NAuIQSv5RAeA4rfrlC/E5VE/4QmvxkIeXbRP2SKd4k8esI5f0TORPAdczz7hs/zTSf5oZfkmZ3ekTM537Oz5qs7BP/hrqkv8Om/JzmYwmNvRJzVb5GkzU+G4rh6vgPfn8QO0f4m+GeKRTjA3Untk/1NWzdTVG1hsBXtcCHWzSA9ydY23f683Op3+klLeGF
m9rPh6hN5ulVmRIjvsjUUJR4o0kvFTbXcplln75Lid6pw/8/QXa/SF039HJDUd9DQPItXMLvJpivmV0vlcE7rYE/Mmfidersou4+TJ0xL3JnP/TYfa32rh660lcP00p3WpneVQP/FFNOpp4k/iasc4n68PfF2VfNOp5771v+ZrmnF6Glb0vI3T+Re3Hw7yP493PFP0e3b7LJT6eYH2eGfbrt5TcRPCM9sa9QauhX9PeTVM6x9BN/Xbn40pGROHCh/xyV/5k2sX+VU/IzbtebHPSxf8LdZYxPK5tfLXm6wxrv0i9NUe8S8KfZhKdvrJ5XOCHCp1jj7ZqtRzMPQ/0R5vkzfeL/hCjAdzd6czzzdKk1+BFneoF5wvwkdQvcF1f+ZsF/p5cysd31fy7gRy7gt3h87L8gZHCxf/8LvP8LAu9ovH5T5P2jpb9bcP1rum7+U7o/q3SRxL5J63609HdT721t1H9q98+pXfaPBlr7i17jkGhrajZzGD5I5jt1+fdjykgeU4Dx5ipJaPj/lHv5fFXXzRJj8sdnfrnilucSrRvgvPcfvTgTCX6nH8Seged9WvFTd6RvDdwSB+ue0y2W26ZBShYxPOBY/FuwvmZxcjc3xS+fhfv7zU/3qIOnvAvAIWV7Dqi+gBSKO/7zrm1Ahm3JzTU/UWz3+U2899YvbuJnN+nXT/cx/PWYVa7umXrvUT7XGP/hzn0K2/JRBW8/26L/dQyIHz7SRZVa9trIyJd9K8jmSxvqTu89fan3egEfIz2xD0BXvqsn73mjXzs6+lG1im8tpDlJhiLyG8nv259hYoexnQDlnKyvXyq7/Dq9f6VSRKLwn7/ulFs+gAFYinkeDnDJT1IvujIoir4N0UjgmN8BL734JA9ninuwW/8wgpqB8PoqHf3DHuE/njjzBPckXKfvX/KHeBeFWaKfhDsozOwD2tnucsStv10L4PXf8N8sNvxkDWasHScPOBfuC+fySHwmDo7hVWvErXam5Sf2dtcfgc7EtJVa1rFqfmOX2tZ2tD4m0p3ZKj9jVz2Eif/neyG++hBvBoL+x3shbjjqDt+9zWQi9cS8RMJguCfuBSDVbQPSN7dH3GU94eaE/6tMf1h4/KYo44MaqbsndBbW15Ugd993giz8DYXp793lAwqEb+TWtcT6R8Wo/9FF6b8ov5inS9PKRX4JH8qv7wt93yW524rLVyT3Dyef31Rc/mhCQIiW35n2uHv2t3jAt3rsf7q8/CGqRfqkahHv08jvKVfgBeqJvkYAekGN9Guz6pWP8ml4J/q9b/mb5Z/4F8gFnysu/Ik88H3dfg++9vvo/R9YXf5bTSnpT9I7BwJWeClysQqBe4Gr+XNEzlPU6wy2xD29vsXvpuU/My7pX+U+/HwN+W/VC6chiI8v8ZFvyBTLxl8v9GmqZ+gbG4Z7Eq9RkMRPccC3lIq/48s9qO7n3+NQfU/Rzy8ZwDz1L/CE7gwj/Y9c/hC5sH/aX7oLN8T+epzvVZX07x/39RpM9zphdat/PzBH3x4SdltU/rO2w6cU8N0xXwJ7q4F/rRH4V1XyZbTPRQa+SsA9aIIhK7+aQ8S9q9nff/tfvzpV7D4nSf9x0r+Jk5jPctL3DNb7kJPE7+Ik5qUBTfPvshJ3M6HvnakQDzNzb0O448xOTmUSVMVPCjvGnN5rhvt6tl26pfhKsyIQ1+vNMP5N8vElfuGvWwsCJbyGu6Y55om9Y1CKd+wF5tvsydvwat1O7UeNprpzGLJYbcrSzRZ/5XAecB6Ijkuxr86Du3cekvBc3/XiRB5Ql3L3RC7xvqsTUYLAfQhzSO9t8/uVKL9mKzOvolzg47O3xSh3YTmZh0zBu7/Vt87U/9xWf8KF/e3bflsENMrSMEsfIHC+kqx95LaL3KsxRvLTZVj89U7fA9z++jbDyzhAx/2HhsauhkFg2fiO/w8=</diagram></mxfile>
2111.14792/paper_text/intro_method.md ADDED
@@ -0,0 +1,54 @@
1
+ # Introduction
2
+
3
+ Figures and charts play a major role in modern communication, helping to convey messages by curating data into an easily comprehensible visual form that highlights trends and outliers. However, despite its tremendous practical importance, chart comprehension has received little attention in the computer vision community. Documents ubiquitously contain a variety of plots. Using computer vision to parse these visualizations can enable the extraction of information that cannot be gleaned solely from a document's text. Recently, with the rise of multimodal learning methods, e.g., [4,6,18,21,23,25,26,30], interest in chart understanding has increased [5,13–15,20,27].
4
+
5
+ Studies on figure understanding (e.g., [15,20]) commonly involve answering questions, a task known as Chart Question Answering (CQA). This task is closely related to Visual Question Answering (VQA), which is usually applied to natural
6
+
7
+ <sup>†</sup>Part of this research was conducted at IBM Research AI, Israel.
8
+
9
+ ![](_page_1_Figure_2.jpeg)
10
+
11
+ Fig. 1: Interactions marked on a sample from the PlotQA dataset [20], alongside our CRCT prediction. We highlight the interacting parts/tokens with matching colors. Note the complexity of attention between the different modalities needed to correctly answer the question. The result predicted by CRCT and the ground-truth answer are indicated by green and purple arrows, respectively.
12
+
13
+ images [2,6,26,30]. VQA is typically treated as a classification task, where the answer is a category, e.g., [1,2,19,30]. In contrast, answering questions about charts often requires regression. Furthermore, a small local change in a natural image typically has limited effect on the visual recognition outcome, while in a chart, the impact might be extensive. Previous works have demonstrated that standard VQA methods perform poorly on CQA benchmarks [13,20]. A chart comprehension model must consider the interactions between the question and the various chart elements in order to provide correct answers. The complexity of such interactions is demonstrated in Fig. 1. For example, failing to correctly associate a line with the correct legend text would yield an erroneous answer.
14
+
15
+ Several previous CQA studies suggest a new dataset along with a new processing model, e.g., [5,13,15,20]. CQA datasets differ in several ways: (1) type and diversity of figures, (2) type and diversity of questions, (3) types of answers (e.g., discrete or continuous). While previous methods have recently reached a saturation level on some datasets, e.g., 94.9% on FigureQA [15], 92.2% on LEAF-QA++ [27], and 97.5% on DVQA [13], Methani et al. [20] attribute this to the limitations of these datasets. Hence, they propose a new dataset (PlotQA-D), which is the largest and most diverse dataset to date, with an order of magnitude more images/figures and 4,000$\times$ as many distinct answers. PlotQA-D further contains more challenging and realistic reasoning and data-retrieval tasks; their new model (PlotQA-M) achieves 22.5% accuracy on this dataset, while human performance reaches 80.47% [20].
16
+
17
+ In this paper we further explore the cause behind the saturation of various methods on previous datasets. We argue that, similarly to the early stages of VQA [8], several common datasets and benchmarks suffer from bias, over-simplification, and classification-oriented Q&A, allowing some methods to surpass human performance [14,27]. Next, we introduce a novel method called Classification - Regression Chart Transformer (CRCT) for CQA. We start with parsing
18
+
19
+ ![](_page_2_Figure_2.jpeg)
20
+
21
+ Fig. 2: Examples of object annotations in train images.
22
+
23
+ the chart with a detector that extracts all of its textual and visual elements, which are then passed, along with the question text, to a dual-branch transformer for bimodal learning. Our model features the following novelties: 1) In contrast to previous methods that encode only the question, our language model jointly processes all textual elements in the chart, allowing inter- and intra-modal relations among all textual and visual elements. 2) We show high generalization by dropping the common 'string matching' practice (replacing question tokens with certain textual chart elements) and accommodating a co-transformer with pretrained BERT [7]. 3) We introduce a new chart-element representation learning scheme, fusing multiple inputs from different domains. 4) Finally, we suggest a new hybrid prediction head that unifies classification and regression in a single model. By jointly optimizing our model end-to-end for all types of questions, we further leverage the multi-task learning regime [31].
24
+
25
+ We test our model on the challenging and more realistic PlotQA-D dataset, as well as on FigureQA. Our results show that CRCT outperforms the previous method by a large margin on PlotQA-D (76.94% vs. 53.96% total accuracy) and can match previous results with only 10% of the training data. We further analyze our model via explainability visualizations, revealing its limitations as well as its strong capabilities.
26
+
27
+ # Method
28
+
29
+ We present an overview of our CRCT architecture for CQA in Fig. 3. In our approach, the image is first parsed by a trained object detector (see object classes in Fig. 2). The outputs of the parsing stage are object classes, positions (bounding boxes), and visual features. All of the above are projected into a single representation per visual element, then stacked to form the visual sequence. Similarly, each textual element is represented by fusing its text tokens, positional encoding, and class. Together with the question text tokens, we obtain the text sequence. The two sequences are fed in parallel to a bimodal co-attention transformer (co-transformer). The outputs of the co-transformer are pooled visual and textual representations that are then fused by Hadamard product and concatenation, and fed into our unified classification-regression head. In the next sections we describe the training and testing configurations in detail.
30
+
31
+ Visual Encoding: The visual branch encodes all the visual elements in the chart, e.g., line segments or legend markers. For visual encoding we train a Mask-RCNN [9] with a ResNet-50 [10] backbone. Object representations are then extracted from the penultimate layer of the classification branch. In our detection scheme, objects are textual elements (e.g., title, xlabel) as well as visual elements (e.g., plot segment), as shown in Fig. 2. We create a single representation per visual element with a learnable block, as shown in Fig. 4a. This block takes as input the 4D vector describing the bounding box (normalized top-left and bottom-right coordinates), the class label, and the object representation produced by the detector (encapsulating, e.g., the line direction), and projects them into an embedding space (1024D).
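As a toy illustration of this fusion (Fig. 4a), the sketch below projects a bounding box, a class label, and a detector feature vector into one shared space and sums them. The 8-D embedding size, the deterministic "projection" weights, and all names are illustrative stand-ins for the paper's learned 1024-D projections, not the actual CRCT code.

```python
# Toy sketch of the visual-element fusion in Fig. 4a (illustrative only).
D = 8  # embedding size (1024 in the paper); kept small here

def project(vec, out_dim=D, scale=0.1):
    # Stand-in for a learnable linear projection, with fixed toy weights.
    s = sum(vec)
    return [scale * (i + 1) * s for i in range(out_dim)]

def embed_visual_element(bbox, class_id, feat, num_classes=4):
    # bbox: normalized (x1, y1, x2, y2); class as one-hot; feat: detector vector
    onehot = [1.0 if i == class_id else 0.0 for i in range(num_classes)]
    parts = [project(list(bbox)), project(onehot), project(feat)]
    # Summation into a single representation, as in Fig. 4a
    return [sum(p[i] for p in parts) for i in range(D)]

v = embed_visual_element((0.1, 0.2, 0.5, 0.6), class_id=2, feat=[0.3, 0.7, 0.1])
assert len(v) == D
```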
32
+
33
+ ![](_page_6_Figure_2.jpeg)
34
+
35
+ Fig. 3: Our Classification - Regression Chart Transformer (CRCT) network architecture consists of two stages: detection and question answering. The detection stage (left) provides bounding boxes and object representations of the visual and textual elements (see Fig. 2). These features, along with the question text, enable the co-transformers in the second stage (right) to fuse both visual and textual information into a pooled tuple of two single feature vectors $\{\mathbf{h_{v_0}}, \mathbf{h_{w_0}}\}$. Next, our hybrid prediction head, containing two different MLPs, outputs a classification score and a regression result. $\mathbf{co_i}/\mathbf{self_i}$: $\mathbf{co}/\mathbf{self}$ attention.
36
+
37
+ ![](_page_6_Figure_4.jpeg)
38
+
39
+ - (a) Visual Representation.
40
+ - (b) Textual Representation (per token).
41
+
42
+ **Fig. 4:** Chart element representations. The relevant information for representing each type of element is summed into a single vector.
43
+
44
+ Object colors are generally encoded in the representation output by the detector. However, the actual colors are often important for linking a legend marker to its legend label (text), enabling the connection between the question and the target line or bar in the chart. We observe that training the detector with a color decomposition of the graphs boosts performance. Finally, our visual-element representations form a sequence, denoted by $v_1, ..., v_k$. We further add the global plot representation ($v_0$) as a [CLS] token.
45
+
46
+ **Text Encoding:** Raw text is handled with a pretrained BERT [7]. The textual features are derived from the question and the text contained within the chart, such as the axis labels, legends, and title. In contrast to VQA, where the lingual part includes only the question, in CQA there are additional text elements that are essential for chart comprehension. Text position in the chart also carries important information. In this study, we encode the textual elements as a single concatenated sequence, separated by the special [SEP] token, followed by the question and an answer, with the special token [CLS] on top $(t_0)$. In contrast to previous work [13,15,18,27,30], where only the question (or question + answer) was encoded, here the text encoder is generalized to include all textual elements enriched with their spatial location and class. This approach allows free data-
47
+
48
+ driven interaction between different visual and textual elements, e.g., the legend marker and its corresponding text, as well as interactions between text sub-elements, e.g., the answer and part of the Y-axis label or title. To this end, we create a new representation for all the textual elements in the chart by fusing the word embedding, the positional encoding, the text location in the chart, and the text-class embedding. This fusion is carried out through an MLP layer, including projection and summation, as shown in Fig. 4b.
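The exact token layout is not spelled out beyond the description above, so the following sketch assumes one plausible ordering: a [CLS] token, then the question and candidate answer tokens, then each chart text element after a [SEP]. The function name and layout are hypothetical.

```python
def build_text_sequence(question, answer, chart_texts):
    # Assumed layout: [CLS] + question/answer tokens, then each chart text
    # element (title, axis labels, legend entries) preceded by [SEP].
    tokens = ["[CLS]"] + question.split() + answer.split()
    for element in chart_texts:
        tokens.append("[SEP]")
        tokens.extend(element.split())
    return tokens

seq = build_text_sequence("What is the title ?", "Sales", ["Sales", "Year"])
```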
49
+
50
+ For multi-modal interaction we rely on the co-attention architecture first suggested for machine translation in [3]. This model contains two different sequence-to-sequence branches, visual and textual, as shown in Fig. 3. The information in the two streams is fused through a set of attention-block exchanges, called co-attention. We use a transformer with 6 blocks of two encoders with co- and self-attention. Each encoder computes query Q, key K, and value V matrices, followed by a feed-forward layer, skip connections, and normalization [28]. To exchange information between the modalities, the co-transformer's keys and values in each stream are mutually exchanged, resulting in cross-modality attention. Finally, the resulting $\{\mathbf{h_{v_0}}, \mathbf{h_{w_0}}\}$ pooling tokens (indicated by the [CLS] special token) are forwarded to the classification and regression heads (see Fig. 3). For more details, see the suppl. material.
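The key/value exchange can be sketched in plain Python on toy vectors (no learned weights or multi-head structure; the real model uses multi-head attention inside 6 transformer blocks):

```python
import math

def attend(queries, keys, values):
    # Scaled dot-product attention over lists of small vectors.
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(a * b for a, b in zip(q, k)) / math.sqrt(d) for k in keys]
        m = max(scores)
        w = [math.exp(s - m) for s in scores]
        z = sum(w)
        out.append([sum(wi / z * v[j] for wi, v in zip(w, values))
                    for j in range(len(values[0]))])
    return out

def co_attention(visual, textual):
    # Co-attention: each stream's queries attend to the OTHER stream's
    # keys/values, i.e., K and V are mutually exchanged across modalities.
    return attend(visual, textual, textual), attend(textual, visual, visual)

vis = [[1.0, 0.0], [0.0, 1.0]]
txt = [[0.5, 0.5]]
new_vis, new_txt = co_attention(vis, txt)
```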
51
+
52
+ Similar to previous work [5,13,14,20,27], and in order to allow fair comparison, we use an oracle to recognize the extracted text elements. The oracle is a perfect text recognition machine and is used to disentangle the impact of OCR accuracy. Previous work frequently assumes a perfect text detector, e.g., [13,15,20,27]. In this work, however, we explicitly account for inaccuracies in the detector by considering only text elements from the oracle with IoU > 0.5. We then create the set of possible answers for classification, composed of in-vocabulary (e.g., Yes / No) and out-of-vocabulary (OOV) answers (e.g., the title or a specific legend label). Additional OOV classes (dynamically added) allow dealing with chart-specific answers that have not been seen during training. To predict the correct answer, we train the model with a binary cross-entropy loss. To this end, we concatenate the answer to the question in the textual branch, pass it through the model, and evaluate a score in the [0, 1] range (see Fig. 3). This score indicates the model's certainty whether the answer is aligned with the question (correct) or not (wrong).
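A minimal sketch of the dynamic answer set described above (the function name and base vocabulary are illustrative; the paper's actual answer-set construction may differ in detail):

```python
def build_answer_set(base_vocab, chart_texts):
    # Fixed in-vocabulary answers plus chart-specific OOV classes
    # (e.g., the title or a legend label), added dynamically per chart.
    answers = list(base_vocab)
    for text in chart_texts:
        if text not in answers:
            answers.append(text)  # OOV class for this chart only
    return answers

answers = build_answer_set(["Yes", "No"], ["Revenue", "Profit", "Yes"])
```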
53
+
54
+ Previous works frequently use only a classification head, overlooking regression [5,13,15,27], or use an entirely separate pipeline for the regression task [20]. In classification-based methods, the answers are restricted to discrete values that are part of the numeric values appearing on the chart. This approach strongly limits generalization, lacking the capability to predict unseen numeric values or charts with unseen ranges. In this work, we propose a novel hybrid prediction head that unifies classification and regression. To this end, we add a regression soft-decision flag ⟨R⟩ as an answer class, followed by a regressor. During training, the model learns which types of questions require regression by choosing the ⟨R⟩ class as the correct answer. A separate and subsequent regression is then applied to generate the answer (see Fig. 3). Note that during training, the loss switches dynamically between a BCE loss for classification and an L1 loss for regression, so the network is jointly optimized for both. We zero the regression loss when the correct class is not ⟨R⟩. The hybrid prediction allows joint training on all types of Q&As, leveraging multi-task learning.
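The dynamic switch between the two loss terms can be sketched as follows; the index of the ⟨R⟩ class and all shapes are illustrative assumptions, not the paper's implementation:

```python
import math

R_CLASS = 0  # assumed index of the <R> regression flag among answer classes

def bce(p, y, eps=1e-9):
    # Binary cross-entropy for one answer-candidate score.
    return -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))

def hybrid_loss(class_scores, target_class, reg_pred, reg_target):
    # BCE over answer-candidate scores; the L1 regression loss contributes
    # only when the ground-truth class is <R> (otherwise it is zeroed).
    cls_loss = sum(bce(p, 1.0 if i == target_class else 0.0)
                   for i, p in enumerate(class_scores))
    reg_loss = abs(reg_pred - reg_target) if target_class == R_CLASS else 0.0
    return cls_loss + reg_loss

# A numeric question selects <R> and incurs the L1 term:
l_reg = hybrid_loss([0.9, 0.1], target_class=R_CLASS, reg_pred=3.0, reg_target=2.5)
# A classification question zeroes the regression term:
l_cls = hybrid_loss([0.1, 0.9], target_class=1, reg_pred=3.0, reg_target=2.5)
```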
2112.02889/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2112.02889/paper_text/intro_method.md ADDED
@@ -0,0 +1,37 @@
1
+ # Introduction
2
+
3
+ In medical applications of computer vision, high-quality annotated data is scarce and expensive to acquire, as manually labeling samples typically requires trained physicians [@deep_medicine_challenges]. Therefore, the requirement for large labeled datasets can become quite problematic and may limit the applications of deep learning in this field.
4
+
5
+ One approach to overcome this problem is to utilize radiological reports that are paired with medical images. Such reports are produced routinely in clinical practice and are typically written by medical experts (e.g. radiologists). They thus provide a valuable source of semantic information that is available at little additional cost. Rule-based Natural Language Processing (NLP) models like CheXpert [@chexpert] extract labels from these reports, allowing the automatic creation of large datasets, but they also have significant limitations. Most importantly, such approaches are typically limited to classification tasks. They generate overall labels for reports (and therefore the paired images), but relating these labels to specific image regions is nontrivial, so they cannot be used for localized tasks like semantic segmentation or object detection. Also, rule-based NLP models have to be manually created and cannot generalize to different classification tasks or even different report-writing styles [@chexpert].
6
+
7
+ Instead of using these reports to generate classification labels, the reports can be utilized directly in the pre-training method, as was first proposed in the ConVIRT method [@ConVIRT]. Here, the semantic information contained in the reports is used as weak supervision to pre-train image models that are then fine-tuned on labeled downstream tasks, where results can be improved or the number of labeled samples can be reduced. We argue that while this approach is quite promising, it is not designed for localized downstream tasks. For example, ConVIRT [@ConVIRT] only works on per-sample image representations and does not explicitly provide more localized representations that might be beneficial for localized tasks like semantic segmentation and object detection. In this work, we therefore study how pre-training methods perform on localized tasks and develop a novel pre-training method designed for such tasks.
8
+
9
+ Our contributions are as follows:
10
+
11
+ - We propose a local contrastive loss that allows aligning local representations of sentences or image regions while encouraging spatial smoothness and sensitivity.
12
+
13
+ - We split each report into sentences and each image into regions (i.e. patches), compute representations for sentences and regions and align them using an attention mechanism and our proposed local contrastive loss.
14
+
15
+ - We compute global (i.e. per-image and per-report) representations using attention-pooling on the region and sentence representations, and then use a global contrastive loss to align them.
16
+
17
+ - We propose ***Lo**calized representation learning from **V**ision and **T**ext (LoVT)*, a pre-training method that extends ConVIRT [@ConVIRT] using our proposed ideas and outperforms it on most localized downstream tasks.
18
+
19
+ - We propose a downstream evaluation framework with 18 localized tasks on chest X-rays, including object detection and semantic segmentation on five public datasets. To the best of our knowledge, this is the first localized evaluation framework for pre-training methods in medical imaging, and it allows comparing the performance of pre-training methods on localized tasks with medical data, specifically chest X-rays.[^1]
20
+
21
+ - We conduct a comparative study of pre-training methods trained on MIMIC-CXR [@MIMIC-CXR-2; @MIMIC-CXR; @MIMIC-CXR-JPG; @PhysioNet] and evaluated on our framework in more than 1200 evaluation runs. We find that while image-only self-supervised methods like BYOL [@BYOL] provide a strong baseline and outperform in-domain transfer from CheXpert [@chexpert] classification, self-supervised methods with paired text outperform image-only methods on most tasks and require only 30% of the pre-training samples to achieve similar results on many tasks. Our method LoVT proves to be the most successful method overall.
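The global alignment step mentioned above can be roughly sketched as an NT-Xent-style contrastive loss with cosine similarity and a temperature (one direction only, for brevity); the paper's exact loss, symmetrization, and hyperparameters may differ:

```python
import math

def cosine(u, v):
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return sum(a * b for a, b in zip(u, v)) / (nu * nv)

def global_contrastive_loss(image_reps, report_reps, tau=0.1):
    # Align each image with its paired report against all other reports
    # in the batch (image-to-report direction only).
    n = len(image_reps)
    total = 0.0
    for i in range(n):
        sims = [cosine(image_reps[i], r) / tau for r in report_reps]
        m = max(sims)
        log_z = m + math.log(sum(math.exp(s - m) for s in sims))
        total += log_z - sims[i]  # -log softmax of the matching pair
    return total / n

loss = global_contrastive_loss([[1.0, 0.0], [0.0, 1.0]],
                               [[0.9, 0.1], [0.1, 0.9]])
```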
22
+
23
+ # Method
24
+
25
+ <figure id="fig:report_example" data-latex-placement="t">
26
+ <figure>
27
+
28
+ </figure>
29
+ <figcaption>Example radiology report describing chest X-Rays. Taken from the MIMIC-CXR <span class="citation" data-cites="MIMIC-CXR-2 MIMIC-CXR PhysioNet"></span> dataset.</figcaption>
30
+ </figure>
31
+
32
+ <figure id="fig:architecture" data-latex-placement="t">
33
+ <embed src="figures/LoVT.pdf" />
34
+ <figcaption>Architecture of LoVT. Given an image <span class="math inline">$\xs$</span> and the related report <span class="math inline">$\xr$</span>, the encoders <span class="math inline">$\Es$</span> and <span class="math inline">$\Er$</span> compute image region and report sentence representations, respectively, which are projected using <span class="math inline">$\fs$</span> and <span class="math inline">$\fr$</span>. The alignment models <span class="math inline">$\Ars$</span> and <span class="math inline">$\Asr$</span> compute cross-modal report-to-image (<span class="math inline">$\zrs$</span>) and image-to-report (<span class="math inline">$\zsr$</span>) representations which are aligned with the uni-modal representations (<span class="math inline">$\zs$</span> and <span class="math inline">$\zr$</span>) using the local losses <span class="math inline">ℒ<sub>local-image</sub></span> and <span class="math inline">ℒ<sub>local-report</sub></span>, respectively. Global image (<span class="math inline">$\ygs$</span>) and report (<span class="math inline">$\ygr$</span>) representations are computed using attention pooling on the local representations, are then projected using <span class="math inline">$\fgs$</span> and <span class="math inline">$\fgr$</span> and aligned using the global loss <span class="math inline">ℒ<sub>global</sub></span>.</figcaption>
35
+ </figure>
36
+
37
+ As shown in [1](#fig:report_example){reference-type="ref+label" reference="fig:report_example"}, a radiology report is typically split into several sections, including a *Findings* section, describing related radiological images, and an *Assessment* section, interpreting the findings. As these sections describe medical aspects observed (*Findings*) in one or more related images and conclusions (*Assessment*) drawn from them, they provide supervision for identifying relevant patterns in the images and for interpreting these patterns. Both sections can be split into sentences, and each sentence typically describes one or a few aspects, most of which we assume to be related to one or a few very localized regions in a paired image. We randomly sample one of the images related to a given report and split it into $7 \times 7$ equally-sized regions. More precisely, we augment and resize the image to a size of $224 \times 224$, feed it into a convolutional neural network, and use the output feature map of size $7 \times 7$ as region representations. A language model encodes the tokens of the report as contextualized (i.e. considering their meaning in the whole report) vector representations, from which we compute sentence representations. A many-to-many alignment model is then used to compute *cross-modal representations* from *uni-modal representations*, i.e. image region representations from sentence representations and vice versa. We argue that by aligning cross-modal and uni-modal representations, the image region representations are encouraged to contain the high-level semantics present in the report.
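Under the stated sizes (a $224 \times 224$ image and a $7 \times 7$ grid), the region layout is simply a grid of 32-pixel cells:

```python
def region_boxes(image_size=224, grid=7):
    # Pixel boxes (x1, y1, x2, y2) of the 7x7 equally-sized regions whose
    # backbone feature-map cells serve as region representations.
    step = image_size // grid  # 32 pixels per cell
    return [(x * step, y * step, (x + 1) * step, (y + 1) * step)
            for y in range(grid) for x in range(grid)]

boxes = region_boxes()
```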
2203.06063/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2021-03-30T10:32:13.692Z" agent="5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.146 Safari/537.36" etag="GZkK_quaOKoAZuR9B82t" version="14.4.4" type="google"><diagram id="JFFbfoSidSZS3olN7yjf" name="Page-1">7V1Zd6JKEP418xgPdDfbIwhijLjvL/cgoKAIBnH99bc7LlEhE5Kg0WQ8czLQQAP1VVVXVVcXf2B2slIDfWprvmm5fwBlrv5A+Q8ANGQY/B9pWe9aBBptW4aBY+7aXhvqzsbaNVK71rljWrOTE0Pfd0Nnetpo+J5nGeFJmx4E/vL0tIHvnt51qg+tSEPd0N1oa9sxQ3vbygPutT1vOUN7f2eaFbZHJvr+5N2bzGzd9JdHTVD5A7OB74fbrckqa7mEenu6bK/LvXH08GCB5YVJLpjVskO9vBK4Ts4UgmEACyPngebAtp+F7s53r7x73HC9p0Hgzz3TIt3Qf6C0tJ3Qqk91gxxdYthxmx1O3N3hw2tSeGfo6rPZbnsWBv74QET8+tLAcd2s7/rBy23gYDAAhoHbo2+2e9mFFYTW6qhp96aq5U+sMFjjU3ZHIeAzwg6JHes9cLv95SuOgNn1bJ9gyGXoHd/qO/YZHu7wSmG8sSPyRwhOsx8hOJWA4O/R1TAsZjA4nHl0BLJQgGZKFEfcOcU5KsPyEZqzfAYwUapD6kIUF6j3CY6Zdko2jbXrYMoH8H2y97cYFfuHBt0YD1+QK89D3I21x2er0zBH/Q2BiCwYBqZmKsgAgE5gAZCJgCLEIMJfCpE4CWDdkFDBx295jAz7PPf3Bx62lBTxCTSarl4P4q0h+b++noXWhBze94cfb9vl9oQI7piCYZw07ZHwfALjCTi7Jt11hh7hGIyLhdslgoeDxw1xd2DimCa5TSwfnQp4GtJHcRnmBGUEmcx+lDoCmqaoKNLgUkhzl0Ya/D6kBXCLSPOXRvrp1yGNBCqDhNuDWviZFiM6lyuAdWrMUEkLMcSm2UtRm44zXz4oWfBUsqjBForXtpYVmLqnR8VPnlvYlBniayTdMx1yO9Ed+oET2pNfJ5AQUacMQr/BIDHG1MWkEaH3xfEdAdRn063rPHBWhF5RM7XPM4ihYgSQN6y0BBBR9MEx2BMYxRM4hr4Xc9gQ8430zfFiTsimRF9wm/RN4A9fkL4Ko6CU6MtSN0nfOBv8LvUDuk3+jbN875K+FIjQl70B+iYwN++DvjH69wboyyQIj11S/yI5dzn9cAv0pb+VvoIkcJcb326BvuA76Ztj8e9y+pe/AfrCb+VfkRXTss9i9O8t0PfH+G8x+vcW6Pu9/hsrobTGtxj9ewv0/V7/LUX6RvUvfCMCeFX6fqf/hu0HSkgrnB3VvzdB3x/jv0X1703Q98f4b1H9ewv0ZVOYHwCfnh9QZqEz0UOMCqDGqzH+WwmsgYXJjHsAlKaHgbP6fRMF3PlsPITxnAKuOXHHJhBFyzNFkjZHiEmm4hwjDpHjOTl8Rc5x98ffpJ5lniTaRWl3RBYmhir7tsBy9dBZWCedx5Fqd4eK77xIwQEa9nxSFVDsaS8zfx4Y1u7CV5on6Yvmz54o1IOhFUb6esHv8PJfSLCIiy6kOu/u/D7pjeYO0lj3U8c/GBXl6ybWJIh6/HRRRkIEJ0BxnxPlmL7o8/y3S4tyXKAlVVEe/TpRRghm+FNUGSaDR90bkuS4+MTXYP+A/ba/VX/f0JodMUn//LTfwzgAZlhwyjmQcE6EWwCbiTH3L8cul0qejXBCJpP5xwqEFaBwbszjwYL+7iw8Ls6Yv1peWMOakVvIeqjPrPDX8QRA7HmyIBfr4MWZP5fLwf3ASgfstRMj7kDHot633Io/
c0LHJ/Ts+2HoT/AJLjkgHdY2nGZfDl7WlZxjEfrTuKjNyy3FfSu1b8HbJmYjzJLbXZCbkoTDrNOSyrUl9aQOfRH/SvWmrTSHeKus4D+ynRU10l7o1wuPZKNPS1pL6RASvvzjcZu0NrxCoIPQ7bU7+RVv+rpq2912XttomyVfkWlh0O5PmHl3UnD4RXFdGnXhCP0BUklvu35fXc3K8pABptfL1ygDljaWPETl1ayvulO9JVYV8TEr2lpDWTqPTtFny6MGJZRBGHSDp/XChCbI28hY4RPaObtfGxYLBfxYChi7SrVVQ14ZmAD2nbIk5QrG2hTFXrY4adTH/AZjmcO2SW40y0/5plXCbNN+Zn2Z41SrXR8t1LKD7Lo4rDPTBT9/xI+sLh7nE5+HoFv2QGXFF70Svn69NrnFRgo1Ix+ule5qlbULRpnwlwmaolKAbFZrIFkq8YLULhu4n9ViPhDhiHYXy0J9YgVCt2FavUUFt7sBm8V8mvPrDR8/nDSfl4vYQZQcQx003U5u0prLde2RDZ8W6+piLTyHaEnLNL/qukq7Yo5aT0sk6FRD2TwH0gAITV9uDgr+XO+o7U5tYSnZ1uq5XMV3KFAc0+1N2MKw7zqj0vNmgW+zcDotoDcHdKdYw7tFRSs1IHoO7VWvVmvW+U1QUPSCjw89NqYepjd5NqttqXOQR2Az3uCORSgPgTARxZXccmYsp1C2XKpoM0wrqT+0h/MR03YGPV7SmsvNkisu5U1W9aZjSfMamlyo2UJTmnn2eFnlBtpSmqgrPlj2Sq4niqBj9gR7UnPFJZcNIWupg2q/rmlLQXuutIxCuxQ4pYBTnoHq9zybX/ml3kbVGdeUw9acLSi9oNvvMUaNo1YyM2rn6sykTpVQEzTkoqyr/FN/avP9ydOmtGzCksZrCzXPKoO6vHAktjjAL9DWKayEJAXNETNFVIFSzelEWtR700mh/cQGUO2btppDAaFxR5tLedlugYFRL6uoT0jgrafe6pnqTqcTG+ADo6dcrxeuy93BKF8YabysDgRCr2Z5uamIm6I6BQFTxBe2UGcgsFNdZQxzOmdRpxgwHccXnukGpT0TZg43LNNpT+qqDZSW9Dgou+ueZ/Wb6yWvLVvFsmUWEVVu1ceuNJ7bXnsB+rRaGA+ezUZRXnWweui3QC8YdgthWOJNkeEGQS6/mgo1b5avlGGP7zJsI1thqnJDbeI7chw/KI43WKGAPN/xecEerue4ffyk1dQWGawcM4DlMbcCMqx6tdlcAnw3h12lnFkVAEStDkRTqmcv3EAATtkOpoFu5JS5Jsz0xXCp4hON0ZjvEJHIjzesPV0TocjZWbVJWVpZs4cF33lUVWXiPA673br+6Ei2vWtbMMOlgEy+ZPVKTaY2r9nWgGVKjXmpuM6yKs0IJbdU0r3HLjUt50W5S3SjO8TaKSez+WpTfCxpplQVh9SKyolZuXHaBtVxHqoratbtqvMVZjxKGslVGvgBRkMCzZashsXlaKMZpFufJd2ulepYK5S0mlhdjqmVIsnZRkOpKkdt+ayX72eHpT7X6XRXbUvRu/aoPkYtsWRjm0hqyyO+4v/luXZtRrtqGWuiRPBQK4UM0Ewd6rqDFIo82VPVdoC6pIWi9aLOpfnAGYz6nDnNPrbj6XloG0nTJdZwkuDhP0FHUJxgnVWXRPo7pcV642AF+1iFK3tWGtTpQW/9ONPGyKzqWdKb7T/h3lQ87ODeuidtcrUk863uqNHt1wNPMnJiV6fLmlufa9OmaW3U55FW6hU1s1cW8Gidm/db3tyhJ8KTuAbFmf6koI4UPg0nhQY+2gJqrSYv9GetMGyvYKGeRbC2DkyeqxDmfTb1po4qdeTKliMrQFjUxODJ7+nmWmW4kG0tKTJQ0dUe3w5nLEv7gdleLPyRF/Id2fIL44Xx3ACWbSxHZc4qLGah2JDL9LNrdwx6uZEkIgvLOt2VLFMwN7V1ZaZwXS8sibkpC+ZIHm+8UCqTh8mN
+V6dr3SqHXP+NBwMc8WXUYcQ2Rpm81yXY2sDq0JVmMdmS2D6rjw2yZgsKm6uMa7Pq5NsSlnMYD/fvPfKaDoD2YjdhZhDOOfY9DpqTt/6SpAo815IztRn9mHZzL3H5wB3tlzifIlv0uDceUfXjszxKSxuA7GROX0yxdfdlfcU+KG+8xEe0koNwPSJRnKpb3awhbgQ+9cc7C2wj950Ti7P4pMJpHcFfxpo38H8ipAg6+Z3KfM9RY6iofCz86Zn6+hoeJYccWF9jm2GOw2U2GFI6t6I2wiHY/jeg+v0Az1YZwyf+C0vJ85IvJY1MF1NJBj/0ZQgcCBDoiqpzY+eBUT5qLLe51Wd8CKd2Y+lFyiskiCR60hkd7rxr/Ohty6UkGEzLH0CBf9ac+XDc6AUl6C3i8vmx5JUfgSMePQ7I7zwFRj5BL1dGkaQIBZ9j2WOEHteJuA2qhzRIMGY9sPkBnH0YUb26+qPASBBbxeXmziv46fDyJwT/gvqjwEwQW8XhzHBqh7cjzOdveV2fSj/ees0RdCno1pSAYokpVVRgIokSQixyvCqWa40SCGjJjZc8wtTIQU+gjCdgYxw9IugDWPQpnE/FyupCJI46/cubGR4ioEiGvq+srBdKjb6+5IVGZq7A2GDCQz7FIXtYLu/K2ykZObLWh/TCXCn2+iMpc/ClFQh4s+T/R9gBoFjdEBUGmkYgw+8GDhJnICfCA7P3AE4SUz7HwgOooU7ACfJ7O0PBAeyIAIOos7Q+W5srutM3Qw2iIqsL7s9bBKUV/iJ2EAeRgvM3xo2CUozfCCWRFqslRN2iMmbYXZ73Z0BTLbl1fHOer/j4bc5uojsdvf9kZ3Xy172Xq/bT5D+1cDeBoaSmKzboE+SUeBGImMMiFmsKmQY7mim/MzQTxwmi1m7yvy960vHzODH5twTsWsyFroRuCGT3Ir98KwQDSN9A+6sc3hlwNOdsb1f/UTfqX6CLDofBNlTFUKjT+onyEWKnz8A5pv5Nd2p6d/Ar+im+BVF8wxT41c+aquDc3vwyvyKkoTqfgO/gn/8GukagIj/Qn+zPYDSTWH4PL8e891b3Eu9w70p8OutcCFkLqY12Who8LutUgTS58JXHoKA/rwO3HMz/THdW7ECB5OFTHF9mTVhYlXK3BQTx9RqeiuW9nFTNfqtwW9XpR9bjPRBJv7CMH5QyIlZ+Cvciu6VW3mYYXiWFhDFcRwNmNOUwweSgQ1eD7Of1r+kkgULziXgICDnXHxlJv6lcX/IR9zqm4v7J/kszq/wLRLHltFt+RYXjd3c4IB4gbmQ+wouC/TFgstxfX+7GX+B2YR/zuRNO5O3yIUXmOJIxFHvDoCf4OUvcCG6V0uchODQZfg1putvHyZjv1Z0nM57WgTwaym+CrnLtpgAoDTftNw3c353tQmDX1uFEEKYOfuO+34V87EPQKO4apT0YcFu+o5Aku8vfbwS8c1bU1xkYQQrfHrNUkxvCODejrXMlfVACsVs7g9VJrqg6QuoxvSG6G9FlU3gtO/rHuhGSNRhsmW53746BnFsBsDTsZRYmuxfQyakom9M1GQ/b5q+smQTeKH3CQBZdnkPALy3XClN+yY/n+jEtBE9jxRN8oPZP/vmTQFG6LxS+2lFICqm4DKKW2F/QVuHiwtIp1ESSrU8K9h9RqeBIX2bUX4q/pAXzuMCmCNY5pgDYtaIXLUoFB1Tqz/rT6bzkNSoFGfk80cv5YF+MEwCF017xz5rTL0uGCOcF4Qmhbr4saIZqYvvB//K4hPpjFZxIwmaQlRNo+vKaLpxX3AUKztJuzjEzZLEfU9Kwb1J/PfDaPvP/tyKv7Sf/dovThDOukjqKrHUaUcP5/xxac+IS7942Gc44EZwfReOz+IKhGvjmn41sZS1wQyTIIzGUV6ak9SHfFdpCDfFWgx/Xj/rAaAMxaFXL5GBKTHb0deRr8VvPHif3/5VlnzTukQR65KmmAw6mW6J2BdXrzSZpNL3P5DfNBxZ
LJZnGHMZjjqKE0UrSDFxUSLsd/xF/3wR4wS5Z/8wfhNjio18K/kWasTyHwiA/0M1WqAInU9o3AaqUd//+CvU57HVnGWZfQxXBPkf45sziD1TsZACcZ8nu65rzse55lf7ZN2+7H6pqH4g3n7yeB//ruZs+jID8MnY0/bLrcfB4G1/vzTkhPk1s09Q2ceIY74UwIPYb798gq/xbuATRF59BPxq9jZVBSr/Aw==</diagram></mxfile>
2203.06063/main_diagram/main_diagram.pdf ADDED
Binary file (57.7 kB). View file
 
2203.06063/paper_text/intro_method.md ADDED
@@ -0,0 +1,86 @@
1
+ # Introduction
2
+
3
+ In the last few years, the field of NLG has made rapid progress with the advent of large-scale models trained on massive amounts of data. However, evaluation of NLG systems continues to be a challenge. On the one hand, we have automatic evaluation metrics, which are easy to compute but unreliable. In particular, many studies have shown that they do not correlate well with human judgments. On the other hand, we have human evaluations, which are relatively more reliable but tedious, expensive, and time-consuming. Further, recent studies have highlighted some limitations of human evaluations that involve direct assessment on an absolute scale, e.g., a Likert scale. Specifically, human evaluations using direct assessment have been shown to suffer from annotator bias, high variance, and sequence effects, where the annotation of one item is influenced by preceding items.
4
+
5
+ In this work, we focus on reducing the cost and time required for human evaluations without compromising reliability. We take motivation from studies showing that selecting the better of two options is much easier for human annotators than providing an absolute score, which requires annotators to maintain a consistent standard across samples. In particular, recent work shows that ranking NLG systems using pairwise comparisons is a more reliable alternative to direct assessment. While this is promising, a naive approach for identifying the top-ranked system from a set of $k$ systems using uniform exploration is prohibitively expensive. Specifically, uniform exploration obtains an equal number of annotations for all the ${k \choose 2}$ system pairs; as a result, the required number of human annotations grows as $O(k^2)$.
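The quadratic cost of uniform exploration is easy to see in code (`n_per_pair` is an arbitrary illustrative per-pair budget, not a value from the paper):

```python
from itertools import combinations

def uniform_annotation_cost(k, n_per_pair):
    # Uniform exploration annotates every unordered system pair equally,
    # so the total number of human annotations grows as O(k^2).
    return len(list(combinations(range(k), 2))) * n_per_pair

cost_10 = uniform_annotation_cost(10, 100)  # C(10,2) = 45 pairs
cost_20 = uniform_annotation_cost(20, 100)  # C(20,2) = 190 pairs
```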
6
+
7
+ To reduce the number of pairwise annotations, we introduce Active Evaluation, a framework to efficiently identify the top-ranked NLG system. Our Active Evaluation framework consists of a learner that selects a pair of systems to compare at each time step. The learner then receives a feedback signal indicating the (human) preference between the selected systems on one input context, randomly sampled from the test dataset. The learner's objective is to reliably compute the top-ranked system with as few human annotations as possible. We adopt algorithms from the stochastic dueling bandits literature to decide which pair of NLG systems to compare at each time step. To check whether existing dueling-bandit algorithms can indeed provide reliable top-rank estimates with minimal annotations, we evaluate 13 such algorithms on 13 NLG evaluation datasets spanning five tasks, viz. machine translation, summarization, data-to-text generation, paraphrase generation, and grammatical error correction. We show that the best-performing dueling-bandit algorithm can reduce the number of human annotations by 80% compared to uniform exploration.
8
+
9
+ To further reduce human annotations, we leverage automatic evaluation metrics in our Active Evaluation framework. We utilize existing automatic metrics such as BLEU, BertScore, etc., for pairwise evaluations by converting the direct evaluation scores into preference probabilities using pairwise probability models. We also develop trained pairwise metrics that directly predict the comparison outcome given pairs of generated texts and the context or reference as input. To incorporate such evaluation metrics in our Active Evaluation framework, we propose three model-based dueling-bandit algorithms, viz. (i) Random Mixing: human annotations and evaluation-metric predictions are randomly mixed; (ii) Uncertainty-aware Selection: human annotations are obtained only when the predictions from the evaluation metric are highly uncertain; (iii) UCB Elimination: poorly performing NLG systems are eliminated using an Upper Confidence Bound (UCB) on the evaluation-metric scores. Through our experiments, we show that the number of human annotations can be further reduced by 89% on average (this reduction is over and above the 80% reduction we obtained earlier). In effect, we show that given $k$ systems, we can find the top-ranked NLG system efficiently with just a few hundred comparisons that vary as $O(k)$. Lastly, we provide practical recommendations to efficiently identify the top-ranked NLG system, based on our empirical study of various design choices and hyperparameters.
10
+
11
+ # Method
12
+
13
+ We introduce the problem and our Active Evaluation setup in section . Later in section , we describe the different approaches to decide which pairs of NLG systems to compare at each time step. Finally, in section , we formalize the notion of top-ranked system.
14
+
15
+ We consider the problem of finding the top-ranked NLG system from a given set of $k$ systems, denoted by $\mathcal{S} = \{1, 2, \dots, k\}$. Our Active Evaluation framework consists of a learner which, at each time step $t$, chooses a pair of systems $s^{(t)}_1, s^{(t)}_2 \in \mathcal{S}$ for comparison. Then, we ask human annotators to compare the outputs of the chosen systems on a randomly sampled input context and provide the comparison outcome as feedback to the learner. Specifically, we first sample an input context $X^{(t)}$ from the test dataset and obtain the generated texts ${Y}^{(t)}_{1}, {Y}^{(t)}_{2}$ from the chosen systems $s^{(t)}_1, s^{(t)}_2$. We then display the generated texts ${Y}^{(t)}_{1}, {Y}^{(t)}_{2}$ along with the context $X^{(t)}$ to human annotators and obtain a comparison outcome $w^{(t)} = 1, 0$, or $0.5$ denoting whether ${Y}^{(t)}_{1}$ is of better, worse, or equal (tie) quality compared to ${Y}^{(t)}_{2}$. Note that the feedback $w^{(t)}$ indicates the preference on only one input sample and not the entire test dataset. The overall framework is depicted in figure . The learner's objective is to find the top-ranked system with as few pairwise comparisons as possible.
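As an illustration, this interaction loop can be simulated end-to-end with a small script. This is a sketch, not the paper's implementation: `true_pref` is an assumed ground-truth preference matrix standing in for human annotators, and the learner here is the naive uniform-exploration baseline.

```python
import random

def simulate_feedback(i, j, true_pref, rng):
    # Stand-in for a human annotator: returns 1.0 if system i's output is
    # preferred over system j's on one randomly sampled context, else 0.0.
    return 1.0 if rng.random() < true_pref[i][j] else 0.0

def uniform_explorer(k, rng):
    # Naive learner: picks one of the C(k,2) pairs uniformly at random.
    i, j = rng.sample(range(k), 2)
    return i, j

def run_active_evaluation(true_pref, num_steps, rng):
    k = len(true_pref)
    wins = [[0.0] * k for _ in range(k)]
    counts = [[0] * k for _ in range(k)]
    for _ in range(num_steps):
        i, j = uniform_explorer(k, rng)
        w = simulate_feedback(i, j, true_pref, rng)
        wins[i][j] += w
        wins[j][i] += 1.0 - w
        counts[i][j] += 1
        counts[j][i] += 1
    # Estimate p_ij empirically and return the system that beats the most rivals.
    scores = [sum(wins[i][j] / max(counts[i][j], 1) > 0.5
                  for j in range(k) if j != i) for i in range(k)]
    return max(range(k), key=lambda i: scores[i])

rng = random.Random(0)
# Assumed preference matrix: system 0 beats systems 1 and 2.
true_pref = [[0.5, 0.7, 0.8], [0.3, 0.5, 0.6], [0.2, 0.4, 0.5]]
best = run_active_evaluation(true_pref, 2000, rng)
```

A dueling bandit algorithm would replace `uniform_explorer` with a selection rule that depends on the observed win/loss history.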
16
+
17
+ \centering
18
+ \includegraphics[width=0.7\linewidth]{figures/ARR_Dueling-NLG.png}
19
+ \caption{Our Active Evaluation framework consisting of a learner that chooses a pair of systems to compare at each time step. The learner receives feedback from either human annotators or the automatic metric.}
20
+
21
+ The learner should decide the pair of systems $(s^{(t)}_1, s^{(t)}_2)$ to compare at each time step $t$. The naive approach is to uniformly explore all the ${k \choose 2}$ system pairs. Specifically, the probability of selecting a pair $(i,j), i\ne j$ at time $t$ is given by
22
+
23
+ P_{uniform}((s^{(t)}_1, s^{(t)}_2) = (i,j)) = \frac{1}{{k \choose 2}}
24
+
25
+ However, as we show in our experiments, this approach is very expensive: the number of human annotations required to find the top-ranked system grows quadratically with the number of systems, since we explore all ${k \choose 2}$ pairs equally. To reduce the number of annotations, we use dueling bandit algorithms to actively choose pairs of systems to compare based on the history of previous observations. We provide an overview of 13 dueling bandit algorithms proposed in the literature in appendix . We refer the readers to for a complete survey.
26
+
27
+ We now formalize the notion of the top-ranked system. Let $p_{ij}$ denote the preference probability of system $i$ over system $j$ i.e. the probability that a generated text from system $i$ is preferred over system $j$ in the test dataset. We say that a system $i$ "beats" system $j$ if $p_{ij} > \frac{1}{2}$. In other words, system $i$ beats system $j$ if the probability of winning in a pairwise comparison is larger for $i$ than it is for $j$. We define the top-ranked system $i^*$ as the one that beats all other systems, i.e. $p_{i^*j} > \frac{1}{2}, \forall j \in \mathcal{S} - i^*$.
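This definition can be sketched directly, assuming the full preference matrix `p` is known (in practice it must be estimated from pairwise annotations):

```python
def top_ranked(p):
    # p[i][j] is the preference probability of system i over system j.
    # The top-ranked system i* beats every other system: p[i*][j] > 1/2 for all j != i*.
    k = len(p)
    for i in range(k):
        if all(p[i][j] > 0.5 for j in range(k) if j != i):
            return i
    return None  # no single system beats all others (no Condorcet winner)
```

Note that a top-ranked system need not exist when preferences are cyclic (e.g., a rock-paper-scissors relation among three systems), in which case this sketch returns `None`.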
28
+
29
+ Our Active Evaluation framework, which we described in the previous section, completely relied on human annotators to compare pairs of generated texts $({Y}_{1}, {Y}_{2})$ to provide the preference feedback $w$. We can further reduce the number of required human annotations by estimating the human preference feedback using automatic evaluation metrics. However, most existing evaluation metrics are designed for direct assessment and not directly suitable for pairwise evaluations. In this section, we describe three pairwise probability models to convert direct evaluation scores into pairwise preference probabilities. Let $f({Y})$ denote the score provided by a direct assessment metric $f$ to a generated text ${Y}$ (The dependence of $f$ on the reference/context is omitted for brevity). The pairwise preference probability $\hat{p}({Y}_{1} \succ {Y}_{2})$ between any two hypotheses ${Y}_{1}$ and ${Y}_{2}$ can be modeled in 3 different ways:
30
+
31
+ - Linear:
32
+ $$\hat{p}({Y}_{1} \succ {Y}_{2}) = \frac{1}{2} + (f({Y}_{1})- f({Y}_{2}))$$
33
+ - Bradley-Terry-Luce (BTL) :
34
+ $$\hat{p}({Y}_{1} \succ {Y}_{2}) = \frac{f({Y}_{1})}{f({Y}_{1}) + f({Y}_{2})}$$
35
+ - BTL-logistic: $$\hat{p}({Y}_{1} \succ {Y}_{2}) = \frac{1}{1 + e^{-(f(Y_1) - f(Y_2))}}$$
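The three probability models can be sketched as plain functions of the two scores. The logistic variant is written so that a higher score for $Y_1$ yields a higher preference probability; the score preprocessing described in the appendix (bounding differences for the linear model, positivity for BTL) is assumed to have been applied:

```python
import math

def linear_prob(f1, f2):
    # Linear model; assumes scores are preprocessed so that
    # f1 - f2 lies in [-1/2, 1/2], keeping the result in [0, 1].
    return 0.5 + (f1 - f2)

def btl_prob(f1, f2):
    # Bradley-Terry-Luce model; assumes preprocessed positive scores.
    return f1 / (f1 + f2)

def btl_logistic_prob(f1, f2):
    # Logistic link: higher f1 relative to f2 gives a higher preference probability.
    return 1.0 / (1.0 + math.exp(-(f1 - f2)))
```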
36
+
37
+ As detailed in appendix , we appropriately preprocess the scores $f(Y)$ to ensure that preference probability lies between 0 and 1. We can now predict the comparison outcome $w$ by thresholding the preference probability at two thresholds $\tau_1$ and $\tau_2 (\geq \tau_1)$ to incorporate ties i.e.:
38
+ \[
39
+ \hat{w}=
40
+
41
+ 1, & \text{if } \hat{p}(Y_1 \succ Y_2) > \tau_2\\
42
+ 0, & \text{if } \hat{p}(Y_1 \succ Y_2) < \tau_1\\
43
+ 0.5, & \text{Otherwise}
44
+
45
+ \]
46
+ We choose $\tau_1$ and $\tau_2$ using grid search on the validation set. Refer to the appendix for more details.
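The thresholding rule is a three-way decision; a direct transcription (with $\tau_1 \le \tau_2$ assumed to come from the grid search):

```python
def predict_outcome(p_hat, tau1, tau2):
    # Maps an estimated preference probability to a comparison outcome:
    # 1.0 (Y1 better), 0.0 (Y2 better), or 0.5 (tie). Requires tau1 <= tau2.
    if p_hat > tau2:
        return 1.0
    if p_hat < tau1:
        return 0.0
    return 0.5
```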
47
+
48
+ In the previous section, we discussed pairwise probability models to obtain the estimated preference probability $\hat{p}(Y_1 \succ Y_2)$ and the comparison outcome $\hat{w}$ using scores assigned by direct assessment metrics. We now propose three model-based dueling bandit algorithms wherein we combine such predictions from evaluation metrics with human annotations in the Active Evaluation framework.
49
+
50
+ Here, we randomly provide either the real (human) or the evaluation metric predicted feedback to the learner. Specifically, at any time $t$, we use the predicted comparison outcome $\hat{w}^{(t)}$ as the feedback with probability $p_{m}$ and use human annotations ${w}^{(t)}$ as feedback with probability $1-p_{m}$. The hyperparameter $p_{m}$ controls the ratio of estimated and real feedback given to the learner. As with other hyperparameters, we tune $p_{m}$ on the validation set.
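Random Mixing reduces to a single branching step per comparison; a minimal sketch, where `human_w` and `metric_w` stand for the two feedback sources:

```python
import random

def mixed_feedback(human_w, metric_w, p_m, rng):
    # With probability p_m, return the metric-predicted outcome;
    # otherwise fall back to the (costly) human annotation.
    return metric_w if rng.random() < p_m else human_w
```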
51
+
52
+ In this algorithm, we estimate uncertainty in the evaluation metric predictions and decide to ask for human annotations only when the evaluation metric is highly uncertain. We specifically focus on trainable neural evaluation metrics such as Bleurt where we estimate the prediction uncertainty using recent advances in Bayesian deep learning. Let $\hat{p}(Y_1 \succ Y_2 | \theta)$ denote the preference probability modelled by a neural evaluation metric with parameters $\theta$. Given a training dataset $\mathcal{D}^{tr}$, Bayesian inference involves computing the posterior distribution $p(\theta | \mathcal{D}^{tr})$ and marginalization over the parameters $\theta$:
53
+
54
+ \hat{p}(Y_1 \succ Y_2 | \mathcal{D}^{tr}) = \int_{\theta} \hat{p}(Y_1 \succ Y_2 | \theta)\, p(\theta | \mathcal{D}^{tr})\, d\theta
55
+
56
+ However, computing the true posterior and averaging over all possible parameters is intractable in practice. Hence, several approximations have been proposed in variational inference, such as finding a surrogate distribution $q_{\phi}(\theta)$ for the true posterior. It has been shown that we can use the Dropout distribution as the approximate posterior $q_{\phi}(\theta)$, i.e., we can perform approximate Bayesian inference by applying Dropout during test time. Hence, the predictive probability can be approximated with Monte Carlo samples as follows:
57
+
58
+ \hat{p}(Y_1 \succ Y_2 | \mathcal{D}^{tr}) \approx \frac{1}{L} \sum_{l=1}^{L} \hat{p}(Y_1 \succ Y_2 | \theta_{l})
59
+
60
+ where $\{\theta_{l}\}_{l=1}^{L}$ are $L$ samples from the Dropout distribution $q_{\phi}(\theta)$ (i.e. we apply Dropout $L$ times independently during testing). We now discuss two different Bayesian uncertainty measures:
61
+ {\flushleft BALD:} The Bayesian Active Learning by Disagreement (BALD) is defined as the mutual information between the model predictions and the model posterior. Let $p_l = \hat{p}(Y_1 \succ Y_2 | \theta_{l})$, where $\theta_l \sim q_{\phi}(\theta)$, be the evaluation metric prediction using the $l^{th}$ sample $\theta_l$ from the Dropout distribution. Also, let $\bar{p} = \frac{1}{L} \sum_{l=1}^{L} p_{l}$ be the mean prediction. As shown in , we can approximate the BALD measure using samples from the Dropout distribution as:
62
+
63
+ \hat{\mathbb{I}} = \mathbb{H}(\bar{p}) - \frac{1}{L} \sum_{l=1}^{L} \mathbb{H}(p_l)
64
+
65
+ where $\mathbb{H}$ is the binary entropy function. The BALD uncertainty score is essentially the difference between the entropy of the mean prediction $\bar{p}$ and the average entropy of the individual predictions $\{p_{l}\}_{l=1}^{L}$. Hence, the BALD uncertainty score is high when the metric's mean prediction is uncertain (high entropy) but the individual predictions are highly confident (low entropy), i.e., when the metric produces disagreeing predictions with high confidence.
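A minimal sketch of the BALD computation from Monte-Carlo dropout samples, using the binary entropy of scalar preference probabilities (the example prediction lists below are illustrative, not real metric outputs):

```python
import math

def binary_entropy(p, eps=1e-12):
    # H(p) = -p log p - (1-p) log(1-p), clamped away from 0 and 1.
    p = min(max(p, eps), 1.0 - eps)
    return -p * math.log(p) - (1.0 - p) * math.log(1.0 - p)

def bald_score(preds):
    # preds: L Monte-Carlo predictions p_l from independent dropout passes.
    # BALD = entropy of the mean prediction minus mean entropy of predictions.
    mean_p = sum(preds) / len(preds)
    return binary_entropy(mean_p) - sum(binary_entropy(p) for p in preds) / len(preds)

confident_agree = [0.95, 0.96, 0.94]           # consistent and confident: low BALD
confident_disagree = [0.95, 0.05, 0.95, 0.05]  # confident but conflicting: high BALD
```

In the uncertainty-aware selection algorithm, a human annotation would be requested only when `bald_score` (or the empirical standard deviation of `preds`) exceeds a threshold.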
66
+
67
+ {\flushleft STD:} We also adopt the standard deviation of the preference probability taken over the posterior distribution as a measure of uncertainty:
68
+
69
+ \sigma = \sqrt{\mbox{Var}_{\theta \sim \hat{p}(\theta | \mathcal{D}^{tr})} (\hat{p}(Y_1 \succ Y_2 | \theta))}
70
+
71
+ Similar to BALD, we can approximate the above measure using the empirical standard deviation of samples drawn from the dropout distribution.
72
+
73
+ Our proposed algorithm asks for human annotations only if the uncertainty measure (BALD or STD) is above a particular threshold.
74
+
75
+ The key idea here is to eliminate a set of "poorly performing" NLG systems using the automatic metric and perform human evaluations with the remaining set of systems. To eliminate sub-optimal systems, we first need to quantify a performance measure for the systems. We use the Copeland score which is defined as the normalized total number of pairwise wins for a system: $C_{i} = \frac{1}{k-1} \sum_{j \ne i} \mathbbm{1}({p}_{ij} > \frac{1}{2})$. Copeland score is the highest for the top-ranked system with a value of 1 and it is less than 1 for all other systems. To estimate the Copeland score, we first predict the pairwise preference probability between any two systems $i$ and $j$ as follows:
76
+
77
+ \hat{p}_{ij} = \frac{1}{N} \sum_{Y_1, Y_2 \in \mathcal{D}_{ij}} \hat{p}({Y}_{1} \succ {Y}_{2}| \theta)
78
+
79
+ where $\mathcal{D}_{ij}$ is the test dataset consisting of generated texts from systems $i$ and $j$, $N$ is the total number of test examples, and $\theta$ denotes the learned model parameters. We can now estimate the Copeland score $\hat{C}_{i}$ using the estimated preference $\hat{p}_{ij}$ and eliminate all systems with Copeland scores below a threshold. However, a major problem with this approach is that evaluation metrics are often inaccurate, and we could wrongly eliminate the true top-ranked system without performing any human evaluations. For example, consider the case where $i^*$ is the top-ranked system with $p_{i^{*}j} > 0.51, \forall j \in \mathcal{S} - i^{*}$. If several of the predicted probabilities $\hat{p}_{i^{*}j}$ are less than $0.5$, the top-ranked system $i^{*}$ will receive a low estimated Copeland score and will be incorrectly eliminated. To overcome this problem, we define an Upper Confidence Bound (UCB) on the preference probability using the uncertainty estimates that we described in . Specifically, the upper confidence bound $\hat{u}_{ij}$ is given by $\hat{u}_{ij} = \hat{p}_{ij} + \alpha \hat{\sigma}_{ij}$, where $\alpha$ is a hyperparameter that controls the size of the confidence region and $\hat{\sigma}_{ij}^{2}$ is the estimated variance given by:
81
+
82
+ \hat{\sigma}_{ij}^{2} = \frac{1}{N^2} \sum_{Y_1, Y_2 \in \mathcal{D}_{ij}} \mbox{Var}_{\theta \sim q_{\phi}(\theta)}\,\hat{p}(Y_1 \succ Y_2 | \theta)
85
+
86
+ where $q_{\phi}(\theta)$ is the Dropout distribution. Using the upper confidence estimates $\hat{u}_{ij}$, we now define the optimistic Copeland score for a system $i$ as $\hat{C}_{i}^{u} = \frac{1}{k-1} \sum_{j \ne i} \mathbbm{1}(\hat{u}_{ij} > \frac{1}{2})$. Here, we consider a system $i$ to beat another system $j$ ($\hat{u}_{ij} > 0.5$) if either the estimated preference $\hat{p}_{ij}$ is high or the uncertainty $\hat{\sigma}_{ij}$ in the estimate is high. In UCB Elimination, we eliminate a system only if its optimistic Copeland score is below a threshold.
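A sketch of UCB Elimination over estimated preference matrices; the `p_hat` and `sigma_hat` inputs are assumed to come from the metric and its dropout-based variance estimate:

```python
def optimistic_copeland(p_hat, sigma_hat, alpha):
    # p_hat[i][j]: metric-estimated preference probabilities;
    # sigma_hat[i][j]: their estimated standard deviations.
    # Optimism: count a win whenever the upper confidence bound exceeds 1/2.
    k = len(p_hat)
    scores = []
    for i in range(k):
        u = [p_hat[i][j] + alpha * sigma_hat[i][j] for j in range(k)]
        scores.append(sum(u[j] > 0.5 for j in range(k) if j != i) / (k - 1))
    return scores

def ucb_eliminate(p_hat, sigma_hat, alpha, threshold):
    # Keep only systems whose optimistic Copeland score clears the threshold.
    scores = optimistic_copeland(p_hat, sigma_hat, alpha)
    return [i for i, c in enumerate(scores) if c >= threshold]
```

In the example below, the metric slightly underestimates one pair ($\hat{p}_{01} = 0.48$), but the uncertainty bonus keeps system 0 in contention while a clearly poor system is eliminated.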
2203.16910/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2203.16910/paper_text/intro_method.md ADDED
@@ -0,0 +1,169 @@
1
+ # Introduction
2
+
3
+ Trajectory prediction has gained increasing attention due to its emerging applications such as robot navigation and self-driving cars. Due to the inherent multimodal uncertainty from an agent's intention or environment, a large number of works have been proposed to learn a multimodal distribution of the future trajectories. For example, in [20, 26], the multimodal distribution is explicitly modeled using the Gaussian mixture model, though it is hard to
4
+
5
+ ![](_page_0_Figure_10.jpeg)
6
+
7
+ Figure 1. Illustration of trajectory prediction distributions of P2T and our method on Stanford Drone Dataset. Although the prediction of P2T is more diverse, it predicts many infeasible outcomes (e.g., those trajectories intersecting with the parterre) and assigns too high probability to the turning action.
8
+
9
+ optimize and prone to overfitting. Others have attempted to model the trajectory distribution implicitly using generative models such as conditional variational autoencoder (CVAE) [6, 19, 28, 44], normalizing flow (NF) [34, 38], or generative adversarial network (GAN) [2, 8, 9, 11, 43].
10
+
11
+ However, most previous works focus on the diversity of the predicted trajectories rather than the more important precision, except for a few works (*e.g.*, [34, 38]). The issue is that if the model is only encouraged to cover all modes of the real distribution, it may assign too much probability mass to unrealistic predictions and cannot accurately reflect the real probability density. One such example is shown in Fig. 1, where a large portion of the diverse trajectories predicted by P2T [6] turn and intersect with obstacles, which are certainly implausible and inconsistent with the common knowledge that moving straight ahead is more likely than turning. In such circumstances, a navigation decision based on the predictions will overreact to less likely futures while underestimating the more likely ones.
12
+
13
+ Specifically, to learn a diverse trajectory distribution, previous works usually minimize the variety loss [6, 9, 13] or the forward cross-entropy [23, 26, 44]. Yet, the variety loss does not penalize bad predictions as long as there exists one prediction close to the ground-truth, and it does not lead to ground-truth distribution but approximately its square root [46]. On the other hand, the forward cross-
14
+
15
+ <sup>\*</sup>Jia Pan is the corresponding author. This project is supported by HK-SAR RGC GRF 11202119, 11207818, T42-717/20-R, HKSAR Technology Commission under the InnoHK initiative, and the National Natural Science Foundation of China (Grant No. 62072110).
16
+
17
+ entropy also fails to adequately penalize the unlikely predictions [34,38] and exhibits noise sensitivity [48]. To overcome the limitations of these losses, our solution is to learn a distribution minimizing the symmetric cross-entropy, *i.e.*, the combination of forward and reverse cross-entropy between the predictive distribution and the ground-truth distribution. Compared with the forward cross-entropy, the reverse cross-entropy can penalize predictions with low likelihood, but it requires the ground-truth distribution as a reference, which unfortunately is not available in many cases. An effective solution is to employ an occupancy grid map (OGM), which divides the social space into grid cells with an occupancy probability in each cell. Thus, the trajectory probability can be approximated as the product of all future position probabilities conditioned on the OGM. In [38], an OGM, parameterized as a cost map, is embedded from spatial scene features by a convolutional neural network (CNN) to assign proper probabilities to different social areas. However, representing all future position distributions with a single OGM is inaccurate, since it neglects the spatial-temporal correspondence of trajectories. Instead, we predict an OGM for each future position with a convolutional long short-term memory (ConvLSTM) [51] network based on our novel deconvolution parameterization of the position probability flow. The resulting dynamic OGMs can help not only the trajectory prediction [23] but also downstream planning tasks [4, 53].
18
+
19
+ When minimizing the symmetric cross-entropy, previous approaches [34, 38] usually make use of the normalizing flow, which transforms a simple Gaussian distribution into the target trajectory distribution through a sequence of auto-regressive mappings. These mappings are required to be invertible, differentiable, and easy for computing Jacobian determinants, which are difficult to be satisfied in practice. In addition, the latent variable sampled from the Gaussian distribution is hard to interpret. To address these issues, we develop an end-to-end interpretable model to backpropagate the symmetric cross-entropy loss. In particular, we construct a CVAE model using a coarse future trajectory plan within neighboring grids as the interpretable latent variable, similar to P2T [6]. However, P2T cannot be trained in an end-to-end manner, because it learns the planning policy using the maximum-entropy inverse reinforcement learning (MaxEnt IRL) [50, 58] by matching feature expectation. Instead, we implement value iteration in IRL by differentiable value iteration network (VIN) [45] and incorporate Gumbel-Softmax [15] into the discrete planning policy sampling. In our VIN-based IRL, planning and trajectory generation policy can be learned simultaneously by maximizing the data likelihood.
20
+
21
+ Even though a large number of possible future trajectories can be sampled from the learned distribution, many downstream applications often demand a small set of representative predictions. This requirement is traditionally accomplished by learning the distribution model with the variety loss [5, 9, 13] or post-processing with heuristic methods like greedy approximation [36] or K-means [6, 7]. Motivated by the insight that clustering methods like K-means can be regarded as paying different attention to different samples, we propose a Transformer-based refinement network, whose attention mechanism can also ensure sampling diversity, to attentively obtain a small set of representative samples from the over-sampled outcomes of our prediction model. The representative properties can be conveniently adjusted through its training loss, *e.g.*, the variety loss for diversity. In experiments, we compare our method with a set of state-of-the-art approaches on the Stanford Drone Dataset [40] and Intersection Drone Dataset [3] and demonstrate the superiority of our method in both trajectory diversity and quality.
22
+
23
+ In summary, the main contributions are as follows.
24
+
25
+ - We propose a VIN-based IRL method, simplifying the learning process while allowing the gradients from trajectory generation to flow back to the planning module.
26
+ - We improve the approximation of the ground-truth distribution with OGMs in learning the trajectory distribution using symmetric cross-entropy;
27
+ - We introduce a Transformer-based refinement network for sampling from trajectory distribution to obtain representative and realistic trajectories;
28
+ - We demonstrate the state-of-the-art performance of our framework on two real-world datasets: Stanford Drone dataset [40] and Intersection Drone dataset [3].
29
+
30
+ # Method
31
+
32
+ Given an observation $\Omega$ including a context and history trajectory $X = \{X_t \in \mathbb{R}^2 \mid t = -t_p + 1, \dots, 0\}$ of a target agent, our objective is to predict the distribution $p(Y|\Omega)$ of its future trajectory $Y = \{Y_t \in \mathbb{R}^2 \mid t = 1, \dots, t_f\}$ . The context consists of neighbors' history trajectories and an image $\mathbf{I}$ , which is a bird's eye view (BEV) perception of the local scene centered at the agent's current position.
33
+
34
+ We assume that an agent has a grid-based plan on which its future trajectory is conditioned. An agent's planning process is modeled using a Markov decision process (MDP) $\mathcal{M} = \{\mathcal{S}, \mathcal{A}, \mathcal{T}, \mathbf{r}\}$, with a time horizon $N$. A state set $\mathcal{S}$ consists of all cells over a 2D grid and an absorbing end state of zero value. An action set $\mathcal{A}$ includes 4 adjacent movements (up, down, left, right) and an end action leading to the absorbing state. A deterministic transition function $\mathcal{T}: \mathcal{S} \times \mathcal{A} \to \mathcal{S}$ describes the system dynamics. A non-stationary reward function $\mathbf{r}^n: \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ determines a reward for each state and action per step $n$. We assume that the agent uses a non-stationary stochastic policy $\pi^n(a|s)$ to determine the probability of selecting an action $a$ at a state $s$ at MDP step $n$, and finally makes a plan in terms of the state sequence $S = \{s^n \in \mathcal{S} \mid n = 1, \dots, N\}$. Note that here we use the superscript $n$ for the MDP step, to distinguish it from the time step $t$ in the subscript.
35
+
36
+ To relieve the difficulty of modeling the multi-modal future trajectory distribution $p(Y|\Omega)$, we introduce the plausible plan as the latent variable and decompose it as:
39
+
40
+ $$p\left(Y|\Omega\right) = \int_{S \in \mathbb{S}(\Omega)} p\left(S|\Omega\right) p\left(Y|S,\Omega\right) dS,$$
41
+
42
+ where $\mathbb{S}(\Omega)$ is the space of plausible plans conditioned on the observation. In this way, since the plan uncertainty captures the multimodality well, the trajectory distribution conditioned on a plan can be well approximated as unimodal.
43
+
44
+ We predict the future trajectory distribution by minimizing the discrepancy between the distribution $q_{\theta}(\hat{Y}|\Omega)$ of the predicted trajectory $\hat{Y}$ and the ground-truth distribution $p(Y|\Omega)$ . As a straightforward distance metric between these two distributions, forward cross-entropy (a.k.a, negative log-likelihood (NLL)) is computed as:
45
+
46
+ $$\mathcal{H}\left(p,q_{\theta}\right) = -\mathop{\mathbb{E}}_{\Omega \sim \Psi, Y \sim p(\cdot \mid \Omega), S \in \mathbb{S}(Y)} \left[\log q_{\theta}(S \mid \Omega) q_{\theta}(Y \mid \Omega, S)\right],$$
47
+
48
+ where $\Psi$ denotes the ground-truth observations' distribution and $\mathbb{S}(Y)$ is the space containing the ground-truth plan S, *i.e.* the grid state sequence the trajectory Y goes through.
49
+
50
+ Although the NLL loss encourages the predicted distribution to cover all plausible modes of the ground-truth distribution, it assigns a low penalty to implausible predictions which are less likely to take place under the ground-truth distribution [34, 38]. The reverse cross-entropy $\mathcal{H}(q_\theta,p)$ can evaluate the likelihood of the prediction under the ground-truth distribution and penalize unlikely predictions, but the ground-truth distribution $p$ is unknown in the real world with only one sample observed. To address this issue, we approximate the continuous joint distribution $p(Y|\Omega)$ of the future trajectory as a product of the future positions' categorical marginal distributions $\mathbf{O} = \{\mathbf{O}_t \mid t=1,\ldots,t_f\}$, represented as OGMs:
51
+
52
+ $$p(Y|\Omega) \approx p(\mathbf{O}|\Omega) \prod_{t=1}^{t_f} \mathbf{O}_t(Y_t),$$
53
+
54
+ where $\mathbf{O}_t(Y_t)$ denotes the agent's location probability at $Y_t$ at time t, which is bilinearly interpolated from nearby probabilities on $\mathbf{O}_t$ and $p(\mathbf{O}|\Omega)$ is assumed to be deterministic and parameterized by neural networks $\mathbf{O} = o_{\alpha}(\Omega)$ . Thus, the reverse cross-entropy $\mathcal{H}(q_{\theta}, p)$ can be approximated as:
55
+
56
+ $$\mathcal{H}(q_{\theta}, \mathbf{O}) = - \underset{\Omega \sim \Psi, \hat{Y} \sim q_{\theta}(\cdot | \Omega)}{\mathbb{E}} \log p(\mathbf{O} | \Omega) \prod_{t=1}^{t_f} \mathbf{O}_t(\hat{Y}_t).$$
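The inner term $\prod_t \mathbf{O}_t(\hat{Y}_t)$ can be sketched in NumPy, with the bilinear interpolation of OGM probabilities at continuous trajectory positions. This is an illustrative sketch (grids and coordinates are made up, and the expectation over sampled trajectories is omitted):

```python
import numpy as np

def bilinear(grid, x, y):
    # Interpolate a 2-D grid at continuous coordinates (x, y) from its
    # four nearest cells, clamping at the grid boundary.
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, grid.shape[0] - 1), min(y0 + 1, grid.shape[1] - 1)
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * grid[x0, y0] + dx * (1 - dy) * grid[x1, y0]
            + (1 - dx) * dy * grid[x0, y1] + dx * dy * grid[x1, y1])

def reverse_ce_term(ogms, traj):
    # -log prod_t O_t(Y_t): penalizes sampled trajectories that pass
    # through low-occupancy cells under the predicted OGMs.
    return -sum(np.log(bilinear(o, x, y) + 1e-12) for o, (x, y) in zip(ogms, traj))
```

A trajectory through high-occupancy cells thus incurs a lower reverse cross-entropy than one through low-occupancy cells.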
57
+
58
+ As shown in Fig. 2, our model is composed of five modules that can be learned in an end-to-end manner: an **Observation Encoder**, a **Policy Network**, an **Occupancy Grid Maps Decoder** (OGMs Decoder), a **Trajectory Decoder** and a **Refinement Network**.
59
+
60
+ The first component of our approach is an observation encoder composed of a motion encoder to extract motion features from the past trajectories of the target and its neighbors and a scene encoder to extract scene features from the BEV image of the surrounding environment.
61
+
62
+ **Motion encoder:** The motion encoder is designed to embed the past trajectories of the target agent and its neighbors into a feature vector and a feature map. To represent the neighbors' state succinctly, we leverage a directional pooling grid from [18], where each cell contains the relative velocity of a neighbor located in that cell with respect to the target agent. At each past time step t, we first flatten the grid into a vector $d_t$ and then concatenate the vector with the agent velocity $X_t - X_{t-1}$ as input to an RNN. The hidden state of the RNN at time t is given by:
63
+
64
+ $$m_t = \text{RNN}_{\text{m}} (m_{t-1}, \phi [d_t, X_t - X_{t-1}]),$$
65
+
66
+ where $\phi$ is a linear embedding layer and the brackets indicate concatenation. The first hidden state $m_{-t_p+1}$ is set to zero and the last hidden state $m_0$ is regarded as the motion feature. The $m_0$ is duplicated over all cells in the scene and then is concatenated with each cell's agent-centered, world-aligned coordinate to construct a motion feature map M:
67
+
68
+ $$\mathbf{M}(x,y) = [m_0, x, y].$$
69
+
70
+ **Scene encoder:** We apply a CNN to extract a scene feature map from the BEV image **I** of the neighborhood:
71
+
72
+ $$\mathbf{F} = \mathrm{CNN}_{\mathrm{f}}(\mathbf{I}),$$
73
+
74
+ where the spatial dimensions of the scene feature map $\mathbf{F}$ are the same as that of the MDP grid for simplicity.
75
+
76
+ We generate a policy in two steps end-to-end: mapping the observation features into rewards and then computing a policy with a value iteration network.
77
+
78
+ We adopt non-stationary rewards to capture the dynamic agent-to-scene and agent-to-agent interaction. Based on the scene and motion feature maps, a ConvLSTM architecture is applied to yield the reward map at each step. The ConvLSTM hidden map and the reward map at MDP step $n$ are:
79
+
80
+ $$\mathbf{H}^n = \text{ConvLSTM}_r(\mathbf{H}^{n-1}, \mathbf{F}), \quad \mathbf{r}^n = \Phi(\mathbf{H}^n),$$
81
+
82
+ where $\Phi$ is a fully connected convolutional layer. The initial hidden map $\mathbf{H}^0$ is the embedded motion feature map $\Phi(\mathbf{M})$ .
83
+
84
+ Based on the reward maps, we use the approximate value iteration to generate a policy map $\pi^n$ at each step n. To back-propagate the loss through the value iteration, we take advantage of the value iteration network as [35, 37, 45],
85
+
86
+ ![](_page_4_Figure_0.jpeg)
87
+
88
+ Figure 2. Overview of our approach.
89
+
90
+ ```
91
+ Input: \mathbf{r}^{n}(s, a)
92
+ Output: \boldsymbol{\pi}^{n}(a|s)
93
+ 1: \mathbf{V}^{N}(s) = 0, \forall s \in \mathcal{S};
94
+ 2: for n = N, \dots, 2, 1 do
95
+ 3: \mathbf{Q}^{n}(s, a) = \mathbf{r}^{n}(s, a) + \mathbf{V}^{n}_{s' = \mathcal{T}(s, a)}(s'), \ \forall s \in \mathcal{S}, \ \forall a \in \mathcal{A};
96
+ 4: \mathbf{V}^{n-1}(s) = \operatorname{logsumexp}_{a} \mathbf{Q}^{n}(s, a), \forall s \in \mathcal{S};
97
+ 5: \boldsymbol{\pi}^{n}(a|s) = \operatorname{softmax}_{a} \mathbf{Q}^{n}(s, a), \forall s \in \mathcal{S};
98
+ 6: end for
99
+ ```
100
+
101
+ which recursively computes the next value map by a convolution of the current value map with transition filters. To improve the value iteration network's performance, we utilize an approximate value iteration in the MaxEnt IRL formulation [50,58] with non-stationary rewards. Algo. 1 describes the overall computation process of this network.
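Algo. 1 can be sketched in NumPy for a deterministic tabular MDP. This is a simplified, non-differentiable sketch of the soft (MaxEnt) value iteration; the VIN's convolutional transition filters are replaced by an explicit next-state index table:

```python
import numpy as np

def soft_value_iteration(rewards, transitions, N):
    # rewards: array (N, S, A) of non-stationary rewards r^n(s, a).
    # transitions: int array (S, A), next-state index for each (s, a).
    # Returns the policy pi^n(a|s) for n = 1..N, as in Algo. 1:
    # V^N = 0; Q^n = r^n + V^n(T(s,a)); V^{n-1} = logsumexp_a Q^n; pi^n = softmax_a Q^n.
    S, A = transitions.shape
    V = np.zeros(S)
    policies = []
    for n in range(N - 1, -1, -1):
        Q = rewards[n] + V[transitions]                      # (S, A) backup
        Qmax = Q.max(axis=1, keepdims=True)
        V = (np.log(np.sum(np.exp(Q - Qmax), axis=1)) + Qmax[:, 0])  # stable logsumexp
        policies.append(np.exp(Q - V[:, None]))              # softmax over actions
    return policies[::-1]
```

Since the softmax normalizer is exactly $\exp(V^{n-1})$, each policy row is a valid distribution over actions.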
102
+
103
+ To provide an explicit approximation of the ground-truth trajectory distribution, we predict a sequence of dynamic OGMs based on the observation features using a ConvL-STM network. With the scene feature map as input, the hidden map of the ConvLSTM network at time t is:
104
+
105
+ $$\mathbf{H}_{t} = \text{ConvLSTM}_{o}\left(\mathbf{H}_{t-1}, \mathbf{F}\right).$$
106
+
107
+ The hidden map is initialized with the embedding of the motion feature map $\mathbf{H}_0 = \Phi(\mathbf{M})$ .
108
+
109
+ Then, instead of directly outputting an OGM from each hidden map, we derive a pixel-adaptive normalized deconvolution filter whose weights are spatially varying, nonnegative and sum to one. The deconvolution is subsequently applied to the last OGM to obtain the next one:
110
+
111
+ $$\mathbf{O}_t = \text{Deconv}\left(\mathbf{O}_{t-1}, \text{softmax}(\Phi(\mathbf{H}_t))\right),$$
112
+
113
+ where the initial OGM $O_0$ is a probability matrix to be learned. Our deconvolution method can directly model the
114
+
115
+ probability density transition process. Besides, the limited size of the normalized deconvolution kernel ensures that the probability mass diffuses into nearby grid cells in a conservative manner, reflecting the prior knowledge that agents do not suddenly disappear or jump between distant locations.
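The normalized deconvolution step can be sketched as a per-cell scatter of probability mass through softmax-normalized 3×3 filters. This is an illustrative NumPy version of the idea (the paper's version is a learned, batched deconvolution); mass is conserved up to clipping at the grid boundary:

```python
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def ogm_deconv_step(ogm, kernel_logits):
    # kernel_logits: (H, W, 3, 3) per-cell filter logits. Softmax makes each
    # filter non-negative with weights summing to one, so each cell's mass
    # can only diffuse into its 3x3 neighbourhood.
    H, W = ogm.shape
    kernels = softmax(kernel_logits.reshape(H, W, 9), axis=-1).reshape(H, W, 3, 3)
    out = np.zeros((H + 2, W + 2))
    for i in range(H):
        for j in range(W):
            out[i:i + 3, j:j + 3] += ogm[i, j] * kernels[i, j]
    return out[1:-1, 1:-1]  # crop padding; mass leaving the grid is clipped
```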
116
+
117
+ Conditioned on a plan from the policy roll-out or the data, an RNN decoder is applied to generate the future position distribution recursively based on local features.
118
+
119
+ **Plan sampling:** We generate a plan $\hat{S} = \{\hat{s}^n \in \mathbb{R}^2 \mid n = 1, \dots, N\}$ by sampling the non-stationary policy outputted by the policy network. However, directly sampling the policy with discrete state and action spaces will introduce difficulty in loss back-propagation. To overcome this difficulty, we sample the policy with the Gumbel-Softmax trick [15], resulting in continuous action and state. Besides, we obtain the policy at continuous state $\hat{s}^n$ by bilinear interpolation.
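The Gumbel-Softmax relaxation used for differentiable action sampling can be sketched as follows (a standard formulation, not the paper's exact code; `tau` is the temperature controlling how close the relaxed sample is to one-hot):

```python
import numpy as np

def gumbel_softmax(logits, tau, rng):
    # Draw a relaxed (differentiable) one-hot sample from the categorical
    # distribution defined by `logits`: add Gumbel noise, then apply a
    # temperature-scaled softmax.
    g = -np.log(-np.log(rng.uniform(size=logits.shape) + 1e-12) + 1e-12)
    y = (logits + g) / tau
    e = np.exp(y - y.max())
    return e / e.sum()
```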
120
+
121
+ **Plan encoder:** Given a ground-truth plan S (or a sampled plan $\hat{S}$ ), we first collect the local scene feature from scene feature map $\mathbf{F}$ and non-stationary feature from the corresponding hidden map of $\mathrm{ConvLSTM_r}$ at each plan state. Then we concatenate these features with the state's coordinates as input to an RNN, whose hidden state at step n is:
122
+
123
+ $$h^n = \mathrm{RNN_s} \left( h^{n-1}, \phi \left[ s^n, \mathbf{F}(s^n), \mathbf{H}^n(s^n) \right] \right).$$
124
+
125
+ Since the sampled plan's state $\hat{s}^n$ is on the continuous plane, the local features like $\mathbf{F}(s^n)$ are gathered by bilinear interpolation at the spatial dimensions of the feature map $\mathbf{F}$ corresponding to the physical position $s^n$ . Fig. 3 illustrates how the plan encoder extracts the plan features $h^{1:N} = \{h^n \mid n=1,\ldots,N\}$ .
126
+
127
+ **Multi-head attention based decoder:** Since different dimensions of the plan features at different steps may have different impacts on the current hidden state [30], we utilize a multi-head scaled dot-product attention module [47] to aggregate the plan information:
128
+
129
+ ![](_page_5_Picture_0.jpeg)
130
+
131
+ Figure 3. The local scene and non-stationary features at each plan state are concatenated with its location coordinates and then fed into an RNN to obtain all plan features.
132
+
133
+
134
+
135
+ $$\begin{split} & \text{MultiHead}(Q, K, V) = \left[ \text{head}_1; \dots; \text{head}_H \right] W^O, \quad \text{head}_i = \text{Att}(QW_i^Q, KW_i^K, VW_i^V), \\ & \text{where } \text{Att}(Q_i, K_i, V_i) = \text{softmax}\Big(\frac{Q_iK_i^T}{\sqrt{d_k}}\Big)V_i, \end{split}$$
136
+
137
+ where $d_k$ is the dimension of each head. At each future time $t$, we linearly project the trajectory decoder's previous hidden state $h_{t-1}$ into the query $Q_i$, and the plan features into the key $K_i$ and value $V_i$, through the linear layers $W_i^Q$ , $W_i^K$ and $W_i^V$ . The attention module output $a_t$ is then concatenated with the coordinates of the previous position $Y_{t-1}$ and the features bilinearly interpolated at $Y_{t-1}$ from the scene feature map and the corresponding OGM hidden map, and the result is fed into an RNN decoder:
138
+
139
+ $$a_t = \text{MultiHead}(h_{t-1}, h^{1:N}, h^{1:N}),$$
140
+
141
+ $h_t = \text{RNN}_t (h_{t-1}, \phi [a_t, Y_{t-1}, \mathbf{F}(Y_{t-1}), \mathbf{H}_t(Y_{t-1})]),$
142
+
143
+ where the initial hidden state $h_0$ is the embedded motion feature $\phi(m_0)$. The hidden state $h_t$ is then used to predict the distribution of the position $\hat{Y}_t$, which is assumed to be a bivariate Gaussian distribution parameterized by the mean $\mu_t + Y_{t-1}$, standard deviation $\sigma_t$, and correlation $\rho_t$:
144
+
145
+ $$[\mu_t, \sigma_t, \rho_t] = h_t W^P, \quad \hat{Y}_t \sim \mathcal{N}(\mu_t + Y_{t-1}, \sigma_t, \rho_t).$$
146
+
147
+ When generating predictions $\hat{Y}$, the ground-truth position $Y_{t-1}$ above is replaced by the position $\hat{Y}_{t-1}$ sampled from the predicted distribution with the reparameterization trick [17] to ensure differentiability.
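The reparameterized draw from the bivariate Gaussian can be sketched as follows, assuming the covariance is assembled from the predicted standard deviations and correlation via its Cholesky factor (a minimal numpy sketch, not the authors' implementation):

```python
import numpy as np

def sample_bivariate(mu, sigma, rho, eps):
    """Reparameterized draw from a bivariate Gaussian.

    The covariance is built from (sigma_x, sigma_y, rho); eps ~ N(0, I)
    is externally supplied noise, so the sample is a deterministic,
    differentiable function of (mu, sigma, rho).
    """
    sx, sy = sigma
    # Cholesky factor of [[sx^2, rho*sx*sy], [rho*sx*sy, sy^2]].
    L = np.array([[sx, 0.0],
                  [rho * sy, sy * np.sqrt(1.0 - rho ** 2)]])
    return np.asarray(mu) + L @ np.asarray(eps)
```

Passing `eps = np.zeros(2)` recovers the mean, which is a quick sanity check that the noise, not the parameters, carries the randomness.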
148
+
149
+ We design a refinement network to summarize a trajectory distribution with several representative trajectories. The network is an encoder-decoder framework based on the Transformer [47], but without positional embeddings or auto-regressive decoding, because the multi-head attention module in the Transformer captures the relations between unordered samples well, which ensures diversity. We first over-sample a large number of trajectory samples $\{\hat{Y}^{(1)}, \hat{Y}^{(2)}, \ldots, \hat{Y}^{(C)}\}$ to cover the trajectory distribution, e.g., $C=200$. Then, all trajectory samples are flattened into vectors and embedded as input to the
150
+
151
+ Transformer encoder without the positional embedding. To save inference time, we utilize a generative-style decoder as in [57], but the inputs to our decoder are the sums of the embedded motion features and K different parameter vectors instead of fixed tokens. Finally, we embed the decoder output to obtain a few representative trajectories $\{\tilde{Y}^{(1)}, \tilde{Y}^{(2)}, \dots, \tilde{Y}^{(K)}\}$, e.g., $K=20$.
152
+
153
+ To achieve the different goals at different stages, namely accurate OGMs, a good trajectory distribution, and a compact representative set, our training process consists of the following four steps:
154
+
155
+ 1. **OGMs learning:** The observation encoder and OGMs decoder are trained to predict OGMs by minimizing the NLL loss:
156
+
157
+ $$\mathcal{H}(p, \mathbf{O}) = - \underset{\Omega \sim \Psi, Y \sim p(\cdot | \Omega), \mathbf{O} = o_{\alpha}(\Omega)}{\mathbb{E}} \log \prod_{t=1}^{t_f} \mathbf{O}_t(Y_t).$$
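The OGM NLL above can be illustrated numerically: given per-step occupancy grids and the ground-truth cells visited, the loss is the negative log of the occupancy probability at each visited cell (a minimal numpy sketch; names and the grid discretization are illustrative):

```python
import numpy as np

def ogm_nll(ogms, traj):
    """Negative log-likelihood of a trajectory under predicted OGMs.

    ogms: (T, H, W) per-step occupancy probabilities (each grid sums to 1).
    traj: (T, 2) ground-truth (row, col) cells of the agent.
    """
    probs = np.array([ogms[t, r, c] for t, (r, c) in enumerate(traj)])
    return float(-np.sum(np.log(probs + 1e-12)))
```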
158
+
159
+ 2. **Trajectory distribution learning:** Based on the learned observation encoder and OGMs decoder, we train the policy network and trajectory decoder to induce a trajectory distribution that minimizes the approximated symmetric cross-entropy loss:
160
+
161
+ $$\mathcal{L}_{\text{sce}} = \mathcal{H}(p, q_{\theta}) + \beta \mathcal{H}(q_{\theta}, \mathbf{O}).$$
162
+
163
+ 3. **Representative trajectories learning:** Using the trajectories sampled from the learned distribution, we train the refinement network to generate representative trajectories with the variety (MoN) loss [9]:
164
+
165
+ $$\mathcal{L}_{\text{variety}} = \min_{k \in \{1, \dots, K\}} \|Y - \tilde{Y}^{(k)}\|_2.$$
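A minimal sketch of the variety (MoN) loss, assuming the trajectory distance is the per-step L2 norm summed over time (the exact norm the authors use may differ):

```python
import numpy as np

def variety_loss(gt, candidates):
    """Min-over-N ("best of K") loss: only the representative trajectory
    closest to the ground truth is penalized, which leaves the remaining
    candidates free to cover other modes.

    gt: (T, 2) ground-truth trajectory; candidates: (K, T, 2).
    """
    dists = np.linalg.norm(candidates - gt[None], axis=-1).sum(axis=-1)
    return float(dists.min())
```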
166
+
167
+ 4. **End-to-end fine-tuning:** We fine-tune the whole network in an end-to-end manner with the variety loss.
168
+
169
+ Only the first two steps are required for learning a trajectory distribution, while all four steps are needed to obtain a compact set of representative trajectories.
2207.04174/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2207.04174/paper_text/intro_method.md ADDED
@@ -0,0 +1,120 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Vision-language models combine deep learning techniques from computer vision and natural language processing to assimilate visual and textual understanding. Such models demonstrate visual and linguistic knowledge by performing tasks such as vision question answering (VQA) and image captioning. There are many applications of these tasks, including aiding the visually impaired by providing scene information and screen reading [\(Morris et al.,](#page-9-0) [2018\)](#page-9-0).
4
+
5
+ To perform a vision-language task, a model needs to understand visual context and natural language, and operate in a shared embedding space
6
+
7
+ <span id="page-0-0"></span>![](_page_0_Figure_11.jpeg)
8
+
9
+ Figure 1: Our captioning model accepts tokens from several upstream classifiers, learns representations for tokens from different classifiers, and uses each token appropriately. By using the facial recognition token 'Bernie Sanders', our model's caption is more informative than previous work which just uses OCR.<sup>1</sup>
10
+
11
+ between the two. Approaches in the literature have improved performance by pre-training models for both visual context and language understanding [\(Chen et al.,](#page-8-0) [2020;](#page-8-0) [Lu et al.,](#page-9-1) [2019;](#page-9-1) [Su et al.,](#page-9-2) [2019;](#page-9-2) [Li et al.,](#page-9-3) [2020;](#page-9-3) [Tan and Bansal,](#page-9-4) [2019\)](#page-9-4). These models have yielded accurate and semantically appropriate VQAs or captions. However, the text generated by these models is general and overlooks content that would allow for richer text generation with improved contextualization. For example, these models ignore clearly visible text or the presence of well-known individuals.
12
+
13
+ To improve specificity in generated text, recent work has used optical character recognition (OCR)
14
+
15
+ <sup>1</sup>The previous model in Figure [1](#page-0-0) is M4C Captioner [\(Sidorov et al.,](#page-9-5) [2020\)](#page-9-5) with weights from the M4C repository.
16
+
17
+ to incorporate text that appears in images [\(Zhu](#page-10-0) [et al.,](#page-10-0) [2021;](#page-10-0) [Gao et al.,](#page-8-1) [2020b;](#page-8-1) [Mafla et al.,](#page-9-6) [2021;](#page-9-6) [Hu et al.,](#page-8-2) [2020;](#page-8-2) [Kant et al.,](#page-8-3) [2020;](#page-8-3) [Wang et al.,](#page-9-7) [2021;](#page-9-7) [Han et al.,](#page-8-4) [2020;](#page-8-4) [Liu et al.,](#page-9-8) [2020;](#page-9-8) [Yang et al.,](#page-9-9) [2021\)](#page-9-9). In many cases, this significantly enhances the usefulness of the generated text [\(Hu et al.,](#page-8-2) [2020\)](#page-8-2). Such frameworks include OCR as an additional input modality. This results in three modalities for VQA (image, question, and OCR) and two modalities for image captioning (image and OCR).
18
+
19
+ While using OCR allows enhancement of some generated text, specific information that exists in human-level description may also come from additional sources. Without proper nouns or other specific vocabulary, the generated text is at risk of being awkwardly general, demonstrating a lack of shared knowledge that is expected in society. For example, in Figure [1,](#page-0-0) arguably the most relevant content in the image is the presence of a well-known political figure. Consequently, a reasonable description of the image should include the name of the well-known figure, which is 'Bernie Sanders' in this case, instead of the generic "a man". This is notably absent in the caption from the previous model.
20
+
21
+ In this work, we propose the *special token approach*, a novel method for integrating tokens from several upstream vision classifiers into image captions.[2](#page-1-0) We generalize the OCR input modality to accept additional helpful outputs from any number of auxiliary classifiers (Section [3.2\)](#page-4-0). We use a rich feature representation for upstream tokens that allows the captioning model to learn to differentiate tokens from different classifiers (Section [3.3\)](#page-4-1).
22
+
23
+ This method potentially allows a model to leverage easily available sophisticated libraries to recognize faces, scene-text, cityscapes, animal species, etc. We refer to all tokens from upstream sources, including OCR tokens, as special tokens. In this work, we focus on using person names and scene-text as example special tokens.
24
+
25
+ To facilitate using person names in image captions, we create a novel image-caption dataset, Politicians and Athletes in Captions (PAC), which includes person names in captions in addition to relevant scene-text found on signs, labels, or other entities in the image. PAC has 1,572 images and three captions per image. A discussion on the dataset is
26
+
27
+ provided in Section [4.](#page-5-0)
28
+
29
+ By training on PAC in addition to other image-caption datasets, we create a model that can naturally integrate person names into captions. The same model still performs well on previous image captioning benchmarks. Evaluation of the methods is available in Section [5.](#page-6-0)
30
+
31
+ In summary, this paper makes three primary contributions. The special tokens framework is proposed as a method to incorporate tokens from several external sources into generated text. The PAC image-captioning dataset is collected and baseline results are presented. Lastly, this paper demonstrates the first model in the literature that integrates both facial recognition and OCR into image captioning.
32
+
33
+ # Method
34
+
35
+ We use the term *special token* as a placeholder for extracted relevant information that is identified in an image by upstream sources. Tokens from upstream classifiers are *special* in that they often are named entities, offering unique descriptors for generic objects. For example in Figure [1,](#page-0-0) 'Bernie Sanders' is not a new object, but rather a special descriptor for an already recognized generic object (i.e. man). Likewise, 'this week' is not a generic temporal entity. Instead, it can be used to give more detail about a generic object: a screen that says 'this week', referring to a TV show or event called 'this week'.
36
+
37
+ We call our corresponding method for integrating special tokens into image captions the *special token approach*.
38
+
39
+ <span id="page-3-0"></span>![](_page_3_Figure_0.jpeg)
40
+
41
+ Figure 3: The representation of a special token where N is the number of tokens and d is the dimensionality. We adopt the representation from Hu et al. and add the projected one-hot encoding classifier type feature (highlighted in green box). We are the first to use this representation for facial recognition tokens in addition to OCR tokens. See Equation 2 for more detail.
42
+
43
+ In our approach, there are two modalities that hold information about an image. The first modality corresponds to generic visual features (yellow box in Figure [2\)](#page-2-0), which are responsible for informing the model of general context (all vision-language models have a visual modality). The second modality, special tokens (red box in Figure [2\)](#page-2-0), is responsible for informing the model of specific terms that are relevant to the image. The embeddings for the first modality are calculated from visual features from an object detector. The embeddings for the special token modality are calculated from visual feature vectors (Faster-RCNN and a bounding box), textual features (fasttext [\(Bojanowski et al.,](#page-8-12) [2017\)](#page-8-12) and pyramidal histogram of characters (PHOC) [\(Almazán et al.,](#page-8-13) [2014\)](#page-8-13)), and a source feature (one-hot encoding), as shown in Figure [3.](#page-3-0) Additionally, special tokens are made available for direct copy into generated text, which allows for zero-shot inclusion of words not seen before. This structure has been successful on OCR vision-language datasets.
44
+
45
+ The key hypothesis of this paper is that a model can learn to differentiate tokens from separate upstream classifiers. Subsequently, the model can learn to use each token type appropriately in generated text. For example, a caption for the image in Figure [1](#page-0-0) should neither say "A screen that says Bernie Sanders" nor should it say " 'this week' standing in front of a screen."
46
+
47
+ As mentioned in Section [1,](#page-0-1) this work demonstrates using two types of special tokens, OCR tokens and facial recognition tokens. We focus our experimentation on learning to integrate facial recognition tokens by training on the PAC dataset. However, any set of words that can be identified by some classification or recognition module can conceivably be a set of special tokens. We leave integration of more upstream vision classifiers for
48
+
49
+ future work.
50
+
51
+ The goal of the special token approach is to integrate vocabulary tokens from external sources into generated text. The approach is based on the following observations.
52
+
53
+ - 1) Different machine learning architectures have been designed to perform well on different tasks. For example, tasks such as OCR detection and facial recognition benefit from specialized methods that differ from traditional object detection. OCR recognizes and combines characters rather than directly classifying entire words or sentences. In facial recognition, a regression model is trained to output face embeddings, which are subsequently compared to embeddings of known individuals. Even in standard classification tasks, significant research is put into fine-tuning architectures to get state-of-the-art results on dataset benchmarks. Such work can be leveraged by a captioning model by using these classifiers as upstream sources.
54
+ - 2) The space of all possible vocabulary tokens, when named entities or proper nouns are included, is intractably large. By appending special tokens to the vocabulary at inference time, the captioning model's vocabulary is prevented from increasing vastly.
55
+ - 3) Using non-generic terms does not always increase the syntactic or semantic complexity of the caption. For example in Figure [1,](#page-0-0) the name 'Bernie Sanders' is a substitution for what can also be a generic term such as 'man'. If a captioning model can generate a caption such as 'A person standing in front of a screen', the same contextual understanding should be able to generate the caption 'Bernie Sanders standing in front of a screen.' The model just needs to know to *use* the named entity 'Bernie Sanders'. The special token approach takes advantage of this by allowing the model to
56
+
57
+ learn representations for *types* of special tokens. In Section 5.3 we show that our model learns to represent different token types in different sections of the embedding space. The model can then implicitly associate sections of the embedding space with related generic objects.
58
+
59
+ 4) The desired vocabulary may not be constant. For example, after an election cycle, new politicians become commonplace and a captioning model may need to adapt accordingly. The special token approach is highly practical in this sense. The captioning model does not need re-training, only the upstream facial recognition model needs to be updated.
60
+
61
+ We utilize the Multimodal Multi-Copy Mesh (M4C) model introduced by Hu et al. to copy special tokens into generated text (Hu et al., 2020). We are the first to utilize this method for tokens other than OCR. Here, we formalize the differences between our captioning model and the M4C captioning model. Figure 2 provides a corresponding architecture diagram.
62
+
63
+ The input modalities into the M4C captioning model are object features $\{x_1^{obj},...,x_M^{obj}\}$ for M objects and OCR tokens $\{x_1^{ocr}, ..., x_N^{ocr}\}$ for N OCR tokens. We generalize OCR tokens to special tokens st such that the inputs are $\{x_1^{obj},...,x_M^{obj}\}$ and $\{x_1^{st},...,x_N^{st}\}$ for N tokens in total. M4C captioner predicts fixed vocab scores $\{y_{1,t}^{voc},...,y_{K,t}^{voc}\}$ where K is a fixed vocabulary size and t is the decoding step, and OCR vocabulary scores $\{y_{1,t}^{ocr}, ..., y_{N,t}^{ocr}\}$ where N is the number of OCR tokens. The selected word at each time step $w_t = argmax(y_t^{all})$ where $y_t^{all} = \{y_t^{voc} \cup y_t^{ocr}\}$ . We substitute $y_t^{st} =$ $\{y_{1,t}^{st},...,y_{N,t}^{st}\}$ , where N is the number of special tokens, for $y_t^{ocr}$ such that $y_t^{all} = \{y_t^{voc} \cup y_t^{st}\}$ . Special token vocabulary scores $y_{1...N,t}^{st}$ are calculated by combining linear transformations of the decoded output $z_t^{dec}$ and the decoded special token representations $z_n^{st}$ as shown below:
64
+
65
+ $$y_{n,t}^{st} = (W^{st}z_n^{st} + b^{st})^T(W^{dec}z_t^{dec} + b^{dec}). \quad (1)$$
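Equation 1 scores each special token against the current decoder state, and the emitted word is the argmax over the union of fixed-vocabulary and special-token scores. A small numpy sketch with random, purely illustrative weights and dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, N = 8, 5, 3                       # hidden dim, fixed vocab, special tokens

z_dec = rng.standard_normal(d)          # decoder output z_t^dec at step t
z_st = rng.standard_normal((N, d))      # decoded special-token representations

W_st, b_st = rng.standard_normal((d, d)), np.zeros(d)
W_dec, b_dec = rng.standard_normal((d, d)), np.zeros(d)
W_voc = rng.standard_normal((K, d))     # fixed-vocabulary classifier head

# Eq. (1): bilinear interaction between each token and the decoder state.
y_st = (z_st @ W_st.T + b_st) @ (W_dec @ z_dec + b_dec)
y_voc = W_voc @ z_dec

# w_t = argmax over the union of both score vectors; indices >= K select
# a special token, enabling zero-shot copying of words never seen in training.
y_all = np.concatenate([y_voc, y_st])
w_t = int(np.argmax(y_all))
```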
66
+
67
+ Several types of information may be important for determining if and how a special token should be used in generated text. This may include information about where a special token is located in an image, what the token looks like, or how the token was generated. For example, a known person in the
68
+
69
+ center of an image is more likely to be relevant than a small segment of text found on a sign in the background of an image. Several features are used to richly encode each special token. Hu et al. use visual, spatial, and textual features to calculate OCR token embeddings (Hu et al., 2020). We adopt this representation for all special tokens and add an additional source feature to differentiate the upstream classifiers used for identifying special tokens. A formal description of the special token embedding calculation is given below, and a visual representation is provided in Figure 3.
70
+
71
+ Special tokens are represented by a feature vector $x_i^{st}$, where $i = 1 \ldots N$. $x_i^{st}$ incorporates visual features, textual features, and a source feature. The visual features include a bounding box $x_i^b$ and a feature vector from an object detector $x_i^{fr}$. Following previous work, we use a pretrained Faster-RCNN with a ResNet backbone to generate $x_i^{fr}$ from the RoI created by the bounding box of the token. The textual features are a fasttext (Bojanowski et al., 2017) encoding $x_i^{ft}$ and a pyramidal histogram of characters (PHOC) (Almazán et al., 2014) encoding $x_i^p$. The source feature $x_i^s$ is a one-hot encoding distinguishing the upstream classifiers used for generating special tokens. $x_i^{fr}$, $x_i^{ft}$, and $x_i^p$ are concatenated together and projected onto a tuned encoding dimensionality d by a learned linear transformation $W_1$. Additionally, $x_i^b$ and $x_i^s$ are projected onto d by learned linear transformations $W_2$ and $W_3$. These transformations are trained at the same time as the captioning model. Layer normalization $LN$ is applied to the three d-dimensional vectors. $x_{i}^{spec}$ is the result of element-wise addition of these three vectors after layer normalization, as shown below:
72
+
73
+ $$x_i^{spec} = LN(W_1([x_i^{fr}; x_i^{ft}; x_i^p])) + LN(W_2 x_i^b) + LN(W_3 x_i^s).$$
74
+ (2)
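Equation 2 can be sketched as below; the feature dimensionalities (2048-d Faster R-CNN feature, 300-d fasttext, 604-d PHOC) are typical choices in this line of work but are assumptions here, and the weights are random stand-ins for the learned projections:

```python
import numpy as np

def layer_norm(v, eps=1e-5):
    # Normalize a vector to zero mean and unit variance.
    return (v - v.mean()) / np.sqrt(v.var() + eps)

rng = np.random.default_rng(1)
d = 16                                   # common embedding dimensionality

x_fr = rng.standard_normal(2048)         # Faster R-CNN RoI feature
x_ft = rng.standard_normal(300)          # fasttext word embedding
x_p = rng.standard_normal(604)           # PHOC encoding
x_b = rng.standard_normal(4)             # bounding-box feature
x_s = np.array([1.0, 0.0])               # one-hot source: OCR vs. face recognition

W1 = rng.standard_normal((d, 2048 + 300 + 604))
W2 = rng.standard_normal((d, 4))
W3 = rng.standard_normal((d, 2))

# Eq. (2): three projections, each layer-normalized, summed element-wise.
x_spec = (layer_norm(W1 @ np.concatenate([x_fr, x_ft, x_p]))
          + layer_norm(W2 @ x_b)
          + layer_norm(W3 @ x_s))
```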
75
+
76
+ We train with a decoding binary cross-entropy loss $\mathcal{L}_{dbce}$ such that the model is supervised at each decoding step $t$ with the binary cross-entropy $\mathcal{L}_{bce}$:
77
+
78
+ $$\mathcal{L}_{dbce} = \sum_{t=1}^{T_{end}} \frac{\mathcal{L}_{bce}(t)}{T_{end}}$$
79
+ (3)
80
+
81
+ where $T_{end}$ is the number of decoding steps before $\langle end \rangle$ is predicted from the vocabulary. A maximum number of decoding steps $T_{max}$ is set such that $T_{end} \leq T_{max}$.
82
+
83
+ <span id="page-5-1"></span>![](_page_5_Figure_0.jpeg)
84
+
85
+ Figure 4: Samples from the Politicians and Athletes in Captions dataset
86
+
87
+
88
+
89
+ At each decoding step, sigmoid activation and binary cross entropy are applied uniformly across the fixed model vocabulary of size K and the vector of special tokens of size N such that
90
+
91
+ $$\mathcal{L}_{bce} = -\left[ g_n \log(\sigma(y_n)) + (1 - g_n) \log(1 - \sigma(y_n)) \right]$$
92
+ (4)
93
+
94
+ where $n = 1 \ldots K{+}N$, $y_n$ is the predicted value, and $g_n$ is the expected value.
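A minimal numpy version of this per-step loss (written with the conventional leading minus sign of binary cross-entropy, and averaging over the $K{+}N$ entries):

```python
import numpy as np

def decoding_bce(y, g):
    """Binary cross-entropy over the concatenated score vector at one step.

    y: (K + N,) raw scores; g: (K + N,) multi-hot targets. Sigmoid is
    applied per entry, so several targets can be marked correct at once
    (e.g. a word present in both the fixed and special-token vocabularies).
    """
    p = 1.0 / (1.0 + np.exp(-y))
    return float(-np.mean(g * np.log(p + 1e-12)
                          + (1 - g) * np.log(1 - p + 1e-12)))
```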
95
+
96
+ With this paper we create the Politicians and Athletes in Captions (PAC) dataset. PAC is an image-caption dataset consisting of images of well-known individuals in context. PAC includes 1,572 images and three captions per image. Samples from PAC can be seen in Figure [4](#page-5-1) and additional samples can be found in the supplementary materials.
97
+
98
+ We create PAC with the goal of studying the use of non-generic vocabulary in image captioning. The non-generic terms emphasized in PAC are person names and OCR tokens. The PAC dataset offers several technical challenges: 1) correctly identifying people in a variety of settings, 2) reasoning about the effect of the *presence* of the individual (if a known person is in a scene, the description of the scene is often based on that person), and 3) naturally integrating a name into a generated caption.
99
+
100
+ Images were collected from the Creative Commons image database and are made available under the CC licence. To find individuals for the dataset, we searched for 'famous athletes' and 'famous politicians' and selected 62 individuals. The selected well-known individuals are of various races
101
+
102
+ and sexes and are from many parts of the world. For image collection, we searched for each of the 62 well-known individuals and selected images by manually filtering out duplicates and images without visible faces.
103
+
104
+ Annotators were instructed to provide a caption of the image including the name of the individual which was searched for when collecting the image. Other famous individuals who happened to appear in the image may also be mentioned in the captions. Additionally, annotators were instructed to use scene-text if it improved the quality of the caption. These annotation instructions differ from those for caption collection of previous datasets. For example, in the collection of MS-COCO captions, annotators were instructed to *not* use proper nouns [\(Chen et al.,](#page-8-14) [2015\)](#page-8-14) and annotators for TextCaps were instructed to always use text in the scene [\(Sidorov et al.,](#page-9-5) [2020\)](#page-9-5). 658 images were captioned by college students and 914 were captioned by Amazon Mechanical Turk. Captions were scanned for grammar and spelling errors.
105
+
106
+ PAC includes 1,572 images with 3 captions each. All images include at least one famous politician or athlete, and several images contain more than one. The dataset covers 62 different individuals, for an average of 25.2 images per person. 23 of the individuals are politicians while 39 are athletes.
107
+
108
+ Each caption includes the name of at least one person name in the image. In 66.1% of images, there is scene text that is recognized by Google Cloud OCR (not all photos have scene text). For 35.9% of images, at least one of the captions uses scene text (as recognized by Google Cloud OCR). In comparison, 96.9% of TextCaps images have scene text and 81.3% of captions use scene text. In the PAC dataset, 96.3% of the images contain
109
+
110
+ <span id="page-6-1"></span>![](_page_6_Picture_0.jpeg)
111
+
112
+ ![](_page_6_Picture_1.jpeg)
113
+
114
+ ![](_page_6_Picture_2.jpeg)
115
+
116
+ ![](_page_6_Picture_3.jpeg)
117
+
118
+ Figure 5: Captions generated for PAC test set images. Red words indicate tokens from the face recognition module and blue words indicate tokens from the OCR module. Corresponding metrics found in Table [1.](#page-7-1)
119
+
120
+ a face region of interest (RoI) that is detected by the RFB Net [\(Liu et al.,](#page-9-14) [2018\)](#page-9-14), the face detector we use throughout this work [\(Sidorov et al.,](#page-9-5) [2020\)](#page-9-5).
2207.10883/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="Electron" modified="2021-11-12T01:17:50.108Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/14.6.13 Chrome/89.0.4389.128 Electron/12.0.7 Safari/537.36" version="14.6.13" etag="3F8pWTFCt95D5MXP85aB" type="device"><diagram id="ZEjlUc8P7Q3sEmhrvP9v">7V3dkqI4GH0aq3YvtAgQIJfT3TO7VTsztbWzVbtzybS0soNiIT2t8/QbhCj5URATCGBfdEnECDn5DicnX+LEelztfkv8zfJTPA+iiWnMdxPraWKayHPx/6xgnxe4wMsLFkk4z4vAqeBL+DMoCo2i9DWcB1vqxDSOozTc0IXP8XodPKdUmZ8k8Rt92ksc0d+68RfFNxqngi/PfhRwp/0TztNlXurB0tm/B+FiSb4ZGMU7K5+cXBRsl/48fisVWe8n1mMSx2n+arV7DKKs7Ui75J/7cObd44UlwTqt8wGzuIx0T+4tmONbLQ7jJF3Gi3jtR+9PpQ/Pr8mPIPs8wAenUz7G8aYo/C9I030Bm/+axrhoma6i4l18acn+X3xgkIOv2cEMksOnXfnNp31x9BKv0w/+Koyygr/DFe4BpvE5eMP//4pX/hqfkt9NdgtUe2zj1+S5KHKKW079ZBEUbeQgvt3AEQ3ci4N4FeBLwackQeSn4Q+6fr/oT4vjeacmxy+KVhcjYOVV/PCjV9LnZkCIykf/Gw4jqin9KFzg+356xpcdJLjgR5CkIe6o74o3VuF8fgAtCbbhT//bob6sMTdxuE4Plw0fJvDpmubNviPYiQKtqL8cN6cGNcUNumNuunYDF1X9md3IqZ6pQ38ifnnZYphZQI4XUAsje3xRgvgocZ3uogSOJUrsi1EyNWYmsAvK2FOV3xg3poKwcUYXNiREymHjWd2FjTuWsHHqh42GgeKNLlBIUJQDBQlwbStQ0FgCxatQYQb7eJlK0mU2XauKMCLDwxKKPITr+btsBJjBFfnbbfjMhMUuTP89BQI++lp65xQT2QEJiVMonaLnKxU84lCiQnjub5fHg22axN+DxziKk8NlW4aB8J+08EMeH35kYFQj/N5Og14y5l2Wxruk7NYucxw2k05jG3Qd+S0VHyuPcrmaLKamY9WkqrwluKpwX/H3pdOKgL1wzSbzTRY1AMcv8iob93GgC1NFWfUP/vP3RRK/ruelzorcJ8N1WyAzYEyE/fP02DeOIxmpapmMMvfqNAEw72R2A5nVd3R6T2ZAEZW5iplM6HyJpfAIqKzKHjM9IJZQN3ZMQFUKVTCZLY/JQInHTqw2ZCa7wnVrick8hhecxkTGaCVTEZGx4s+WTGQic9LuhMiUs9RlexLfNiSe0429jB6EKhFYzp2WmtPSFa5m72gJ0DVZkmmJXCCUTEMis9d1B0lDl+1eHFieJYeGTPU05N1pqDkNXeEZ946GmAEjO16UpI4gc72uZFYSOeveMMVRlbfuOLRtJMlap6WSrYCjyN1K8aIkcBTQl6MEeRP6GOtHw6gpJyGaLKCiARs7MERyOckUeehokJxUSio845ADGzG+EpTzIHTpWpXwkqmIl8bASro55HDmQddzbMt1TMdDFqT6j8e8zXSn2gzGyDNWncliMEZVSRZVwqRRCAdJYGYlgbmQyViQk+hD85elgr4kGuP0FN8YCEw7Y5wRLV5TkeUKuaM5Rd3SQ0XWtosGSTRV1raFSJbYjfJbySSbqcrNHgGTaOdlWzaW5SawoGNargUMR40Wci4TliQthITfIk0KiVxvT5x53neGqk5yRowZdKMJLpehNDO6+8VR2hndNh3WzT0lKPR6ZLMQcMTXK42HRD63IV44Nvxspoo084PtREY/cqhKOF83VaG0y
HSwLma4xhN2rmacBUwwM45/wGEz6prP1VlICWtNCa0omp2zRE74MFOXrMtOuDFzvFvHd8cnGoWZEg4y7xx0HQdpbX1zi0YaExGbGq4oN5xNGnAk05LI33Y6EVPKacmsoCXoMY+A3qxgsWTa26MjKe3sbXkkxVR0rFgySbHrVySnfVvCPSnGuhTPqrLPoWMzGQGSiEzFzJwl0U0f+OI7AXVp56cDW0wFTZIyaeqSnRtOLpj5Htlr7yyRa37ck2x03FWVTm4DMqNwa+adCq6q4atnbbtpoSXPRDkx59heTRREiQUsAQuwQdZomzXen73E6Ot4HdB9Pms6siMLuIk+qa53xihtJxGUbdhKGqyguQa9lzDxHZdyhgC6EpcdjeeZaiSixvuGI0aNjDdm8GLz10WRrceZQWU4mnccefElB0a2GqAQRt4vu8NompLikauITBspAJK3zN6vvgXzebhecIhigZaq0+XZjnHbfPM4fhxZ9KBb5OQVshEyDijkZaMr6EesrdRENtq8ycNHVhSFm23WHm/LMA2+bPzD4Pgt8TcMPNtNvhv1S7jLBugPL2EUMU3Kjdc/fEDo8bG9xjYQ09uPdl0LKt3mfZfP8XqKO/MqXPvZmRPTibJe/y3BrxbZq8/x/NAUv7wkPm6UX8cbJxaqBA6pAo43FP6uAdpYkWKsacQD5agCSkIWXrtu4zXcxfgLwIYzkmZUU3RcoS8YN8OxZy5Epb+GagNeVa1E7cE7JDu+a7Qz40q0qNmCZ2WfUak92PAU8uZJOo3C9fctz7i/vIW4/2dNsk1H/Ixk0kBrSkkZzAuv80yEzNsGiwLEK0CAZp6pikc5xWkiOUxqX1mxPC6FIl8lj8iQBOSOlODKjoVdEa5KgoW3buJdj2BVzK1A3lhZ94Bgu+FSYni1waUSU4vKa0nqzs5PrIbrSNT9hpF2+UKsGr96sumMIJ6yvo6iNfuScxqhKF1I0GsHwPeXU4GyxR8WY47LWfxhM7JFReY1lJAZJBglZ383ctINq9vUsZJ2qUCs9cKSSe2p1jP7/MvfC0np8lkoSgSyBslKlUk+iNnyQ84+SGSJDeknngpSkmDqjYiUtFtLOwVAvKS/6DT2mf0Arp6SBmL3QzFnyV5rC3mnEofvIEmraktJFxkm/TyT0iEZIaViaE0Gj/cU6yoKE6VYa0dh0lKsWY5StIINKP51E0e0stYaJEk5VStrb0+fZtZ6qJxTIYJhNBkfUyBIw7YFTMFGdBPjzqmRqaZHsvoUQFq58+4m2WxHtrtJVrpqls6XY9f6A0afXHWnTjLYHZeati7N5TXtkyaoXefYDRw1IvBnLrzY/nVh5Cpy1aU5O7xZdUcSgJkcJLmKTnnPCqC8zrEaCZTuzJSEJVcTUAdlnZV1gxLMXnt6mYxSe6CX3a7ksssPtXXgEvfMsFhTWSafGNw6I+k7LprJZVfPtWQdodZnuezqaSN0i2Q/5bKrp/PQMZS9lMtuHTtiEHK5DXlcwxHQQx7TS/NaFMd6DrRz4HojwhTQgJ770fQMl7bFMRn531Hruzj2NDUNOkWyn+KYrEy7Q9l7cezVMR8GIY539AOlBa3s1bAD9NDKXldi2dNzmO2dWVCjqShTQAt6TuH3DJfWxbKe0/UdodZrsaynidAtkj0Vy3r6Dh1D2UuxjOqYEUMSy20mKqMa/oAeahk4XcllpOfAG3WzEEYfuYz0nMLvGS5ty2Wy5vOOWt/lMtLTRugWyX7KZaSn89AxlP2Uy7wd8Uewn35Jgw0u5WE9/BzNcZFvhXw+8ys29fc2PPSTYgcvo6b8foCuhZ4kaetzv+5W7lIe36VkbLKNeHuhhAwYOTJTQH7hpwtoeLugBA2/UczIoKFHXoJtnFXhQrbUOANMw80mx4Uev/Eo4sfO6gAUGQw5Rq8EoI/BAouLEnSvZ6EbzQa+ZA8FQoaCGVSR9GmAGT5M4jgtCw4cBMtP8TzbreH9/w==</diagram></mxfil
e>
2207.10883/main_diagram/main_diagram.pdf ADDED
Binary file (44.6 kB). View file
 
2207.10883/paper_text/intro_method.md ADDED
@@ -0,0 +1,46 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Imagine showing an autonomous agent how to prepare a sandwich, and it learns the steps required for it!
4
+ Motivated by this vision, our work focuses on developing a framework that allows an agent to identify the steps required to perform a task and their order after observing multiple visual demonstrations by experts.
5
+
6
+ Given a set of instructional videos for the same task, procedure learning broadly consists of two steps, (a) assigning all the frames to the $K$ key-steps (including the background), and (b) discovering the logical ordering of the key-steps required to perform the task.
7
+ Procedure learning differs from action segmentation as it aims to jointly segment common key-steps (actions required to accomplish a task, as shown in \Cref{fig:correspondences}) across a given set of videos.
8
+ In contrast, action segmentation aims to identify actions (unrelated to their relevance to accomplishing a task) from a single video.
9
+ Furthermore, procedure learning deals with additional or missing key-steps and background actions unrelated to the task and identifies an ordering of the key-steps.
10
+
11
+ Existing instructional video datasets consist mostly of third-person videos.
12
+ Here, the camera is kept far from the expert to avoid interfering with the actual task.
13
+ Due to this, the manipulated objects are typically small or sometimes invisible.
14
+ Additionally, third-person videos can be captured from various positions, leading to wide variations in the camera viewpoints for the same task.
15
+ Further, most datasets comprise videos scraped from the internet (YouTube), which are noisy and have large irrelevant segments.
16
+ In contrast, egocentric cameras are typically harnessed to the subject's head and have a standardized location.
17
+ They provide a clearer view of the executed task, including the manipulated objects.
18
+ As a result, recent works have introduced datasets consisting of egocentric videos, which have proven helpful for various tasks.
19
+
20
+ Motivated by the advantages of egocentric videos over third-person videos, we propose an egocentric video dataset for procedure learning: \dataset.
21
+ \datasetsp consists of $62$ hours of egocentric videos of $16$ tasks ranging from making a salmon sandwich to assembling a Personal Computer (PC), thereby ensuring diversity of tasks and facilitating generalizable methods.
22
+ However, egocentric videos come with their own set of challenges.
23
+ For example, the camera view undergoes extreme movements due to the wearer's head motion, which introduces frames unrelated to the activity, and the actor's pose is unavailable.
24
+
+ To overcome these challenges and learn the procedure from egocentric videos, we propose utilizing the signal provided by temporal correspondences across videos.
+ As shown in \Cref{fig:correspondences}, key moments, like putting a slice of turkey on the bread while preparing a turkey sandwich, are present across all the videos.
+ To exploit the signal provided by such temporal correspondences, we propose a self-supervised, three-stage, Correspond and Cut (\meth) framework for procedure learning.
+ The first stage of \meth uses the proposed self-supervised TC3I loss to learn an embedding space in which the same key-steps across videos have similar embeddings (\Cref{fig:correspondences}).
+ The second stage consists of the proposed ProCut Module (PCM).
+ PCM performs clustering on the learned embeddings and assigns each frame to a key-step.
+ The final stage of \meth creates a key-step sequence for each video and infers the ordering required to perform the task.
+
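PCM assigns each frame to a key-step by clustering the learned embeddings. As a loose, hypothetical stand-in for that assignment step (not the paper's actual module), a plain k-means over frame embeddings can be sketched; all data and names below are invented for illustration:

```python
import numpy as np

def assign_key_steps(embeddings, k, iters=50, seed=0):
    """Group frame embeddings into k key-step clusters with plain k-means.

    Illustrative stand-in only: it shows how frames whose embeddings are
    close (i.e., likely the same key-step) receive the same label.
    """
    rng = np.random.default_rng(seed)
    centers = embeddings[rng.choice(len(embeddings), k, replace=False)]
    for _ in range(iters):
        # Assign each frame to its nearest key-step center.
        dists = np.linalg.norm(embeddings[:, None] - centers[None], axis=-1)
        labels = dists.argmin(axis=1)
        # Recompute each center as the mean of its assigned frames.
        for j in range(k):
            pts = embeddings[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return labels

# Frames from two videos embedded near two key-step prototypes.
emb = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 4.9]])
labels = assign_key_steps(emb, k=2)
```

Frames belonging to the same key-step end up in the same cluster; the final stage of the framework then orders these clusters per video.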
+ Current works mostly use frame-wise metrics to evaluate models developed for procedure learning.
+ While these metrics evaluate the procedure reasonably well compared to simply calculating the accuracy, they do not suit datasets with significant class imbalance.
+ Furthermore, procedure learning datasets contain a significant number of background frames.
+ Hence, a model assigning all frames to the background might achieve high scores.
+ We propose to solve this problem by calculating the scores via the contribution of each key-step, leading to lower scores when models assign most frames to the background.
+ Further, when comparing with previous works, we (a) use \meth on standard third-person benchmark datasets and (b) employ existing metrics for evaluation.
+ We show that CnC outperforms the state-of-the-art techniques for procedure learning (\Cref{tab:table_third_person_results}).
+
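The background-dominance problem can be illustrated with a toy scorer. The per-key-step F1 below (macro-averaged over non-background key-steps) is a hypothetical metric in the spirit of the proposed fix, not the paper's exact formulation; the data is invented:

```python
import numpy as np

def framewise_accuracy(pred, gt):
    """Naive frame-wise accuracy: rewards all-background predictions."""
    return float(np.mean(pred == gt))

def per_keystep_f1(pred, gt, num_steps):
    """Average F1 over key-steps 1..num_steps (background = 0 excluded).

    Illustrative metric: a model that labels everything background gets
    zero credit for every key-step instead of a high overall score.
    """
    f1s = []
    for s in range(1, num_steps + 1):
        tp = np.sum((pred == s) & (gt == s))
        fp = np.sum((pred == s) & (gt != s))
        fn = np.sum((pred != s) & (gt == s))
        if tp + fp + fn == 0:
            continue  # key-step absent from both prediction and ground truth
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(0.0 if p + r == 0 else 2 * p * r / (p + r))
    return float(np.mean(f1s)) if f1s else 0.0

gt = np.array([0] * 8 + [1, 2])       # mostly background frames
all_bg = np.zeros(10, dtype=int)      # degenerate all-background predictor
```

Here the degenerate predictor scores 0.8 frame-wise accuracy but 0.0 per-key-step F1, matching the argument above.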
+ \mypara{Contributions} The major contributions of our work are:
+
+ - To facilitate procedure learning from egocentric videos, we create the \dataset dataset. The dataset consists of $62$ hours of videos captured by $130$ subjects performing $16$ tasks.
+ - We propose \meth, which utilizes the proposed TC3I loss and PCM to identify the key-steps and their ordering required to perform a task.
+ - We investigate the usefulness of egocentric videos over third-person videos for procedure learning. We observe an average improvement of $2.7\%$ in the F1-Score when using egocentric videos instead of third-person videos.
+ - The \dataset dataset and the code written for this work are released on http://cvit.iiit.ac.in/research/projects/cvit-projects/egoprocel (mirror link).
2207.11761/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
+ <mxfile host="app.diagrams.net" modified="2021-10-20T09:37:09.720Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.81 Safari/537.36" etag="_R768bdhkRldS9UCyjhR" version="15.5.5" type="onedrive" pages="3"><diagram id="IXcucw_Y5yGrzUrozrWR" name="Page-1">7V3rc9pIEv9rXOV8sGrej48xjrN7e7nbSnK3yX3Zko2MlQXEgmwn+etvBiQhaQYsjB4jLJfLoAdj0b/unu6e7p4zPJp9f7/0F/cfonEwPUNg/P0MX50hBDli6kWf+bE5IyXdnJgsw3Fy0/bEp/BnkJwEydmHcBysCjfGUTSNw0Xx5G00nwe3ceGcv1xGT8Xb7qJp8b8u/ElgnPh060/Ns3+E4/h+c1Ygvj3/SxBO7tP/DJncXJn56c3JEKt7fxw9bU6tvxx+d4ZHyyiKN+9m30fBVBMvpcuGAtc7rmYPtgzmcZUPMPHt7xENrn59un78+mG0+lV+/nmRwvPoTx+Sb5w8bfwjJcFjsIxDRZF/+jfB9PdoFcZhNFeXbqI4jmZn+DK94e00nOgLcbRQZ+/j2VQdQPVWffWFHmz2faK5xLvxV+Gt598+xMGf8TL05xP9vy7XXAM8qt7ehdPpKJpGy/Uj4Lu7gN3e6pHiZfRXkLsy5vIGAHXFJEdCIf10wffcqYQ874NoFsTLH+qW5Cqm1JMUSAYBwgIgmRAnYV1IIfSwBIILpm4RYnP1acsWCAgPSEix4hDKMGHJLfd5DqHMA4wiDiXTLwm9/YRPJ9lTbaFUbxI07cj+/Z9/fvpy8dvnB8pmD7+8+8flZPGPC8gMIJfRw3wc6A9pgkXL+D6aRHMFbKQBWyP1LYjjH4kU+g9xVMQx+B7GX3Lvv2Z46aOr78nI64Mf6cFcfZsv+YPcp/Th9mPro+3nxm+1/KrDeTQPNmeuQ02Eq714r6KH5W2whzIkUSP+chLEe+5LCBiMCxrC5J5lMPXj8LGoMOrHUxp46if7lBwmNKoNYnAQxF1DBalTWCHyerGqD4Lko79HoXqUTEkjIgpaGVHp0eIgG3ZJPleCMnuQl6NLaKvovg5lK5wSYGJYRTNogj6dKitUE+vpPoyDTwt/TYknZeIUkSwZMzdwPL4DNmMGAo6lHvAumscJYyCQHF/7s3CqAfsczpRFjMC/gif192M08+f12D9IFEWLZ5KVs3EIMG2a9FztOJjW6bg+HCoYld3ggFkRBwg7B0KYQJiOwukDITsHAvJh8qk8+aRBhJ6Z+sR03dzBuGmsmrYfyzKdhk3ash7T75m3LepTpb2xLSDtXpUKh8XMOVWKKqpStwx5SzjTHU88wRj2DmOInQI5fe68SsWvT6Ui3LlKRRbvzBmV2pjY1C4NOyJf5SmUtx35Sr9pTtD8Gt1AcRvY3cAbQQntzg1EwDnbBVn0mzOC5pztUnkZAbk1r5lhSL++ea034ubCvAYGcasubrSquNXu8h8nbtQUtxqjzX0Rt+6j/liaQJw+DkIWYBCgWxCIJU5VGwjO+lQlECDrGgVooDCuTxRcXXcpoZCRtzMUzACDfxsrYpWRUN8vLpLcT3LibhU1AkVdI1luFo7H6+y3ZbAKf/o366E0qRfai1t/E3p5Rq/0WMpgWG3AgAZqySSeAwyK1gCDuJgqJ8wZhFoAw40BZjqqs+gxDAbAMsBgSdGZiLFWETN9nXG4DAYpy4OGaEkvmqDxNkHjFbKHJ8orWpRIxuomWZb/nSB7lk+xtpGSlGNpwJxirHMMbIyY1JSAM8SmmtNvlAywSZyRpIXk7K33ms/QfoEBoUyT/yb/ycRzP1dVz9MABTAt5gKUFixFY1CanmQP88dqAgcWw
UEd23LU1Fk9XH+vCRsEHAOHm+D0byWvGcGxTVFNgYPozW/h5P1YfhR/fHv7MX7/CT1cmDb1mU50J2d0tLgPz8/45f2f6o8Pz7i6ONKPpY/HyfHmAK0PjGmNX71JBnve2LNbZDmok1PVjUIbGxWDuRX4oGwTNsYXFHplQ9Cc7zBPo3h53qBN8YbNELTxBirxxvp4AL86+LzoaPPuoTdV9gbMvLyfDLioUTsWFsFNoxjtgGv/DjZ0y9OxsZI09lf3awLDIr76/O9+rDCbr88ggDPU0+paZM7bV+j6Gl/tcwifXUF6JhCVnjsyQwNKXNTLghWH2KydGfkZzw7ES+M0nOfBzXT/AfWdqJPybPxS1MsDtYw6BO2WmLqwyn0wzz27+i0SyX0+Ubb2gsnjKnxM835xRMyiVherhJJ8O+KX1/XEVCUtelkCdL36Lcw44OMxwaPDyW1gd30t5WhkAnF9PRq9YzUFtwEqLUkQ2DUUELy+EoEGNKLoqUY0raDFEZHCvmpEyDpXiRBa6D7I4YFyKNP+ST2Tw/S583J4RFC4r3KoxKt7OXx9ebANyGFVD0FW9XNbksNGqz8COKYBt8mnZBz77GXyWYMcilLItd350BpyhRb1lxPDrcS9257tl1QeGJUlFQT5CIGtmtDumsBaEtrrW03tjcC6MHECl1s69Gbi5FUNWMfqk8Hra9rSJfputSCQ5hJajWVFvdHC3QdWpRnPqa+syFkcOCsFuGHHKJjFXYuXo9AXZ76MAmQdw5Blk+WXGY6Qhp6sMpRxQKRzHMzolprZ74eyhy1mkhQ1GDMga7VUBQJkqjB/EQyIbRErpq9A1jlkZuzqMZg/DAVhuyBDpHPIbPUw5fmptSSkOsuXHMlbErQUJxG8OETVvKXyQLw0TuN5S2acq0NOcR12CASoB3djpLaBt/WjegWuNQSymCjYvW8NLduNNNpoMQtrfc1deVnntzZU/vNNc6o3qXJsswtoa1NlUcAnL4Xd+/S2DkavYMHHQKJ7rx46lUF/gsarojAqWTHihVZMeSReGqg+K+Yq/P0Liz9O//f44cfkC/zrJ/l3kHX0r9JEwV8tNpvc3YXfNaM41lUBorSANyUlMeQQC+Fxky0Ufh4VYPtDmhJNS2Mrg96dbHl3OJR1hFD382R1Hcw9TLZb55U6OGHsAZhtnMdTh6P7rfPs396M7/XQragLWEA8QChXyBEkIMfFYKyJJCYeE1AqRAlngHILkJtbIIeUbF5YU0ieRMrasUim9imHHseUMiKofjFdRqega7T5bV+gy/p0UI9IoOZOiBFg5UJBZ5G1Zy/aJuB8swA6Wj3Mztf14wucax7xmO8ksb7y5hU0DqiJhS6g8CTKGVi8tEYj1BxtMcDyc7QANtONsHIb+vpYxVwc3c0qcD+r6LYTdP1r6arF/JmGb36z0i+FYdHAgfVwoFQWXlFvEeABILc/5pJToyxn/16m534+flMB3g7XCdf5zy2tE/JSqEWa22DbQi11rBPa8TLzic6DAa9tFRFzCy9kzv7ndwNe22gUcA0w0xs+nwyA5VofA4+g7SQmHIPPdIHP/QG+bXijPJ+ZEY128TL93vObAa/tJljSMbzMNbfz2wGvrFkh606+gulHePf1YsxuxmH8BK7+/e7dzQWqsDK3c7llGcV+shSgNP7LgvbVifqS9RfqYUkgAIJKgkumBCTCw5gQwKSgBBBk+ltKHXoIECyIhAoDSC2BIs49DCnGQBA194k0jnAMUkx8+3tEg6tfn64fv34YrX6Vn39a+o0e1Qfn4NbVdWirUn9dSBn0sMi5vJWEoY51aiuJqyxTH5+zU0dhrvXxLXW5+zip/YSZfU9dU6Otw3sc12E1oTJfI6f42lJkZFK4ifS0xlhdVmR12Vnx477H7mzD0Bp4nTiuwi2FXD3n9TTq3zdmt6xWtLtfZwPczohj3G5Go/rO7ZbE3312gyvMblmfr9E874TZiXCK2ZHJ672bPXHZUlwrl
K4Ja0kL6puiplR4sNyZygXanoJesCqGrglr5gn3Lh5CCPIYxZhIxjnBHJfymKjUdMYUEkaIejEDVB0lMtkRqSmUiAFtNi6ICfNA/qekNyjjNvZWroXH8h9DJrHRVjDqp6+lVUirkZJ9oF0wu0zs55QDIocFiLAlG0vbKlSJESWEc4HFDnRs99Qfq7V0E2lVOXUIlWW7QbewMd3SY/pY90qMGCqI0YXzclTBvU1LS8KZrz3BPDb2spRnq1mm+sKlf/vXZO0nFyRP/6hb1v/sbQqkVeKS57m6j+OFostbTQl0fTueEy9UE9ZdqDzwpXer/iO6Hvuxr170+ZV6fdJb/aq3F3fLILh4XG/7e5Fe1DS59sfjC4iEt5hPWmUgWrRQMrc3P01SD1MGIZRS/+FUmgy06576Gci2S9/AQN0xEC6mWVt22HSKf4g5Wey0aXNEhA1bsESAtQWLoBCQIlY2YLH01PnsslkAASHwpFTXKUYCSGLT8oJ5iEpJGcJMEn48he0FiPiAXbOfJ/F+FI8nfFH/MTN7Yx2HwIoxGSZCMWnq0xXYV6r5kxPlSGNEgWCwBuLa1V+FpkFVXLJDdEJ9xE73UsxyZUw27tglw10vXu+1JXc50vtZ5YAa15IxiRw3Jk1F45BT1jBYzntlZjjPIa+sYXDSTilpdMOcr93CqkKgbzCq22QgWqxCT+dbV63qdD4eGMgVBsJFDZR1CnCVg+gBToMBoUnF+vwDID2SN0jLnV+5deWsazuWOjf7VuT0jA+e53RXZ1NqzqYdm6X10z7VMlJxufKQdciBSVQSDh3SIHouo3S99skBMSXFLegsK3KnJjZZQ2RP11NIQRCFklmajjuFDKuQBTpYGHkL44AIz+Hcw7AHKORSd0TDDIpiFE13w5IcKmtBIiEFpZZ4pUvGB6sQFB6YqzXmUvyzZiulnzAFmOFSxbinZhIEMZGQEo4Iddy0Tddo3eiVudNWdqTxJRLlhErBbEmrh7bBNMYlfO+49XXFtDNFhXXMTro0dpE2hyn1JN12V5Sl5S8upF4dy/orWqzIbtor2qE112h6l8hoFHYq087LyUrFrg+NJYuyClsiOFwTkS4zPlsTkfKSI0URzBJW6H1tJ6dOsXa2m+bplACxqtWd2T5GrvD7CdZ3OqbJITi5WmYOKrO7W+qdm0GW3tUNldldQqfYnVumy2Gr5EMFDFU1n7hb8mVGAvpXO+a29cQtpY99m7GNmlIJTY+3Xao22qy8u4LS7glr2dGodxrBVAldU7XrxMBaqLq/lFQwHRPblpI60xPfjsgxmwC5VEoqiGUtq+NkD24mHDiUtHxIqjl/wc4RBXxcr3/jprPrUMpys1C5nrEsTO/XoYzlZrHpWx2pGFbsmywDPJyB+lZHmnLmwECOMFDP6khFhfV79+pIBeQeQds6UouQdlFHaqfwAaWOLywjzUB8bWWkosLyed1lpPUR2/0yUmEGIBzyyA4qPclY5WTLSEXXjce6BMt5p8wS26gv8uy4JPWsjFSYwQ0DqcGmbpOBelZGKociD8cYqG9lpPKA9j4ulZFKS2Vc10asNEMk/SiGk9UX21ydSmXXmwO1QPsTrSGVZtyiY4u1Meh6VkMqK0Q9BvPihWV+h3PPadWQygoNLAbmao25TqyGVFapCRlqSHfVekqlayxJqkfXkHK6d9yGa0ghqODt9Kt0IqVgz7Y8lJYoZauu0k6KH1EXx4BTuwVBUKFLXb+4PSt361sdaPbgJ8XwrlXGQdAOf9dWKwTPjqsUOkaSqu4y55wkdR0uaEKSHCu6g5adKw0Sv6JuG2xvtw1l5noEZ902TOxcarYBITKg7V2th9FtQ0fWXJIfenKOBqy8LWnGYa7MGJadSfvfhMM1jk9XGU6J41lljnfMRoJmbLV3Rb3lIknXGJ4OzkZ1SaoapXJPkswYSu8KucuShBnwmEOSBE9v6khn6/4xPDJTyvpXYF9ieCiVg4bcYXhkege9m55x+pFMq
1DtBXdNWdO57Z26plQafTfcIK6lVUzvVAMpse1aN3RO2RPogEoI3tt/AwNN6L5s5Q5RTRVfLTTg4HsbcCBBbAzedbIsRE6XfB1SqLfllVPtwqGc744VVJdguV7yBdHJ9LN5ATp968QBUQV3d0h8bJOF+taLQ9lSAwu5xUI968YBsRmC2GnettqOA+5rx4Eo8wR1blv3HSQ2YxFHkPgZGF9bQw6IzWDETurW1ZGjRnK735ID4q6XzevrJLDllpNtygGxGbBwyEVrGi7nfTTsdLijaXh61pgD4grlQ4OB3SoL9aw1B8QVqi8GFmqVhfrWnAPiCqGiTrpzKC9rX3cOLKyLap2btMSMm3Q8CVdmdrkDOxMjZydVYsYc+tGj4xDqn2iTDkjMaMbpiU4/23RAUiEWMpgaL2yl8AL+Oa1GHZBUaCw8sFd77HVirTogqZBqM/Tq2L0vu7Kqm9jvHcq94zbdq4OaBse5/8ZgDCUzcZEB/ETP3CoAg6VFAc3C8XhdWLoMVuHPxOfRIrvQX2b99ejlGdXMoJPGV5v8cWhYJwkj3kXzOMkxR3XVNzIDY+kJZDhOzyQs4sYSFqlpcZzfvFZwlH3v8Wo55Q0iYk7S57evFRE1AWZRhBYQQfTmt3Dyfiw/ij++vf0Yv/+EHi5sC+xE/9LR4j48P+OX93+qPz4+4+riSD+WPh4jfcyv3iR3Pw+gncq52S05VR1om+9WrLbRIF77s3CqKf45nAXKJgL/Cp7U34/RzJ8XcYaiHpxJWimU4kxNqYO6cawJM2EebQhoW5w3DzQdrR5m52twlce8Bnt98Jg/WOAN5iN1u/5FbKphvlmqd5N4TT7mzzQQ85uVfikMi/YPO/CSZX03S/LJ7B3iWZalBPB4q+xkC/lq9LZ4nhCadc0AEJRThSSzoAkx9yxZK7SxOcC0YXNgJpJ6OmDCmsDEEHqMG/V62yaCHiN761dbx7lChKQ9DzavLVk1FHei5ojTy0kpv1KUCn+qernlgXhpnBd7tepwGUVx/nbd0+hDNA70Hf8H</diagram><diagram id="6y0b2vutadF9k0A0lmUK" 
name="Page-2">7V1rk5s2F/41nkk/mNEFJPQxe2mTadKmSdMk75cMa7M2eW3j2ji7m19fcbMBHWMcgxFedtrNIgMGPUdHz7noaECv54+/rZzl9K0/dmcDgsaPA3ozIPKHWfKfsOUpbsFC2HHLZOWNk7Zdwwfvh5s0oqR1443dde7EwPdngbfMN478xcIdBbk2Z7XyH/Kn3fuz/LcunYmrNHwYOTO19ZM3DqZxq034rv2V602m6TdjJuJP5k56cnKL9dQZ+w9xU/Ry9HZAr1e+H8R/zR+v3VnYe2m/jPhf/95OJp8/v7z96LDrj39b5M0w7pZfj7lk+wordxHUe2uSvFrwlPbXyt8sxm54DRrQK38VTP2Jv3Bmb3x/KRuxbPzmBsFTgrSzCXzZNA3ms+RT99ELPmf+/hLeyrCSo5vH5M7RwVN6sAhWT5+zB5mrwsPdZdHR7rrxy1BG5OHCX7hxy6/ebJZ8XrHjUoD9zWrklpzHEvl1VhO37H6JbLnjnGgmsPzm+nNXvoQ8YeXOnMD7npdUJxH4yfa8HajyjwTXIzCmSAE5fLIPyWHScbXhjn4Gd3wRuBOkFfDJY393Zpvkm+ZYFYXZTKrisAsfpl7gflg6UVc8yOkgj++97N5rf+avouvoHR6P78O+Xgcr//9u5hOMOBXhDe/9RZCIC0HJ8a/O3JuFMP7tzeW0QNAf7oP8/d6fO4ujgfvurgL3sbSnk09Z0hfJLMaFkcxrD7s5wUzQm2amg7Stdmy4gs24Pmzu7102GkHYjLm4Q0gvbLCZBwdbraND7F5lnqYyccr1OqYzKTAKG0T+AklSZeT1YknpY2dnS5UdP8vZUgeFLHqFfOKwNLupkNPnzo5L2o/LyDtD2x+XSEHHqZHH2iMX5rF3tmVamvHYrajqozYpUeB5OQpkJxYRkq8Y5KFwZt5E9tbNSPaSK3v9KuwIb+TMXiYfzL3xeBZpXXft/XDuoluFECx9bxFEb2JdDayb8F5S0a5jkLCCZqL8stCnTTtssWgSOJIHzlRRYwBqtDHUqILaW/+75/aoZVGz8CHQ+FlBU+epG2/l9qNNMfcJOwSc3RBw8zfrL+N/+CYY3vzx/e5xE3x7eByqM5iC10QSy2Whi1i1ycZZL+Ooxr33GDLTU7t1GyhJRGCQjUVA3U1N0+BWYWLihqX2Ooa6XfB0Dqu941WDS+n4dBS8ce7c2Tsp1YHnh6Phzg8Cfw4Mk8AvMIr11FmGN5s/TsIAl3HnrL2RsTMNvgYrz1lMZsVBUZF9SF7zTzqEDyNdKn+VBxAhRh5PigyU+zFVbE1imMJi2KK2Gf1WoZZ0RQh7d0ZTuKucpDW3dI2juB5s5WjLIZmf47BVAJooQJMQRIE4E1Iswt+mCvS+c2pHWuUxrblUtEMaW4VRSzsNtUZWunZQlw9qXkAaawP0h/fo39Vn8/UG/T4yx443Mr99GlrArM1mIZ1dL2W3ZvFm/27C3IgIi+F9AsbLQUhYmDNfRj1OJW+mV0EC0CICaBUBVDxnQGKvGIpuFxPc8GbYXj7uvkv+NYn/lR1gTr8O+JWDB/wmPk4fVb55/LTpyV0g6I3xCZRnE/aWL2YJBALIIWmKk2M1PKoqkyb8vwd8ssczxOPoftaLC3dMolSzXlz4xGT8n9+LW/rcF0b66rGIqWkrJhqDLDStZnzIMX8B7K4eSE2BDdI5SEkFk/xZqt20Hw6qXUK1Urvpc/dqd4/azQ9RJqm4pSUVLxW2CzO6atPAncYWsrN6VSw7BkjGhU9sLb+o9Ll7ulSRAeOODVjVZO2VcRkd7hi+QCqFgm574apEVyvSEXft8ZGqemC3qchhbnEjdQRoGpsCUi8Sz6F1vfYmc2dg3eKvOPUm1jS6x5Zrj00IP5vcUcbyo7tqntP5YMZmcXQTbjAKRDcysFuAQ9FqyqEIZGcAuJIe1yyuQo463XGFaLKCK+1xPTBeMWJyNj4blmBkCUrE0TayJOVrOfVeDPhVNs
Z0HfZVeDyOjvnNLxcZd6qJ9JM8O7AhCdzGmOqOO4EiCPk/tRXBCw9u1iRktpL+FmW/lRoejQU7QaGDKGcvdN0WOhuaYLWSOtUddYLUJSeUCV0qTZDAESIFLpK0/TKXIXTk64XIX5YkkkYJXz65jBCDEZH5AeZdSBjryMGG512hmw6MRLKiPOJeHo+VR16QR27YdtsyCM3DvYOvHEjTKrIrwhCEZUtOPhhpIHOltai40M05QJECqUQZgDT1veRGZ2OYqY6e9sJnumEmOFUcdTpgBoQ8WwuJ6YYZMM4oah8zMAWsI3ZB6MDvedgxCxMKC58JY+3zsJR2dVMCaS+BR2lBCykzlw1qwXMbA8BElUm32hH2211rt8pCZSdHPshlfo1mzlqaJYNs8hc+WgqyqVyw4zNBL5vKBZ5otpZVW/rcgIYKO3W/hsq4WPkevRKV4Q0fOghkJ4fWJUEz50mqgeQb7lYn3f8+LEeAsDx1vpx5o1ApbtVU/PB71FRetg9RtYJslZit5/PKYpTWWUkDAUAuSspZs1qGmwblTSkaVF5qsFc0dSgaoCQdvLiR6KVo9pPxjiga1CuakLtIlmOS0sXcGPMtEcrqHhsboilbr0pKIzhY9wBg55RE+NwZPYENJDnVoZJ48uidu5JkPSK+P1EH7+AQp1W5RMU0mMqaIPmGdyFF38nKltamcUK7QGlj5ZZctQNcvZE4cKO4Z5QbRZKzfe0ThEl1n1/7CzkeN6MgNsiO0SoMir4ArOWU2zmLceHB4naS/ZqOGWtR5ndjes3OW2qCGxTw9EOGWlEY69Nh+xI7f5cHJMzIsqL/JNDWtRs4UdfC2YDaY2s3ia1VKF4oGJir3VQhPJgcYyAsozs5DqfxLD/Wnh2n1vXBqROne7poQo9x6gXvlHhcrHS0Vzi49MHL2O1OFnBXbQ0sSE5xU8sQuR9Fh0MVaLdDu34YoAm6OGjPkoev3ao4iV0+D4aKQ9hpVQrKrLJGuYc29SHqU/wDRhPKCO3RBNHEyNYeTqiEVmj3/C+JWY6mXhixDOOV1q1seoF/ycQuO2YbkeM9RcfAX8ifHdJCcUbcdrwydVxfPNlRV5pSVrHKfnMcx6pANZ/pyn8JmbqI1ES6a0+rQdZ6iYgCFSP1wrNBqnqBeFKkuelhQWRVZTeoZzeV2I2yWk1YkIY+L6WBCKy26yJ/wLmogOx1OCuwMREkFlN53S4zsLX14JZ9eNro1wQVwRQFJDNlRDRdEWSp6w/7FUE7QO1COotpQoCedZ0CU3Mw+/VAux2VlT1itMBMTWfr1wPtHWUUaYAY5FbqOdhFcjCcToLbdRgaEDDWqeIoqgDSXgCrK0BO8gIowJ1IziuAvEJULN4e8LRuOXpfP4uy4vrRIZCwZWZGcc4ZLsR2O4L6R20Fb1ubad7lSFffOEddvJZ6Q7LySqFYhMmaW1MCFEX8uPYWk59ImoUy+zfLZbhK4GdSeqHbOeNvEvbFyLucdNua5IsqfllMBbhzJ+SYY42pxAqbM7WiEplSQtzApDyfhqaB09zgtIjRmOeaV3AptRmpLYe8+lb1B/XieUO0HKowVZPl6eKx5YJdLxinTnPZLTVhxQuGp7p9rFaxH1t1/fRQwlAy7dOUbGiJYw8mBCZGRHs09/uLqi5YlT25175+lQvugsVeSpeWak8neZPihNU5GEMrGxuL6PIKY/0CqA8uUB8sLb1ssmDb2WocGqQ1qdzWEn1rwq64F651EDut9C9vMBHxwqBlwrC55mhe4tKZZtDEiGoPJ4amv44UNntGJbdrElATF93zDB2eTM5b5KzWHVd6gdRbIHHq6ky3W5ECapZuSHBeaeRQ0Kgp2xEUnt523Cs8Q6jQx1mNR7sCtW0n6lBMXMW4cv1KbDbWXRW4o3a29hbjzoYZ7AqR786tM6kJq66FGRpcD31hUHYgzFAhZt2DmZjS+ocZoBh6U1QRPWuqeLw44bw4DVW74szEsULkX4t0lT3EkXIBZ/
CZyMBNZfAJyDWgPX0Ue6DpDH0UDUbD9fL4Ho9Vx+hjWparh/IC6KN4NnG1k8HsAH0UkG+k9zQ2Qh+PFqcifSQt00dcZV/IM1QdbbwSaIXyr8mL11fgM7m0UMocmywvBCQVowNF0euqZY7ByNhe+tvZfQpIWvNvuw8ruHenBAS2QJobdIS1McYualeUNgr27hnPDKll28It0olVjIUfGOCyh5ynzGnJ9FbyzcWdhuXXJuuu9m2koF5DWeEa+Uf8JDVrHchk6Ld70V6NmnZBYEwbLHjSwhYvGNyr7ALZy8E65lttV0Evirr14omToebOuJqGEUtdOtsdbDC0/vu8HjkMbohbk+nf9cqBjIoCYhwsGaGT8Y9T/HpADwMqm8ASDHoB2lffrQ6oZXUA0GqlyzPFPUlf3HNf0mdqz6cq2rQObUV47p2Wq+jj58FQRVWGmm4Mqg1DhWLsR1quYF2lt2/eNW5I2vowYJJnwBSDhiS0B1tj1T5wHZvQgtguV+7YGwUSMoL8TbDcBJFP4s6dVYW88qyuTLh6WkAcFfYDgksA2kCl1LStAfzVPMiRvxg5gbuQ//fTbWGNRR5BRrZlnrKeIKgkVB0zrLm6vfvz9eLTj8+vxaePr3/7++rL+6FoYzr9GVf64aUwJ82vjFWcXjWbXVmFROTue1G5XTQ9oX1d4GJqjQ2dKingZ6WmjYwLq+q4aG2DRxCcKkHaOLFzNwfN3PvgdCE+OtUTK9EvaqT14LL7jNuWYZuMmRZl3Ny6HnOyjrhhioz91lQRwb1GeK6+qXxCFH2IXoQF06/n0hKPtlqO91wurtrEyarN5Br5yXLqvYjOcuJPtleNw+NfOmrT76kLWC7G1RfjqZJjCkAvNlYUlUD594pskKJs0AOyQbKykRWKXgZUjWKbBmUYhdFpU1oOJJ/JgQ1mcpNjgpgtT0pLD+TCKGeVGMhBoEgMzUgMqSIxtJeY6hUzuCGJHcXCtG0bF+kWw4bJKcfIYpZt2lw1WhoTGPThTzFaB69uXn35/tfk9+Fm6P0xhJwOBXkJSwDQm3TeOSwuJD/59OJSrmBYgYuQY+WlqTrd8Au2wszbShiDkT/N+4yq5kfw2s3jn8rtYsXEc7s8r6t4vrALQndaShcslcD2Jr1UNiKVacGDjkslEefINEQVTOfuu5SInbe6h8wwabYcOJDqLJg6aVHBDSb2C8qeiUservww4rGDLkzCfOuP3fCM/wA=</diagram><diagram id="9LwbIEWWd3nnRWKj0ZQl" 
name="Page-3">7V1bk5tGFv41UxU/mOr75dEex0ltJVupTWo32RcXIzEzbBihlZiL8+tzmoskGpCRBhrQ4LJlAa0G+nznnO5zvu6+otcPLz9s/PX9z/EyiK4IWr5c0U9XhGBJBPxnznzNzmjNsxN3m3CZF9qf+DX8K8hPovzsY7gMtqWCSRxHSbgun1zEq1WwSErn/M0mfi4Xu42j8l3X/l1QOfHrwo+qZ/8TLpP77Kwicn/+xyC8uy/ujIXOrjz4ReG8iu29v4yfs1Ppy9Hvr+j1Jo6T7NvDy3UQmcYr2uWXZ3G7/u8Pv6Et/Tl6/Mc2+J7/+D5rls+n/GT3CptglXRbNc2qfvKjx7y9rgiDv09fcPYlf/Xka9GeUCmIDg4+Pt+HSfDr2l+YK8+AHjh3nzxEcITh620YRddxFG/S39EbvFzeIji/TTbxn8HBFYwk1abC23iV5ACCG2fHn/2HMDLQ+y18ABgR9M/gGT7/FT/4KyjSsmXyFnwKNknwciDUvKV+COKHINl8hSL5VY61x4gUkjOCGaJ5O+Q6gEkO7uc9ogTxJEEcYSkYwgznt7w/wFZWhCGFBSWCU5TX6ucQv9s9xl6M8CWX5AlSrZHaEpQiP1zFK9Pam/hxtQxMNUYq8Sa5j+/ilR/9FMfrXIT/C5Lkay4R/zGJywIOXsLkd/Nzj+dHf+SVme+fXg4PvuYHS397n94zrWC1/GA0fP9McOZzaN41LWwh6FYtgsWiDkE3ijOOTobDNn7cLIIj5XJLl/ibu+BYfbkWmUZuAy6B6tHUOQ54o3aTN63dgmCPMsU0kkpqIsraTSn1CNNUSck5RmTUqi5Hq+rf0u6SKRhc1fXEVV05xQE+RMEOE004gPf7/fDg4FfmcP+z9GiarqItfvhI8aMbXQV9066CIeZpQSTXEhEkpZiuqyhGZAcyHMgxFAZhbwP+2Ol5s0E40QYI5Gss63BGPkmBerABmEzciRTwHMCL4G9A5lJEP1b7X7xAxQE8dh4JaOGcR+QAjkcCCFeeYpIiyiVmRPFRewAxXCdx2urdtndXaNH41Lsp0Hf/5Up+NDouP13xa2gwP4rSCDAcmb8EgZf+LjClcFqIfMwOKBy8a7AMoFpJGRN+FN6Bln5agDwCENZHo4Dhwo8+5BcewuUySuEXbMO//Ju0KiPJdRyukrQ5+Mcr/snUBYjbZuDDFRDk+DlETHHqwKaASD5G/k0Q/QI1JWFc+2g/WQV2j9iXqcFKlo2L6XtqYxAEhp4o0xXjwiiqmhPaG4hYDYgs0S8eN091o7ZF5G+34aKMirKp6VvLX6uUB+3Oa5qdH2n2/F6/GCwfuBbOPaaxFlQb94F5WfoMrh44BVGuPrNZeY2HYX/7Jop7RDGCCJFUYEXKNyHag8bMfJf5lOW7ZBavcheQqv/1oFiupGe+KdbKYwiZAVTuBS2oZrfbA3cns1dguS42OmP5fCwz5nHMlQRTJRHSvUAZCU8pABB8UdBq5bG4KyhTBUN+qgjX8BjICghAF9FDKrfXSkstHEBZzFDuFMqYephzQTjgFiFcxjJFHEAIb6pFGvdR9DwwM8k9TiglDCvBNWWW79fIIwiQLjP/z3sBM9hkuAuFrkUW4iq/KaEwtsG5ygKiXWBZzljuFMtHTSYlxINeAX2tXYZquKBg9jloBWK4fBPw/QRhuxfrFMlUg67tHIPizAGS1YzkTpGswSoTRgTTmhXj7B2OkSdRE8LawlgQ5jHFdr2Ysj1W2qPM7qJ2jeKjj0Cp8Chgd9ePcgDiurTQDOJXgJh5QsNITEI3Esb31BJwXTbpZGssmUcBI5gx0BVA0gDWWGDkUUkYItAFhptaXSimPE102j/mGivqoGNRiLS7ABiZegDsNF06IWKOYAyEmOICZKs4p3Zcg8LlymBwsKhWQe98FTLIRYVG+0IGZcTTGrpnFCMkYYBsEa+gDwejqsIoSFLNpjiGBnHp/jonrYzQ/VFPI0UkJkJgicoDBY
I0jDeFNv9eM7IWxIQTGWcSuhRUyrLzwUIAyLBGPHOA1pinqyjR0TfFFHvGO8ssRsSVg64cqWaAFvEKoPa4SACRRuYiMqbpZlMCuPj/oyG4pzbifWZbPkABrNYvKSCL6/DtzvyPodDDOgoXwXZMxq+UXW7QuwMzCKO3HumnDBkPyDDPA0BlcHCT9WHQ25fUaEI164Pxidr4OiM4Z306NYJMYk8gSjRRRFBWGeQhD+Od8JU80whSMEAEgfHhAswNjCrs8KLyYAyy64Phqz6sIBPUk3ALijDiitlZAcHhVTVlLBvxOLCB1aRPvAnvwpUfdWcA78w8qhx4YzOB1CTgllWtWyyDG3Xj0AQC9sEEgi8Gpw+qIKxBapr4lgSJg0BHyQKSnizg++h38nm5xVo8/fnbn/HmRn4O3zcNHYeeJIXF1bDUKFIWG6+xmaxGUKwvQbmlL17EvKfadswd/CHd6ZhifNPFboLIT8Knw0I9SL9usDaG2U6Dq6kuhxmhSzCsmlKnatodDbFjLRMttUyPSsvGO7d0tBPOXiX9cdlYPpzqvrVpZq9CDRkVauoISmOYXDa0Z2a8nBoZ3DNXZxNf1AyxjpWxZlbABEz4cDOFXzfHayDhjcuSNk3THXqW1tCW1A5FCKQ8XpM+dBqNqMaNrouMixVuHHXqpFXeOI3Z9yVdIbVHyq6Sa+IRUhEwRk4l3EQduOLXQeKnrTlNGkC/+S9pyVJQj1aVtTbN1UWuv16WTeGjlNPxXWpgr5++4Hcp02M6Qu2t7yrLUSUusYdaCrGLaY71Qmya5lgWIp2FmAuRWQMQImutqlsh1iWcq0IksxBzzaNWfJey1ua0PyG2mF/XNNH2Jk6S+KFGRkls9Vm39/7aVPbwkmZdvRt/Gy68/XDlS7IJ/dVdZEukZf8Wes7/LvCzXWerU96GL2Zc1Js0lTSDXrsvS5ne/6mKlhGPaS4wp4qln1VJc+1prfYleF+Cbw72TCNb6k7SnHt2p5Z+Q9LESFEjKTT0hMwnq9HphjLdi7pustaUMm7ORA2aN2lB181lmlIAd0CdFnxaoq6LMGWBiu0a2rKRGXWbS+BDFtrwH9ZpM1Nq1u9JcqmsUqlsUqnYZczv0s86olWFZbUbZ6/vw++gW5iR9VOu/gH5PntwaIjs2YufvuXeYo3XUdzT+hhCsXDZdaydqjND8IIgqJBnhYAElR4aEwbrInszBi8Hg1KQMgBN5AOPCIB14UhLYK9g3Z/WrN9MfrUkybfOkTVMqlGk5KhaDZ5Onj2E7OB0TzOElBU/ZbsgeFcc+HpgnTIPaKJ5qb6MBlfKxkZdzFu5zFvUTs+ZUrylN2GRsoLtdrIZKh9M3JLnLpie3poFx1r6L0cE9ebQ6DTiZa7sqio6Z4OpqtttNaa5Um49tx+11M0iaTUW3XRLi7tcXntrAIzNOOvhVH6mtp8EHDEq4NC6cOWUkiN9eXVWDPvH4tVpdX7oRdHbh3f+dJq2n1Zja9PgxE9I4iMz2k1EwZlIX46aKMwGJ9LTphCXWX8tzXjUcXTTBdlqrtJ3VnbkLUc0KS77aKmqQTLiNPFBL3xhfkepElZe6VmdvbazKKdDihXtO0+HWA9c7LTXay6EXvjK+Y6wRrVZzmz3pwyY85GHHSHPWp+0MIH9Iu/C17l3hDx8FHlmSxK+v0rOwyGz8syKHq22K1TaL8NdoPLC16x3hUpbdBbXvv2+ClZFsicLaGNNuMDahS8t7whr0oqxcVZnEU9GnraqpUer7WzRXDtgyPrHIasL3844PBmH2BLdmd6Wc3mE49WbBVTW40viAHlNs8fPCmuQOaxx6DlJWZ41uwy4DWswcpqwyRzDap1nQmX2iC665MMJuy663JtPcb9vgLO+jccaXYFU1KOocRjWvqdDrOkwknNPH9xX9eNxKtN8JYUXcuB1qsH0ifJ/2yzp35uDsfbRlbw6YcApDZjVhc1nGjB4A2EFloemAReB7hEyzaZFGCp2TP1m7r
nQjZHkntnUp833paoY0YZR5GC6OtxiiRPnAXPcUjn5uKhAxYaII7TO0+IBtwbAyKxzMXiZecCjB44cF3CawnlvnQesrP0nBvfqvBqNmXnAnaowm6jtH25nkonzgNtLfGRGu4la+tZ5wHbYZAw8YN4U48rXPznYsPld3Y7Nb4cufAgkCseRtcJm5dHsJTh3j9gXvoS1fLCUqgoupyuW8pkM2kHSRitdthvn0j8xstJ6ffE/7Sd2wjzmM/+zE7CVszBng01zR1iT1n1ccI35zOrsAmvWgmEdsYs1tZa+cMIu1sy6qwt2cZEinXH4KhxyW3RncjztTZf74tZVsOaCXSxmVmcHWMMIW6G7bujFGFlbFrjhF1dv64JgLOriwjMUT4cis2R3pseFgYVN9u3H8GFErEd2wSoumsVpkGSS5GM7SHKaXpwQSkOV5Y7r+Mhuox2iaQGE18HkAmnLrmCiiq3rjtCYHWOkLjjem+e6VBqz6UTZVN9uuMvQn7HX8ndDXq57JTfsZVFNAczs5TOIdsSGzcD0ZTEH3zswNZWp6vzMeKi9L6Ci/XSRmT3jc59l7NeMVMPv8Sa8C1d+VGtD0t3lcjmPzYpQQTVdVkG+WAY36qZHK0KFvd4C2u3SO5gdmWPdXcQYK/0KJSyJtbUkSlWqsvkBXcUZqT2jSmHlxppUA9uzNTnVmihUBQrr0Z7A4SY2e+nskWDE8nO8DEyJvwE=</diagram></mxfile>
2207.11761/main_diagram/main_diagram.pdf ADDED
Binary file (86.6 kB). View file
 
2207.11761/paper_text/intro_method.md ADDED
@@ -0,0 +1,106 @@
+ # Introduction
+
+ Real-world objects are often connected and interdependent, with graph-structured data collected in many applications. Recently, graph neural networks (GNNs), which learn tasks utilizing graph-structured data, have attracted much attention. Initial models such as GraphSAGE [Hamilton *et al.*, 2017], the Graph Convolutional Network (GCN) [Kipf and Welling, 2016] and the Graph Attention Network (GAT) [Veličković *et al.*, 2017] focused on homogeneous graphs, in which all nodes and edges are of the same type. Works extending these models to heterogeneous graphs, which have multiple node or edge types, have since been proposed [Yun *et al.*, 2019], as the majority of real-world graphs are heterogeneous in nature. These models typically employ metapaths [Fu *et al.*, 2020].
+
+ A metapath is a predefined sequence of node types describing a multi-hop relationship between nodes [Dong *et al.*, 2017]. For instance, in a citation network with author, paper and venue node types, the metapath Author-Paper-Author (APA) represents the co-author relationship. Moreover, Author-Paper-Venue-Paper-Author (APVPA) connects two authors who published papers at the same venue. To generalize further, the concept of a metagraph was introduced [Fang *et al.*, 2019]. A metagraph is a nonlinear combination of metapaths or, more specifically, a subgraph of node types defined on the network schema, describing a more elaborate pairwise relation between two nodes through auxiliary nodes. We can therefore view metapaths and metagraphs as multi-hop relations between two nodes.
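The relational view of a metapath can be made concrete with adjacency matrices: composing the Author-Paper relation with its transpose yields the APA adjacency. The 3-author, 2-paper network below is a hypothetical example, not data from the paper:

```python
import numpy as np

# Toy citation network: AP[i, j] = 1 iff author i wrote paper j.
AP = np.array([[1, 0],
               [1, 1],
               [0, 1]])

# Author-Paper-Author (APA): entry (i, j) > 0 means authors i and j
# co-authored at least one paper.
APA = (AP @ AP.T > 0).astype(int)
np.fill_diagonal(APA, 0)  # drop trivial self-relations
```

Author 1 shares a paper with both others, while authors 0 and 2 are not co-authors, so `APA` connects 0-1 and 1-2 but not 0-2.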
+
+ Although metapath-based methods can achieve state-of-the-art performance, they have at least one of the following limitations: (1) The model is sensitive to the choice of metapaths, with suboptimal paths leading to suboptimal results [Hussein *et al.*, 2018]. Metapath selection requires domain knowledge of the dataset and the task at hand. Likewise, metagraphs are predefined based on domain knowledge. Alternatively, heuristic algorithms can be employed to extract the most common structures to form metagraphs [Fang *et al.*, 2019]. Yet, the generated metagraphs may not necessarily be useful [Yang *et al.*, 2018]. (2) Metapath-based models do not incorporate features from non-target nodes along the metapaths, thus discarding potentially useful information [Fu *et al.*, 2020]. Examples include HERec [Shi *et al.*, 2019] and the Heterogeneous graph Attention Network (HAN) [Wang *et al.*, 2019]. (3) The model relies on *structures between two nodes* to capture complex pairwise relations and semantics. This, however, is not always sufficient to capture more complex interactions that simultaneously involve multiple target nodes and cannot be reduced to pairwise relationships, especially for heterogeneous graphs, given their richer semantics [Bunch *et al.*, 2020].
+
+ Several works have aimed to improve the expressive power of GNNs and generate more accurate representations of heterogeneous graph components without utilizing metapaths or metagraphs. One potential alternative is a simplicial complex. Simplicial complexes are natural extensions of graphs that can represent high-order interactions between nodes. A graph defines pairwise relationships between elements of a vertex set. Meanwhile, a simplicial complex defines higher-order relations, e.g., a 3-tuple is a triangle, a 4-tuple is a tetrahedron, and so on (for a proper definition, see Section 3). For instance, a triangle made up of three author vertices in a citation network indicates co-authorship of the same paper among the three authors, whereas in a graph, edges between pairs of these three authors only tell us that they are pairwise co-authors on some papers. These structures are already employed in Topological Data Analysis (TDA) to extract information from data, and in applications such as tumor progression analysis [Roman *et al.*, 2015] and brain network analysis [Giusti *et al.*, 2016] to represent complex interactions between elements. The GNN literature on simplicial complexes [Bunch *et al.*, 2020; Bodnar *et al.*, 2021] has thus far focused on homogeneous graphs; these works cannot be directly applied to heterogeneous graphs.
+
+ In this paper, we propose a general framework, SGAT, which extends GAT to heterogeneous graphs with simplicial complexes. SGAT learns on heterogeneous graphs by using simplicial complexes to model higher-order relations and passing messages between the higher-order simplices. We first describe a procedure to generate k-order homogeneous simplices from a heterogeneous graph, since heterogeneous datasets do not always possess higher-order simplices, given their innate schemas. To avoid discarding potentially useful information when transforming the heterogeneous graph into homogeneous simplices, we populate the k-simplices, for k ≥ 1, with non-target node features and learn the importance of each of the k-order simplices through attention mechanisms with upper adjacencies. Overall, the contributions of this paper are as follows:
+
+ - We develop a procedure to construct a simplicial complex from a heterogeneous graph. Our proposed procedure converts the graph into a homogeneous simplicial complex without loss of feature information.
+ - We propose GAT-like attention mechanisms that operate on simplicial complexes. We utilize upper adjacencies to pass messages between higher-order simplices to learn effective embeddings that capture higher-order interactions. We also introduce a variant model, SGAT-EF, that incorporates edge features.
+ - We apply SGAT to the node classification task on standard heterogeneous graph datasets, demonstrating that our proposed approach outperforms current state-of-the-art models. We additionally assess the ability of our model to extract structural information using random node features.
+
+ # Method
+
+ **Input**: The adjacency list of the heterogeneous graph Adj\_list, node features X,
+
+ number of shared non-target neighbours $\epsilon$ ,
+
+ number of hops away $\eta$ ,
+
+ maximal k-order considered, K,
+
+ the maximum simplex order to construct, $\lambda$ .
+
+ **Output**: Set of all k-simplices, AllKSimplices.
+
+ - 1: ▷ DFSFindPaths is a modified depth-first search that stops at depth $2\eta$ . It returns paths that start and end with the target node type; each path must contain at least one non-target node.
+ - 2: ▷ Each path is of the form [target\_src, node\_k(s), target\_dst], where node\_k is in $\theta$ and $\theta$ is the set of intermediate nodes along the path.
+ - 3: Initialise PathList as an empty list.
+ - 4: **for** $v \in V$ **do**
+ - 5: PathList $\leftarrow$ DFSFindPaths(Adj\_list, $v, 2\eta$ )
+ - 6: $\triangleright$ The middle node refers to the non-target node $\eta$ hops away from target\_src and target\_dst.
+ - 7: Initialise MidNodeNeighborDict, $D_{mid}$ ,
+ - 8: ▷ where the key is MidNodeID and the value is its set of unique Neighbors.
+ - 9: **for** path $\in$ PathList **do**
+ - 10: $D_{mid}$ [path.MidNodeID].insert(path.src)
+ - 11: $D_{mid}$ [path.MidNodeID].insert(path.dst)
+ - 12: Initialise KSimplicesDict,
+ - 13: ▷ where the key is Neighbors and the value is the list of MidNodeIDs.
+ - 14: **for** MidNodeID, Neighbors $\in D_{mid}$ **do**
+ - 15: **if** $2 < \text{size}(\text{Neighbors}) < \lambda$ **then**
+ - 16: KSimplicesDict[Neighbors].insert(MidNodeID)
+ - 17: **for** Neighbors, MidNodeIDList $\in$ KSimplicesDict **do**
+ - 18: **if** size(MidNodeIDList) $\geq \epsilon$ **then**
+ - 19: ⊳ Simplex tree is a data structure in the Gudhi library.
+ - 20: SimplexTree.insert(Neighbors)
+ - 21: $\triangleright$ Get up to K simplices from SimplexTree.
+ - 22: AllKSimplices ← SimplexTree.get\_simplices()
+ - 23: **return** AllKSimplices
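For the special case $\eta = 1$ on an author-paper graph, the procedure above can be sketched in plain Python. The data layout, parameter values, and the replacement of Gudhi's simplex tree with explicit face enumeration are all illustrative assumptions:

```python
from collections import defaultdict
from itertools import combinations

def build_k_simplices(authors_of_paper, eps=1, lam=5, K=2):
    """Sketch of the simplex-construction procedure for eta = 1.

    Each paper acts as a 'middle' non-target node; a group of >= 3 target
    (author) nodes that share at least `eps` papers becomes a simplex.
    Faces up to order K are enumerated explicitly, standing in for what
    Gudhi's SimplexTree would store after insert()/get_simplices().
    """
    # Steps 7-11: map each middle node to its set of target neighbours.
    d_mid = {p: frozenset(a) for p, a in authors_of_paper.items()}

    # Steps 12-16: group middle nodes by their neighbour set, keeping
    # groups with more than 2 and at most `lam` target nodes.
    k_simplices_dict = defaultdict(list)
    for paper, nbrs in d_mid.items():
        if 2 < len(nbrs) <= lam:
            k_simplices_dict[nbrs].append(paper)

    # Steps 17-22: keep neighbour sets shared by >= eps middle nodes and
    # emit every face up to order K.
    all_k_simplices = set()
    for nbrs, papers in k_simplices_dict.items():
        if len(papers) >= eps:
            for k in range(1, min(K, len(nbrs) - 1) + 1):
                for face in combinations(sorted(nbrs), k + 1):
                    all_k_simplices.add(face)
    return all_k_simplices

# Papers p1 and p2 are both written by authors a, b, c; p3 has only 2 authors.
papers = {"p1": {"a", "b", "c"}, "p2": {"a", "b", "c"}, "p3": {"a", "d"}}
simplices = build_k_simplices(papers, eps=2, lam=5, K=2)
```

With `eps=2`, authors a, b, c form a 2-simplex (and its three edge faces) because they share two papers, while the 2-author paper p3 contributes nothing.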
+
+ ![](_page_6_Figure_40.jpeg)
+
+ Figure 5: $\lambda$ against $\gamma$ (left) and $\gamma$ against performance score (right)
+
+ $\epsilon_2^1 = 4$ . Besides the parameter being tested, the other parameters assume their default values. When $\epsilon_1^1$ and $\epsilon_2^1$ are increased, the number of constructed edges decreases. We observe that increasing $\epsilon_1^1$ improves the performance until the default value of $\epsilon_1^1=3$ is reached, after which the performance deteriorates. This phenomenon is similar for $\epsilon_2^1$ . Increasing $\epsilon_\eta^k$ reduces noise in the model, as it removes weaker edges (those that share fewer nodes).
+
+ ![](_page_7_Figure_1.jpeg)
+
+ Figure 6: $\epsilon_1^1$ against $\gamma$ (left) and $\epsilon_2^1$ against $\gamma$ (right)
+
+ ![](_page_7_Figure_3.jpeg)
+
+ Figure 7: $\gamma$ against performance score
+
+ Lastly, we examine how different combinations of $\epsilon_1^1$ and $\epsilon_2^1$ , each chosen in the range of [1,5], affect SGAT's performance. Interestingly, we find that the performance of SGAT fluctuates minimally across different values of these parameters when the ratio of triangles to edges, $\gamma$ , is similar.
+
+ In this section, we give a brief overview of simplicial complexes; we refer interested readers to [Hatcher, 2000] for more details. Here, we discuss some properties of simplicial complexes and the notion of upper adjacency, which we use in our proposed SGAT model.
+
+ A simplicial complex is a set consisting of vertices, edges and their higher-order counterparts, all of which are known as k-simplices, k ≥ 0, defined as follows.
+
+ **Definition 1** (k-simplex). A standard (unit) k-simplex is defined as
+
+ $$\sigma^{k} = \left\{ (\tilde{v}_{0}, \dots, \tilde{v}_{k}) \in \mathbb{R}^{k+1}_{\geq 0} : \sum_{i=0}^{k} \tilde{v}_{i} = 1 \right\}. \tag{12}$$
+
+
+ Any topological space that is homeomorphic to the standard k-simplex, and thus shares the same topological characteristics, is called a k-simplex.
+
+ As an example, let $v_0,\ldots,v_k$ be vertices of a graph embedded in a Euclidean space so that they are affinely independent (i.e., $v_1-v_0,\ldots,v_k-v_0$ are linearly independent). Then the set
+
+ $$\left\{ \sum_{i=0}^{k} \alpha_i v_i : \alpha_i \ge 0, \sum_{i=0}^{k} \alpha_i = 1 \right\}$$
+
+ is a k-simplex. In particular, the vertices $\{v_i\}$ are instances of $\sigma^0$ while the edges $(v_i, v_j)$ , $i \neq j$ , are instances of $\sigma^1$ . Subsequently, it is possible to geometrically interpret 0-simplices $\sigma^0$ as vertices, 1-simplices $\sigma^1$ as edges, 2-simplices $\sigma^2$ as triangles, 3-simplices $\sigma^3$ as tetrahedra, and so on.
+
+ A face of a k-simplex is a (k-1)-simplex that we obtain from (12) by restricting a fixed coordinate in $(\tilde{v}_0,\ldots,\tilde{v}_k)$ to be zero. Specifically, a face of a k-simplex is a set containing elements of the form $(\tilde{v}_0,\ldots,\tilde{v}_{j-1},\tilde{v}_{j+1},\ldots,\tilde{v}_k)$ . A k-simplex has k+1 faces.
+
+ A simplicial complex is a class of topological spaces that encodes higher-order relationships between vertices. The formal definition is as follows.
+
+ **Definition 2** (Simplicial complex). A simplicial complex $\chi$ is a finite set of simplices such that the following holds.
+
+ - Every face of a simplex in $\chi$ is also in $\chi$ .
+ - The non-empty intersection of any two simplices $\sigma_1, \sigma_2$ in $\chi$ is a face of both $\sigma_1$ and $\sigma_2$ .
+
+ It is also possible to produce a concrete geometric object for each simplicial complex $\chi$ . The geometric realisation of a simplicial complex is a topological space formed by glueing simplices together along their common faces [Ji *et al.*, 2022]. For instance, a simplicial complex of dimension 1 consists of two kinds of simplices, $\sigma^0$ and $\sigma^1$ . By glueing $\sigma^1$ with common $\sigma^0$ , we obtain a graph in the usual sense. This means that $\chi^0$ is the set of vertices V in the graph and $\chi^1$ is the set of edges E in the graph.
+
+ Four kinds of adjacencies can be identified between simplices: boundary adjacencies, co-boundary adjacencies, lower adjacencies and upper adjacencies [Barbarossa and Sardellitti, 2020]. In this paper, we utilize only the upper adjacency, so that our SGAT model is equivalent to GAT when K=1. We say that two k-simplices in $\chi^k$ are upper-adjacent if they are faces of the same (k+1)-simplex. The upper adjacency matrix $A^k \in \mathbb{R}^{|\chi^k| \times |\chi^k|}$ of $\chi^k = \{\sigma_1^k \dots \sigma_{|\chi^k|}^k\}$ indicates whether pairs of k-simplices are upper-adjacent. The upper adjacency matrix of 0-simplices (nodes), $A^0$ , is the usual adjacency matrix of a graph.
2209.06941/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2022-08-24T21:22:46.086Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36" etag="94vcnPhlXMagCMUzmfu6" version="20.2.7" type="device"><diagram id="cX1Ba-vnPtMQq--5C0wD" name="Page-1">7Vxbk5s2FP41nmkf7NGF6+OuvUnaJO02206avHhkkG0l2LgY79r59ZUA2SDELusFX7L2i9GREHC+I50rdHB/tn4bkcX0Y+jToIOAv+7gQQchiFyH/wnKJqU42EgJk4j52aAd4Y79oBkRZNQV8+myMDAOwyBmiyLRC+dz6sUFGomi8KE4bBwGxasuyISWCHceCcrUz8yPpynVtswd/R1lk2l2ZQSz5x0R7/skClfz7HLzcE7TnhmRs2RzL6fEDx9SUvLU+KaD+1EYxunRbN2ngeCqZFjKmjcVvds7jug8rnNCfzbp408OuvpzhUfvg4/k5mrdRXY6zT0JVhkrsruNN5I3yeNRMQvs4OuHKYvp3YJ4oveBSwOnTeNZkHWPw3n8hsxYIAThLQ2jCSMZOUMdimHLOAq/034YhFFyEezb7ggAMZIFQY4+HlPL8zg9ICMaXIeRTyPZnTBb9myBkL0+HZNVwFlzHfK7ZbG4IUNcocy4jJf3NIrpOkfKGMkfY0bjaMOHyF7DSk/JxB1inLYfdsJj4Iy305zgOKhnZqeSTGQn29l34PGDDL/nYGmdCJaW59DRuIylT6gz9loBxIQFPMpoWAYoowExeDkW92/ffzZvv/4z6N8M/et/l9Nr52tXt6ysRBhHBUCs/1Zi/Sc87S4Tpl7xARAu1gljZD8/moj/992PlMyXcjJ+byPZVcKa3ybfPmkNnIswvUl+OmBB8uM9k4j4jKM1YBHfjVk45/3LcCWYnTxK7pz0pxGbBlBHbhF2aDgl4G0N7pivwpaAlxKVQ/7jh1tOeEeJ//LlKBmo2SlrwNYCy7HVw7AW023QQ61xHZa43jH7vyyGHfuadVD/W8ceCLLZH0fE48T/Cj3mjbj+apg07Gs+bLmaiRHfEtYnI3jrPT965ESz/2sJYM7VuIhiEZ1MheWhzEgkYBOxrDyOFeX0a4ER4xbLVdYxY74fVK3unWCBJzby7F6RRsAaEBcDOcUVahklYcG6FdrWxoyRVlIUWC9IlpDELn7S5Dksku7T5g6d+1fCO+CtURB63/+esnkRRD7iDQtkq7gfV3KN+gVvosyzHE9MDU8kLaIBidl90QfRMSq7wm3I+J3s9mLTLkAinTA5A9fIkUezk/KegTqPAx6dJybRhMaleRLUtg+9P5BGWWW+ciAN3AyQ6jxtA4nbBpLjF23+FXthzzRMSfiSEAxDtgfrbLdMW5t865ZGjD+n2IdT4prF6YQGF/u0nc6HZHM3nWhscg11sko5SwHsPO27pQjVGHgikovtojlo2D1ggd1P8XXrCjJWrEwLqwZ726JsHFOUrX0kuVL4zl2m9lVrqhA5BxciXUCmiSjAHeMWIIlECOXIgYAWnHoDFpWYaW/9xuO59abebViy2S/8P2aBT7kD8WPrKsZD4TYOhUfBz6oYgpIhv/5U7obTUDwPm0W9Uo4x6AJ6rTkbpsa0qY//j5T4B50I4rcL4rplbyvOjHtkxGtkRpqyABTtb9mN6v8TVevQ2tPLKUUiTNQDyu20rNlNp23hOHU0DdcoguAAVfnWxdOEKp72YcFsPZCUbuQysXwW8CIVE9AUuOCg4Fq64FJqMzOtDb7L6He9VM0KU5zNWcxIoLXGhSHwQcSPvWC15Psxm0+2QeStdc4qrfOz0fuaioM98rRKPgAbPVxO2llaTd+zUEvKXk7cvKfG0RDy8PO5
aSYqumkOOgE3TdbrtLOVn4OVZbhFWCxFl9bdty30+DwV2zbnLNnkhi3EgGUrG3uNapef3AQDds92XEW9Wj17TzvMAMrebKNamDeFqP2oqlZ07Sz0+fHQ47dy0bX6HdqCRRPdhFyBurmfZrvWlKy1qXfdst4dkJhwytVqMuOPTpLiIhXG8yubsYq+L7J7MsOQ475zWGXpliMcN3Mv9Ll4nz/DoZIKFRzHeeHXGJ0a4U9iC7glAKCMWrxeHbZ1B1+Y64H24/O0rLog0DkQrxrJfWsYVCQPXMMAQTnS/ypUkt5/a08l6TPR5VXEDb71kP1UGZMXu+ClhPYBavC0eLVa8APOM3KqOOC20wN5q9vYb1s0law4rJnxqHbIK64D9NfZSUk6476brlaMWnXlzyFqY1Vw/cVhG1u8b7Qrx4L1PPrnCo1VIZytCo0mP8t1hcixJ8l23piQ2YzIYIGo7B+cvCLJ5dQr9EoDmt98OtWuD8C3pEU0yVSOZDxEQvOfa7FEo8UREBeNNem6HwsxTcY0QQxeEKtADFrHRQzq35AaD0X50pTG5IKcLKI3FGVmuj1UN0XZFnr68rMcenLtXSDUQXjsxaepJlfhQxf4quA7traDGv+Ewzcd5ko/JZLCzpSVA5HMQ2kX6gVqgZmplAcde6XqvYp9oUYXqCuhPvqqbqSG87nMhOVQhXwFTR5/yR3r3z2rROPp98w6pxT2UIIVCOxZrAItVd/Xe9HnuXGOkllhKFLYQphD8ugkxRS8CjHtquEt9RM+tYO4e6W2Xiil2VXaFdLWE9la6euZr0L+sCJ+EPQsJ/eS7b4phUOlzPWpI13uSKn2GtARI0suAwh8oouILnfpV/CBkmheLK09q7qvdzS4p2KmDm688iv3MkblB7Gw5ZQFFoFtXVLzuULdK9UK4L/NxHf6EOjvKukv8Gr8Y0eBF7rm9rtyOYyR/NpNEeOXA/yJYXz7dvA1Nv+6I/SbtelGRvvJ4ErWncg+bSkvnNvKDHU3ZlvNBuJ2DAX1OnazWbvf779Yo9D/vBnY911w9cc7177VCkkLr+PwOTRlwmibFeSDyUws4flouUhOBpdXeGrWDZnqty8P/QqPVrBa/2DHqe8+poN7OaMQFPciaLj7bUYWeNa0zVmNi4lrUHjroA9gtLk1ou7N3V27OuYcUFY/FbX3W9Tqt6JKE+2NJG/uvmqcDt99NBrf/A8=</diagram></mxfile>
2209.06941/main_diagram/main_diagram.pdf ADDED
Binary file (60.2 kB). View file
 
2209.06941/paper_text/intro_method.md ADDED
@@ -0,0 +1,77 @@
1
+ # Introduction
2
+
3
+ Self-supervised learning (SSL) has achieved superior performance and outperformed supervised learning models in different research areas such as computer vision [\[8,](#page-7-0) [7\]](#page-7-1), natural language processing [\[13\]](#page-7-2), and more recently medical image analysis [\[1,](#page-7-3) [2,](#page-7-4) [51\]](#page-8-0) and bio-informatics [\[20\]](#page-7-5). SSL algorithms learn representations from large-scale unlabeled data by solving a *pretext task* such as solving jigsaw puzzles [\[37\]](#page-8-1), predicting geometric transformations [\[21\]](#page-7-6), Bregman divergence learning [\[45\]](#page-8-2), predicting the reverse-complement of genome sequences [\[20\]](#page-7-5), predicting the relative positioning of patches [\[15\]](#page-7-7), etc. The representations learned by performing such tasks can then be used as a starting point for different *downstream* tasks such as classification [\[54\]](#page-8-3), semi-supervised learning [\[3\]](#page-7-8), clustering [\[40\]](#page-8-4), or image generation [\[25\]](#page-7-9). The performance of self-supervised learning was recently further improved by contrastive methods that train a network to maximize the similarity of representations obtained from different augmented views of the same image [\[24,](#page-7-10) [38,](#page-8-5) [22,](#page-7-11) [7,](#page-7-1) [8\]](#page-7-0).
4
+
5
+ Recent studies have shown that self-supervised pretext task learning benefits from multi-task learning [\[16,](#page-7-12) [42\]](#page-8-6) such as performing clustering on the learned representations [\[58\]](#page-9-0). However, in spite of the explicit clustering, representations learned in such a way could still exhibit overlap between different classes, particularly for complex datasets with a large number of categories and a long-tailed data distribution [\[31\]](#page-8-7). There are three reasons underlying this phenomenon. First, [\[10\]](#page-7-13) showed how the representations induced by traditional contrastive learning are inherently biased, as each augmentation is contrasted against *all* other samples in the batch, including potentially those of the same underlying latent class. Second, clustering techniques based on Euclidean metrics such as K-means [\[33\]](#page-8-8) or Gaussian mixture models [\[6\]](#page-7-14) struggle when operating on data lying on high-dimensional manifolds [\[26,](#page-7-15) [43,](#page-8-9) [49\]](#page-8-10). Finally, especially in imbalanced datasets, the loss of samples belonging to minority classes is distorted when their representations are contrasted with an excessive number of negative samples from other classes. This limits the learning signal to the network and prevents it from assigning similar representations to minority samples while keeping them sufficiently distinct from those of all the other samples.
6
+
7
+ In this paper, we thus improve simultaneous clustering and contrastive representation learning for imbalanced datasets by generalizing the debiased contrastive loss of [\[10\]](#page-7-13) to avoid under-clustering minority classes. An excessive amount of negative samples forces the formation of different clusters within the same category [\[55\]](#page-8-11); we therefore modify that loss with a smoothing term that controls the influence of the contrasted negative samples, preventing the previously mentioned phenomenon of under-clustering. We then use these representations to directly perform clustering [\[57\]](#page-9-1). Our main contributions can be summarized as:
8
+
9
+ - We propose a joint framework for self-supervised learning of visual representations and image clustering. Our proposed method learns debiased contrastive visual representation and unsupervised clustering using divergence loss over the data distributions.
10
+ - We show empirical results to highlight the benefits of
11
+
12
+ <sup>\*</sup>Equal Contributions.
13
+
14
+ <sup>†</sup>mina.rezaei@stat.uni-muenchen.de, shekazizi@google.com
15
+
16
+ <span id="page-1-1"></span>![](_page_1_Figure_0.jpeg)
17
+
18
+ <span id="page-1-0"></span>Figure 1. Illustration of the proposed unsupervised debiased representation learning framework. Our method is composed of two parallel networks built on a deep convolutional (ConvMixer) architecture, where the clustering network takes the original image and the representation network takes two augmented views of the image. The representation network first projects the augmented views onto an embedding space and then processes these representations in an MLP head, which generates the baseline for the pair-wise contrastive objective. Here, we scale the negative sampling strategy of [\[10\]](#page-7-13) by exponential weighting in order to avoid the excess of debiased negative samples that leads to under-clustering. The clustering network uses the features extracted by the encoder and employs K-Means clustering with a KL-divergence loss, using the Student's t-distribution as the soft assignments and a target distribution computed as a function of the soft assignments [57].
19
+
20
+ avoiding under-clustering while learning representations using a multi-task learning loss. The model is able to distinguish distinct classes better or at least comparable to the state-of-the-art methods for several benchmark datasets and public medical datasets, characterized by long tailed distributions. Our method is evaluated in linear, semi-supervised, and unsupervised clustering settings on public datasets, achieving comparable or higher performance in comparison to state-ofthe-art contrastive learning methods in several tasks.
21
+
22
+ # Method
23
+
24
+ Fig. 1 shows our proposed method. An encoder is used to learn common representations, which are then processed by two parallel networks: the *representation network* and the *deep divergence clustering network*. The representation network learns representations using our modified debiased contrastive loss, while the clustering network ensures that the learned representations cluster faithfully. Each of the two modules comes with its own loss; the two losses are mixed via a weighting parameter $\gamma$ into the overall training loss:
25
+
26
+ $$\mathcal{L}_{MTL} = \mathcal{L}_{deb}^{mod} + \gamma \cdot \mathcal{L}_{clustering} \tag{1}$$
27
+
28
+ Before clustering and representation learning, images are encoded to a common representation in an embedding space Z by an encoder f. We used ConvMixer, an isotropic vision model that operates on patches, in order to preserve some local structure within each part of an image [52]. The input image is first divided into patches of size $p_s$ and dimension $d_h$ , which are then fed into a series of convolutional mixing blocks consisting of subsequent depth-wise convolutions, which mix the spatial structure of the image, and point-wise convolutions, which mix channel locations. To this base architecture we added another residual connection from the output of the depth-wise convolution to the output of the point-wise convolution. In this sense, our backbone model differs from [52], as the original ConvMixer uses only one residual connection, for spatial awareness, and is thus flexible only with regard to the depth of an image. By stacking mixing blocks, an arbitrarily large receptive field can be created, as distant spatial structures are mixed together the more mixing blocks are used [52].
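A minimal numpy sketch of one such mixing block, assuming a single-image (H, W, C) tensor, a tanh approximation of GELU, random weights, and no normalization layers; the second residual connection around the point-wise convolution is the modification described above, and all names are illustrative:

```python
import numpy as np

def depthwise_conv(x, k):
    # x: (H, W, C) image, k: (kh, kw, C) per-channel kernel, 'same' zero padding
    kh, kw, _ = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw), (0, 0)))
    H, W, _ = x.shape
    out = np.zeros_like(x)
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k, axis=(0, 1))
    return out

def pointwise_conv(x, w):
    # 1x1 convolution == channel mixing: (H, W, C) @ (C, C')
    return x @ w

def gelu(t):
    return 0.5 * t * (1 + np.tanh(np.sqrt(2 / np.pi) * (t + 0.044715 * t ** 3)))

def mixing_block(x, k_dw, w_pw):
    h = gelu(depthwise_conv(x, k_dw)) + x      # spatial mixing + original residual
    return gelu(pointwise_conv(h, w_pw)) + h   # channel mixing + the added residual
```

Stacking `mixing_block` calls corresponds to enlarging the receptive field as described in the text.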
29
+
30
+ As shown in Fig. 1, our method takes the original image and creates two augmented views using two random transform functions $t_1,t_2$ . The augmented views are generated by applying random cropping, resizing and random Gaussian blurs sequentially on an image twice [7], where the resizing is meant to bring the dimensionality of the cropped image back to its input dimensions. The encoder network f then projects a sample image $x_i^{(j)}$ onto the common embedding space, before a further MLP head $h$ gives the final representations used with our modified contrastive loss. Previous work [] has shown that performance in downstream tasks benefits from using intermediate representations rather than those directly used for contrastive learning.
31
+
32
+ Specifically, from an input image $x_i$ we derive two representations $z_i^{(k)} := h(f(t_k(x_i))), k \in \{1,2\}$ , thus generating 2N representations from a mini-batch B with cardinality N. For convenience, we group the representations of all samples $j \neq i$ into $\bar{Z}_i = \{z_j^{(k)} | j \neq i, k \in \{1,2\}\}$ . The representation of a sample $z_i^{(k)}$ is contrasted via the cosine similarity $sim(z_i,z_j) := z_i^{\top}z_j\,\lVert z_i\rVert^{-1}\lVert z_j\rVert^{-1}$ with temperature $\tau$ to the representations of all other samples, defining an average distance:
33
+
34
+ $$S_i^{(k)} = \frac{1}{|\bar{Z}_i|} \sum_{z \in \bar{Z}_i} \exp(sim(z_i^{(k)}, z)/\tau)$$
35
+ (2)
36
+
37
+ Due to the absence of training labels, $\bar{Z}_i$ can contain representations of samples belonging to the same category as sample i, leading to sampling biases in the regular contrastive learning setting. The contrastive loss proposed by [10] solves this problem, but is still vulnerable to the issue of under-clustering minority classes discussed in the introduction.
38
+
39
+ To tackle this problem, we introduce a smoothing term $\lambda$ that softens the impact of the distance $D_i$ of the representations of sample i with those in $\bar{Z}_i$ :
40
+
41
+ <span id="page-2-0"></span>
42
+ $$\mathcal{L}_{deb,i}^{mod} = -2\log \frac{\exp(sim(z_i^{(1)}, z_i^{(2)})/\tau)}{\exp(sim(z_i^{(1)}, z_i^{(2)})/\tau) + (1 + D_i)^{\lambda}}$$
43
+ (3)
44
+
45
+ where
46
+
47
+ $$D_i = \sum_{k=1}^{2} \max \left\{ \exp(-1/\tau), \frac{1}{1-\tau^+} \left( S_i^{(k)} - \tau^+ \exp(sim(z_i^{(1)}, z_i^{(2)})) \right) \right\}$$
48
+
49
+ and $\tau^+$ is the prior probability that a sample belongs to the same class of x. Through $\lambda$ we control the emphasis on under-clustering, since an excessive amount of negative samples forces the formation of different clusters within the
50
+
51
+ <span id="page-3-1"></span>![](_page_3_Figure_0.jpeg)
52
+
53
+ <span id="page-3-0"></span>Figure 2. Illustration of the encoder architecture based on ConvMixer [53]. First an input image is divided into relatively small patches which are then processed in mixing layers, consisting of subsequent channel-wise and point-wise convolutions. In contrast to the original ConvMixer the convolutions are followed by a dropout layer in order to prevent over-fitting and to extract more salient features. Following the idea of residual networks [23] we added another residual connection (in red) to enable the point-wise convolution more flexibility during training and to increase the hypothesis space even further. Applying consecutive mixing layers allows the encoder to learn the global structure of the images and can be seen as a form of self attention.
54
+
55
+ same category [55], thereby tackling the problem of within cluster imbalance.
56
+
57
+ Applying Eq. 3 on a mini-batch B results in the loss for the representation learning network:
58
+
59
+ $$\mathcal{L}_{deb}^{mod} = \frac{1}{2N} \sum_{i=1}^{N} \mathcal{L}_{deb,i}^{mod} \tag{4}$$
60
+
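Eqs. 2-4 can be sketched in numpy as follows: a naive, loop-based illustration under assumed hyper-parameter values ($\tau$, $\tau^+$, $\lambda$ chosen arbitrarily here), not the authors' implementation:

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def modified_debiased_loss(Z1, Z2, tau=0.5, tau_plus=0.1, lam=1.0):
    # Z1, Z2: (N, d) arrays holding the two augmented views of each sample
    N = len(Z1)
    losses = []
    for i in range(N):
        pos = np.exp(cosine(Z1[i], Z2[i]) / tau)
        D_i = 0.0
        for zk in (Z1[i], Z2[i]):
            # Eq. 2: average exponentiated similarity over \bar{Z}_i
            neg = [np.exp(cosine(zk, z) / tau)
                   for j in range(N) if j != i for z in (Z1[j], Z2[j])]
            S = np.mean(neg)
            # inner max term of D_i
            D_i += max(np.exp(-1 / tau),
                       (S - tau_plus * np.exp(cosine(Z1[i], Z2[i]))) / (1 - tau_plus))
        # Eq. 3: smoothed debiased loss for sample i
        losses.append(-2 * np.log(pos / (pos + (1 + D_i) ** lam)))
    # Eq. 4: sum over the batch, normalised by 2N
    return float(np.sum(losses)) / (2 * N)
```

Because $1 + D_i > 1$, a larger $\lambda$ inflates the negative term and increases the loss, which is the knob the text uses to control under-clustering.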
61
+ The purpose of the clustering network is to refine and improve the learned contrastive representations such that they cluster properly. We assume that the dataset consists of a long-tailed distribution of images over a known number K of categories. The clustering is refined by pushing the soft label assignments $q_{ij}$ towards the target distribution P by matching them via the KL-divergence [57]:
62
+
63
+ $$\mathcal{L}_{clustering} = KL(P||Q) = \sum_{i=1}^{N} \sum_{j=1}^{K} p_{ij} \log \frac{p_{ij}}{q_{ij}}$$
64
+ (5)
65
+
66
+ The embeddings $z_i := f(x_i)$ generated by the encoder are subject to a K-Means clustering algorithm, where the similarity of an embedded image to the cluster center $\mu_j$ ,
67
+
68
+ with $j \in \{1,\ldots,K\}$ , is measured by the Student's t-distribution [32]:
69
+
70
+ $$q_{ij} = \frac{(1+||z_i - \mu_j||_2^2/\alpha)^{-\frac{\alpha+1}{2}}}{\sum_{l=1}^K (1+||z_i - \mu_l||_2^2/\alpha)^{-\frac{\alpha+1}{2}}}$$
71
+ (6)
72
+
73
+ where $\alpha$ denotes the degrees of freedom and is set to 1 throughout [32]. The cluster centers are initialized by performing standard K-Means clustering on the embeddings [57]. Raising the soft label assignments $q_{ij}$ to the second power and normalizing by the cluster frequencies $u_j = \sum_{i=1}^N q_{ij}$ generates an auxiliary target distribution P for self-supervision [57]. The single elements of P can be computed by:
74
+
75
+ $$p_{ij} = \frac{q_{ij}^2/u_j}{\sum_{j=1}^K q_{ij}^2/u_j} \tag{7}$$
76
+
77
+ $p_{ij}$ sharpens $q_{ij}$ and, owing to the normalization, reduces the bias from imbalanced clusters, while forcing the network to learn soft label assignments with high confidence [58].
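The clustering objective of Eqs. 5-7 can be sketched in numpy as follows (function names are ours; cluster centres are assumed to be given, e.g. from an initial K-Means run):

```python
import numpy as np

def soft_assign(Z, mu, alpha=1.0):
    # Eq. 6: Student's t similarity of embeddings Z (N, d) to centres mu (K, d)
    d2 = ((Z[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
    q = (1 + d2 / alpha) ** (-(alpha + 1) / 2)
    return q / q.sum(1, keepdims=True)

def target_dist(q):
    # Eq. 7: sharpen q and normalise by cluster frequencies u_j = sum_i q_ij
    w = q ** 2 / q.sum(0)
    return w / w.sum(1, keepdims=True)

def clustering_loss(q):
    # Eq. 5: KL(P || Q) between target and soft assignments
    p = target_dist(q)
    return float((p * np.log(p / q)).sum())
```

Since each row of P and Q is a distribution over the K clusters, the loss is a sum of per-sample KL divergences and is therefore non-negative.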
2210.16834/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2022-05-10T13:45:05.746Z" agent="5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.4896.127 Safari/537.36" version="17.4.6" etag="JVlNEE7I6HvB700DUAVL" type="device"><diagram id="S2WBNm3WSvH8KgISoQmr">7VlNc9sgEP01PjYDki1Zx8pO2ks/pslM2yOViMRUFirCsZ1f30WAjWS5UTpy08zYB1s8YIH3lmWFJ/5itX0nSJV/4CktJh5KtxN/OfG8WYThWwE7Dfgo1EAmWKohfABu2SM1IDLomqW0bjWUnBeSVW0w4WVJE9nCiBB80252z4v2qBXJzIjoANwmpKBHzb6yVOYanXvhAX9PWZbbkXEQ6ZoVsY2N4TonKd84kH898ReCc6mfVtsFLRR3lhfd7+ZE7X5igpZySAdPd3ggxdqszcxL7uxioQPwCoV4kzNJbyuSqJoNKAtYLlcFlDA8krrSZN+zLQX7cSZIymAiC15w0Rjzb5oP1NVS8J/U1pS8pE6HJRNgifESqhREYIj4nhWFYwmhIEAIcA4TYlJ5UqiKZkFUSLo9SQreUw0uSvmKSrGDJqaDh+a6i3FP7Bu1No7YBsodnS1GjHtle8sHBeDBiNAviD9AkDJ9q5wYSklB6polbR0EX5epEmCp+IA1i903Vbjyw8gC3wF4g64QnlpkuXU7LHdu6TMVDFZChQH1lGiatfeDJCKjhotgMOkOqbMeUi0maEEke2iP2Me0GeEzZzCwoylqaxp2xKr5WiTU9HJ3TMeQH4V/NqRZODLUCL9f9iBfmI7tCyd0+39EitoieXj2dyJ1DeGuofFEmr3aCBpFCJ0rgkbei0XQoEeQoJBq/bzxkXtNP6C/1uqcje8guNVQ8ZFu4PsLX5HyUAlPmfltjDALbC0CE2LdVoDp0SzccQmgVna0L1im1EqAYhVoYyUAg5zjralYsTRV3WNBa/ZIfjSmlF6V8uKGsFk8mS2VrbXktc6a8ClPcZ3BQCMI789PhFlHeK9HeG8E4cN/JPzuInzfju9G3H8n/PxZ56RZ88lDUsl3Q1asUCvp85BXcI5OwyvkfPA4qc8TZoOznbHRReBOEurjq/l8dIGfMHs+gW2wcBR+QwbEzqEBbniM7cvQnu0+Z0iocOetIzgOr0GPv/kjhFeMj8S5aOPmPMh7OW36LnBGTHrqqqltsOYFE7pgJ9dx6191CqRmfGvmiEZwiv3bu3UKa+KJA2KMfAj3XSKd2Sm8i1MMiRRRxymOI8XZnGLAbVKdk6qJrWTXcKio+kllktvwChRZPmpeKNOxoTWwvFNx/UA1/bhXix6euwIKLol0BXXuH7U7ODeScV2S6o7rNEQP+jJHdNh5A5odizvtEXc6hrh9t1Bn2vGfLjt9gDPMg05OMD9yhpFeh6F4+L9I596HP938698=</diagram></mxfile>
2210.16834/main_diagram/main_diagram.pdf ADDED
Binary file (13.6 kB). View file
 
2210.16834/paper_text/intro_method.md ADDED
@@ -0,0 +1,59 @@
1
+ # Method
2
+
3
+ The checklist follows the references. Please read the checklist guidelines carefully for information on how to answer these questions. For each question, change the default to , , or . You are strongly encouraged to include a **justification to your answer**, either by referencing the appropriate section of your paper or providing a brief inline description. For example:
4
+
5
+ - Did you include the license to the code and datasets?
6
+
7
+ - Did you include the license to the code and datasets?
8
+
9
+ - Did you include the license to the code and datasets?
10
+
11
+ Please do not modify the questions and only use the provided macros for your answers. Note that the Checklist section does not count towards the page limit. In your paper, please delete this instructions block and only keep the Checklist section heading above along with the questions/answers below.
12
+
13
+ 1. For all authors\...
14
+
15
+ 1. Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
16
+
17
+ 2. Did you describe the limitations of your work?
18
+
19
+ 3. Did you discuss any potential negative societal impacts of your work?
20
+
21
+ 4. Have you read the ethics review guidelines and ensured that your paper conforms to them?
22
+
23
+ 2. If you are including theoretical results\...
24
+
25
+ 1. Did you state the full set of assumptions of all theoretical results?
26
+
27
+ 2. Did you include complete proofs of all theoretical results?
28
+
29
+ 3. If you ran experiments\...
30
+
31
+ 1. Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?
32
+
33
+ 2. Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?
34
+
35
+ 3. Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)?
36
+
37
+ 4. Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?
38
+
39
+ 4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets\...
40
+
41
+ 1. If your work uses existing assets, did you cite the creators?
42
+
43
+ 2. Did you mention the license of the assets?
44
+
45
+ 3. Did you include any new assets either in the supplemental material or as a URL?
46
+
47
+ 4. Did you discuss whether and how consent was obtained from people whose data you're using/curating?
48
+
49
+ 5. Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content?
50
+
51
+ 5. If you used crowdsourcing or conducted research with human subjects\...
52
+
53
+ 1. Did you include the full text of instructions given to participants and screenshots, if applicable?
54
+
55
+ 2. Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable?
56
+
57
+ 3. Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation?
58
+
59
+ [^1]: Corresponding author
2301.02311/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2301.02311/paper_text/intro_method.md ADDED
@@ -0,0 +1,80 @@
1
+ # Introduction
2
+
3
+ Understanding human activity in video is a fundamental vision problem with abundant applications in augmented reality, robotics, and information retrieval. The field has made exciting advances, from new models for recognition [\[24,](#page-8-0) [53,](#page-10-0) [86\]](#page-11-0) and self-supervised representations [\[55,](#page-10-1) [58,](#page-10-2) [61,](#page-10-3) [90\]](#page-11-1) to major datasets [\[16,](#page-8-1) [34,](#page-9-0) [63,](#page-10-4) [74,](#page-10-5) [106\]](#page-12-0). Nonetheless, activity understanding in video lags noticeably behind object understanding in images, where today's AI models compete well with people.
4
+
5
+ One key reason for this discrepancy is the fact that whereas objects present themselves directly in the pixels (no subtext required), activity naturally has broad temporal
6
+
7
+ Website: <https://vision.cs.utexas.edu/projects/hiervl/>
8
+
9
+ <span id="page-0-0"></span>![](_page_0_Figure_9.jpeg)
10
+
11
+ Figure 1. Conventional video-language embeddings are trained to match short-term clips with their corresponding descriptions, e.g., open tap (in orange boxes), thus capturing *what is happening*. Our hierarchical video-language embedding (in dotted blue box) learns both short-term and long-term visual-text relations, thereby capturing *why is it happening* (e.g., making salad dressing). Long-term intent is conveyed by textual summaries (blue) that give an abstractive summary of the whole video, and complement the more literal step-by-step narrations (green).
12
+
13
+ context rooted in the human actor's (latent) intentions. Not only does an activity stretch across video frames, but also its interpretation relies on the larger context of what the person is trying to accomplish. Thus, there is a natural *hierarchy* of information in video, starting with the short-term "what the person is literally doing right now" (e.g., reaching for the stove) and going all the way to the long-term "what the person aims to do" (e.g., cook dinner).
14
+
15
+ As a step towards capturing this hierarchy, we explore video-language representation learning. Video often has accompanying timestamped text, whether from spoken narrations in a how-to video [\[63,](#page-10-4) [75,](#page-10-6) [106\]](#page-12-0), closed caption text and scripts [\[9,](#page-8-2)[76\]](#page-10-7), or deliberate text annotations [\[16,](#page-8-1)[34,](#page-9-0)[91\]](#page-11-2). Existing video-language models learn a correspondence between the two modalities by matching short video segments with their text counterpart, typically with a learned embed<span id="page-1-1"></span>ding [\[3,](#page-8-3)[55,](#page-10-1)[61,](#page-10-3)[90\]](#page-11-1) that produces a language-enriched video clip encoder. However, this standard approach risks capturing only the short-term actions. Granular comments such as *"now I pour milk in the pan"* or *"he picked up a water hose"* fail to capture the overall goal of the activity, like *making a coffee* or *cleaning a car*. As a result, at inference time their encodings for unseen videos can be myopic and miss sequential dependencies between observed events.
16
+
17
+ To tackle this problem, we introduce HierVL: a novel hierarchical video-language model that captures both shortterm actions and long-term intents in video. Unlike standard video-language embeddings, our method aims to simultaneously capture the immediate observed actions as well as their contribution to the longer-term goal. To that end, given training video accompanied by timestamped clip-level text descriptions as well as global (video-level) text *summaries*, HierVL learns a video-text embedding for hierarchical temporal understanding using two layers of contrastive learning. The top (parent) layer encourages the *aggregated video clips* to be close to the overarching textual summary (e.g., *he makes spaghetti dinner*), while the bottom (child) layer trains individual clips to be similar to their respective descriptions (e.g., *he turns on the cooker*). See Fig. [1.](#page-0-0)
18
+
19
+ To our knowledge, ours is the first work to create a hierarchical video-language embedding. Our idea to blend abstract textual summaries with literal text descriptions is new. Furthermore, our model design addresses constituent technical challenges—namely, we circumvent the typical expense of long-term feature learning [\[4,](#page-8-4) [43,](#page-9-1) [86\]](#page-11-0) by using aggregation of short-term features, and we show how to jointly train with two levels of annotation in a way that staves off catastrophic forgetting of either layer.
20
+
21
+ This hierarchical training yields not only global videolevel representations that capture long-term information (e.g., intent and temporal dependencies), but also clip-level video features that are more expressive than those traditionally learned via single-level schemes. This happens by means of our parent-child learning framework, which requires the aggregation of clip features within a video to match the long-term context captured by the summary.
22
+
23
+ We demonstrate our model by training with the narrations and summaries in the 3,670-hour egocentric video dataset Ego4D [\[13,](#page-8-5) [34\]](#page-9-0). We show that HierVL outperforms strong baselines and state-of-the-art methods for multiple video benchmarks, successfully transferring its pretrained representation for inference on Charades-Ego [\[74\]](#page-10-5), EPIC-KITCHENS [\[16\]](#page-8-1), and HowTo100M [\[63\]](#page-10-4).[1](#page-1-0) We evaluate our representations on both hierarchy levels. In particular, at the time of submission, HierVL achieves state-of-theart performance on Ego4D Long Term Anticipation (LTA), Charades-Ego Action Recognition, EPIC-KITCHENS-100 Multi-Instance Retrieval (zero-shot and fine-tuned settings), and HowTo100M Long Video Classification.
24
+
25
+ # Method
26
+
27
+ We propose HierVL, a novel video-language model that captures both clip- and video-level relations. Fig. 2 overviews our method. Next, we describe the annotations (Sec. 3.1), formalize the embedding learning approach (Sec. 3.2), and discuss the feature aggregation strategy (Sec. 3.3). Finally, we describe the loss function (Sec. 3.4), training process (Sec. 3.5), and implementation details (Sec. 3.6).
28
+
29
+ Consider a hierarchically annotated video dataset, $\mathcal{D}_L = \{(V_i, N_i, S_i)\}_{i=1}^{|\mathcal{D}_L|}$ where $V_i$ is a long video, $N_i$ is a sequence of text narrations describing every atomic action in the video, and $S_i$ is a high-level text summary for the whole video. Notationally, $V_i = \{v_{ij}\}_{j=1}^{|V_i|}$ is an ordered collection of short clips v (each spanning a few seconds) and $N_i = \{n_{ij}\}_{j=1}^{|N_i|}$ is an ordered collection of narrations n. Note that there is no constraint on the temporal span of the video $V_i$ , but in our experiments they typically span minutes. As an illustration, $n_{ij}$ can be "he cleans the painting brush" or "he rubs the excess paint" whereas high-level summary $S_i$ will be "he was painting in a drawing room". The clip $v_{ij}$ contains a visual demonstration of the narration $n_{ij}$ , whereas $S_i$ is an abstractive summary of the full video $V_i$ . The idea is for clip-level representations to capture fine-grained actions in a video, while video-level representations should capture the overall goal of the task.
30
+
31
+ We leverage the Ego4D dataset [13, 34] for training our model. Ego4D consists of 3,670 hours of wearable camera video of daily-life activity, as captured by 931 unique camera wearers around the world. Among the Ego4D annotations are text descriptions ("narrations") of every action performed by the camera wearer, as well as video-level text summaries, which meet our requirements for *N*
32
+
33
+ and *S*, respectively. The free-form narrations are written at timepoints selected by the annotators to capture every action performed. Specifically, annotators first watched a full 5-minute video and wrote a short 1-3 sentence summary for the overall activity and environment. Then annotators were asked to pretend they were describing everything occurring in the video to a friend on the phone who cannot see the video. The result is a temporally dense play-by-play description—13.2 sentences per minute on average, for a total of 3.85M sentences (see Appendix D in [34] for details).
34
+
35
+ In our hierarchical setup, we have a short-term video segment v and short-term text n. We want to learn short-term representations $f_v(v)$ and $f_n(n)$ , which we refer to as the visual short-term features and the textual short-term features. At the long-term level, we have V and N as a collection of multiple v and multiple n, respectively. Simultaneously, we want to learn long-term representations $f_V(V)$ and $f_N(N)$ (referred to as the long-term visual feature and long-term text feature, respectively). Finally, we have $f_n(S)$ , the long-term summary feature, which is typically a few sentences long and hence is also encoded with $f_n$ .
36
+
37
+ The goal is to project v, n, V, N, S into a common space such that semantically related features are close. Mathematically, for any suitably selected similarity metric sim() and $\forall i_1, i_2, j_1, j_2$ such that $(i_1, j_1) \neq (i_2, j_2)$ , we would like to fulfill a *child-level* matching constraint:
38
+
39
+ <span id="page-2-3"></span>
40
+ $$sim(f_v(v_{i_1j_1}), f_n(n_{i_1j_1})) > sim(f_v(v_{i_1j_1}), f_n(n_{i_1j_2}))$$
41
+ (1)
42
+
43
+ and $\forall i, j$ such that $i \neq j$ , as well as *parent-level* matching constraints:
44
+
45
+ <span id="page-2-5"></span><span id="page-2-4"></span>
46
+ $$sim(f_V(V_i), f_n(S_i)) > sim(f_V(V_i), f_n(S_j))$$
47
+ (2)
48
+
49
+ $$sim(f_N(N_i), f_n(S_i)) > sim(f_N(N_i), f_n(S_j)).$$
50
+ (3)
51
+
52
+ Overall, Eq. 1 implies that corresponding short-term representations should have higher similarity than non-matching ones, while Eq. 2 (and Eq. 3) implies that a video (and its narrations) should have a higher similarity with its own summary than with other summaries. Note that since we project both short-term and long-term features into a common space, we allow features even at different hierarchical levels to come close in the embedding space if they are semantically similar.
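Ranking constraints of this kind are typically enforced with a contrastive objective; a minimal numpy sketch of an InfoNCE-style loss over a batch of matched embedding pairs (our illustration, not the paper's exact training code; the temperature value is an assumption):

```python
import numpy as np

def info_nce(V, T, tau=0.07):
    # V: (B, d) visual embeddings, T: (B, d) text embeddings; row i of V
    # matches row i of T, all other rows act as negatives.
    V = V / np.linalg.norm(V, axis=1, keepdims=True)
    T = T / np.linalg.norm(T, axis=1, keepdims=True)
    logits = V @ T.T / tau                                   # cosine similarities
    logp = logits - np.log(np.exp(logits).sum(1, keepdims=True))
    return float(-np.mean(np.diag(logp)))                    # matched pairs on diagonal
```

The same loss form serves both the child level (clip vs. narration) and the parent level (aggregated video or narrations vs. summary).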
53
+
54
+ Obtaining long-term features is challenging in both the visual and text modalities. Directly computing a long-term visual feature over a large video demands substantial compute, and often leads to inferior performance and memory overflow [4, 43, 84, 86]. Self-attention models are suitable architectures for capturing long-term dependencies, but
55
+
56
+ <span id="page-3-2"></span><span id="page-3-0"></span>![](_page_3_Figure_0.jpeg)
57
+
58
+ Figure 2. Schematic representation of our proposed approach. In the clip-level contrastive learning setup (top), we match video clips with their corresponding narrations. The selected clips in one batch are from different videos, as shown. In our novel parent-level contrastive learning setup (bottom), we sample short-term features and aggregate them into a long-term representation, followed by contrastive matching with the summary feature. These clips are sampled from the same video. Note that $f_v$ and $f_n$ are common to both stages, and also trainable in both. (For simplicity, the figure shows only positive pairs in the contrastive setup.)
59
+
60
+ they are challenging to apply to large collections of text sentences (e.g., long documents) due to the quadratic dependence of transformer models on the token sequence length [18]. Longformer [6] mitigates this problem with a combination of local windowed attention and global attention.
61
+
62
+ Taking inspiration from these works in both the visual and textual domains, we use aggregations of short-term features as long-term representations $f_V$ and $f_N$ . Following this strategy, we define the long-term visual representation $f_V$ as $f_V(V_i) = Agg\left(\{f_v(v_{ij})\}_{j=1}^{|V_i|}\right)$ . Similarly, the long-term textual representation $f_N$ is defined as $f_N(N_i) =$ $Agg\left(\left\{f_n(n_{ij})\right\}_{j=1}^{|N_i|}\right)$ . We consider two aggregator functions Agg(.). The first uses a self-attention transformer block in order to capture long-term dependencies over the entire video. We use positional encodings in order to provide the model with the ability to embed temporal order information in the video-level representation. We denote with **HierVL-SA** the variant of our model based on this self-attention aggregator. The second form of aggregation that we consider is simple average pooling (i.e., a parameter-free aggregator), which produces long-term features with equal contributions from all short-term features. This aggregator does not preserve order information. We name this version **HierVL-Avg**. We use the same aggregator in both modalities since $f_v(v)$ and $f_n(n)$ have the same dimensions (and, in fact, equal values for matching visual-text pairs in an ideal contrastive training).
63
+
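The two aggregators can be sketched in a few lines. Below is a minimal NumPy illustration with a toy single-head attention block and sinusoidal positional encodings; the projection matrices `W_q`, `W_k`, `W_v` and the final mean pooling are simplifying assumptions, not the paper's actual transformer architecture or dimensions.

```python
import numpy as np

def avg_aggregator(short_feats):
    """HierVL-Avg style: parameter-free mean pooling (order-agnostic)."""
    return short_feats.mean(axis=0)

def self_attn_aggregator(short_feats, W_q, W_k, W_v):
    """HierVL-SA style sketch: one single-head self-attention block over
    the sequence of short-term features, with additive sinusoidal
    positional encodings so temporal order can influence the result."""
    T, d = short_feats.shape
    pos = np.arange(T)[:, None] / (10000 ** (np.arange(d)[None, :] / d))
    x = short_feats + np.sin(pos)               # add positional encoding
    q, k, v = x @ W_q, x @ W_k, x @ W_v         # query/key/value projections
    scores = q @ k.T / np.sqrt(d)
    attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)    # row-wise softmax
    return (attn @ v).mean(axis=0)              # pool to a single vector
```

Note that `avg_aggregator` is invariant to permuting the clips, whereas the attention variant is not, which is exactly the trade-off discussed above.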
64
+ As introduced previously, we learn the representations at two levels—child-level $f_v$ , $f_n$ and parent-level $f_V$ , $f_N$ . For child level representations, the pretraining objective is similar to prior work [55,61,63,90] that relates short-term visual representations to short-term textual representations. In particular, we use a variant of EgoNCE [55], an action- and scene-aware variation of InfoNCE [66]. EgoNCE groups similar actions as positives and temporally close distinct actions as hard negatives. In contrast, we omit the latter, since our hierarchical setup ought to bring together distinct actions with the same camera-wearer intent. Overall, the short-term pretraining objective is:
65
+
66
+ $$\mathcal{L}_{child} = \frac{1}{|\tilde{\mathcal{B}}|} \sum_{i \in \tilde{\mathcal{B}}} \log \left( \frac{\sum_{j \in \tilde{\mathcal{P}}_i} \exp(f_v(v_i)^T f_n(n_j))}{\sum_{j \in \tilde{\mathcal{B}}} \exp(f_v(v_i)^T f_n(n_j))} \right)$$
67
+
68
+ where $\tilde{\mathcal{B}}$ is the overall set of short-term features and $\tilde{\mathcal{P}}$ is the per-instance set of action-aware positive samples (see [55] for details). See Fig. 2 (top).
69
+
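Concretely, the child-level objective can be sketched as follows. This is a minimal NumPy version written as a quantity to minimize (the negated mean log-ratio); the temperature, encoder details, and the construction of the action-aware positive sets of EgoNCE [55] are simplified away.

```python
import numpy as np

def child_nce_loss(fv, fn, positives):
    """Sketch of the child-level objective (hard negatives omitted,
    as in the paper). fv, fn: [B, d] L2-normalized short-term visual
    and text features; positives[i] is the index set of action-aware
    positives for clip i (always containing i itself)."""
    sims = np.exp(fv @ fn.T)          # exp(f_v(v_i)^T f_n(n_j)) for all pairs
    loss = 0.0
    for i, pos in enumerate(positives):
        loss -= np.log(sims[i, pos].sum() / sims[i].sum())
    return loss / len(positives)
```

With matched visual/text features the ratio inside the log approaches its maximum, so the loss is smaller than for mismatched pairings.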
70
+ <span id="page-4-4"></span>At the parent level, we use a similar pretraining objective between S-V and S-N. See Fig. 2 (bottom). As discussed in Sec. 3.3, we aggregate v to obtain V (and aggregate n to get N). Since the short-term matching already contrasts v and n, we do not contrast $f_V$ and $f_N$ again at the parent-level. Overall, the long-term pretraining objective is $\mathcal{L}_{parent} = \mathcal{L}_{parent}^{SV} + \mathcal{L}_{parent}^{SN}$ where
71
+
72
+ $$\mathcal{L}_{parent}^{SV} = \frac{1}{|\tilde{\mathcal{B}}|} \sum_{i \in \tilde{\mathcal{B}}} \log \left( \frac{\sum_{j \in \tilde{\mathcal{P}}_i} \exp(f_V(V_i)^T f_n(S_j))}{\sum_{j \in \tilde{\mathcal{B}}} \exp(f_V(V_i)^T f_n(S_j))} \right)$$
73
+
74
+ and similarly for $\mathcal{L}_{parent}^{SN}$ . For the parent-level feature, negatives for a summary text $S_i$ are both visual and textual representations chosen from outside the temporal span of $S_i$ .
75
+
76
+ So far, we have discussed our approach for hierarchical video-language pretraining. To realize this setup, we employ a joint training approach. First, we train m batches of short-term visual and textual pairs (v,n) — thus training $f_v$ and $f_n$ . Subsequently, we train one batch of long-term features — thereby training $f_V$ and $f_N$ . Recall that $f_V(.) =$ $Agg(f_v(.))$ and $f_N(.) = Agg(f_n(.))$ . Therefore, in this batch, we update the weights of Agg as well as short-term $f_v$ and $f_n$ . The contrastive objective is detailed in Sec. 3.4.
77
+
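The alternating schedule can be sketched as a simple loop. The `train_child_step` and `train_parent_step` callables are hypothetical placeholders for the actual contrastive gradient updates of $(f_v, f_n)$ and $(Agg, f_v, f_n)$, respectively.

```python
def joint_training(batches, m, train_child_step, train_parent_step):
    """Sketch of the joint schedule: m batches of short-term (v, n)
    pairs update f_v and f_n, then one batch of long-term features
    updates Agg together with f_v and f_n, and the cycle repeats."""
    schedule = []
    for step, batch in enumerate(batches):
        if (step + 1) % (m + 1) == 0:      # every (m+1)-th batch
            train_parent_step(batch)       # updates Agg, f_v, f_n
            schedule.append("parent")
        else:
            train_child_step(batch)        # updates f_v, f_n only
            schedule.append("child")
    return schedule
```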
78
+ The motivation behind training both levels of annotations together is to ensure the functions $f_v$ and $f_n$ optimize for both short-term and long-term features, i.e., both are influenced by the text summaries. Other alternatives are (a) using separate models for clip-level and video-level features, but that increases the parameters in the model and makes the training difficult (both in terms of convergence and GPU usage), and (b) training with only clip-level data and fine-tuning it for video-level (or vice-versa), but such strategies are known to lead to catastrophic forgetting [33,41,42].
79
+
80
+ Fig. 3 visualizes the learned features for 500 summary texts and their child narrations using our $f_n$ (left) and EgoVLP's features (right). While summary features in EgoVLP are unrelated to the narrations, HierVL captures their natural hierarchy, as seen by the colors clustering together in the embedding space. This reshaping of the features reflects how our clip-level features convey context about the higher-level intent of the camera wearer.
2302.09170/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2302.09170/paper_text/intro_method.md ADDED
@@ -0,0 +1,58 @@
 
1
+ # Introduction
2
+
3
+ Large pre-trained language models (PLMs) [@radford2019language; @lewis2020bart; @raffel2020exploring] have achieved great success across all NLP tasks. However, recent studies also reveal that PLMs are susceptible to memorizing the pre-training corpora rather than capturing the knowledge within them [@niven2019probing; @talmor2020olmpics; @yasunaga2022linkbert; @li2022instilling]. Particularly for generation tasks, PLMs are notorious for hallucinating text that is factually incorrect or hard to verify [@logan2019barack; @sun2020ernie; @lin2020commongen; @longpre2021entity]. To address these issues, one approach is to retrieve relevant knowledge and integrate it explicitly with PLMs [@he2020bert; @liu2021kg]. Another direction is incorporating the additional knowledge sources into the pre-training step [@zhang2019ernie; @xiong2019pretrained; @liu2021knowledge; @wang2021kepler]. While the former suffers from the issue of falling back on the models themselves without retrieved information [@krishna2021hurdles], knowledge-focused pre-training can be complementary to those methods [@longpre2021entity] and shows its advantage on generalization.
4
+
5
+ <figure id="fig:object" data-latex-placement="t">
6
+ <embed src="imgs/general.pdf" />
7
+ <figcaption>The illustration of the proposed KILM technique for injecting knowledge into PLMs. In the given example, the mention, <em>Joker</em>, is linked to the page of Wikipedia entity <em>Joker (character)</em>. While the figure only shows knowledge infilling, knowledge masking, and masked knowledge reconstruction steps, the proposed method is combined with the original pre-training objectives of PLMs for continued pre-training.</figcaption>
8
+ </figure>
9
+
10
+ In this paper, we propose an approach for injecting knowledge into encoder-decoder PLMs, such as BART, as a continued pre-training process. We refer to it as *Knowledge Injection into Language Models* (KILM). Instead of introducing additional parameters to PLMs or modifying the model architectures to incorporate additional knowledge, KILM infills knowledge sentences by adopting a novel knowledge infilling objective that includes a knowledge reconstruction step in addition to the original pre-training objectives of BART.
11
+
12
+ The aim of KILM is to teach PLMs additional content about the concepts and entities that they encounter in a given context, so that the models are able to ground an entity mention with additional information and "describe" what that entity is (see [1](#fig:object){reference-type="ref+Label" reference="fig:object"}). It should be emphasized that in this process the context is especially important for cases where an entity mention can refer to multiple entities, e.g., *Titanic*, which can refer to the *British ship* or to the *1997 movie*. We utilize the *short descriptions* of entities in Wikipedia, which consist of entity definitions, as the knowledge source ([3.1](#sec:preliminary){reference-type="ref+Label" reference="sec:preliminary"}). Although there are existing works leveraging similar knowledge for PLM enhancement, they ignore the relationship among entities, contexts, and entity-centric knowledge, and restrict their applications to NLU tasks. In contrast, we propose a distinct structure ([3.2](#sec:kilm){reference-type="ref+Label" reference="sec:kilm"}) to augment Wikipedia articles with short descriptions of the entity mentions in the context, thereby modeling this essential relationship and forcing PLMs to learn the correlation between entities and contexts and to differentiate between entities with similar surface forms during continued pre-training. Given recent work highlighting the need for explicit grounding for PLMs to truly understand text [@merrill2021provable], we posit that KILM takes a step in that direction.
13
+
14
+ The proposed structure for knowledge infilling in KILM is further leveraged as a structured prompt in downstream tasks (see [4.2](#sec:kn_tasks){reference-type="ref+Label" reference="sec:kn_tasks"}). We demonstrate better knowledge retention with KILM in zero-shot for entity disambiguation and appositive generation tasks, showing the effectiveness of the proposed method. Even without the distinct structure, we also find that BART with KILM outperforms BART on QA tasks and is less prone to hallucination on tasks such as knowledge-grounded response generation. As mentioned earlier, KILM relies on continued pre-training of PLMs, which presents the possibility of catastrophic forgetting of original skills of the PLM. We mitigate this by retaining the original training objectives of BART during the continued pre-training stage. We empirically verify that our proposed objective does not degrade the general language modeling ability of the PLM, nor affect the fluency of these models for natural language generation (NLG) tasks. Although we focus on short descriptions of entities as the knowledge source for KILM, other forms of knowledge can also be used, which we leave for future exploration.
15
+
16
+ We summarize our contributions as follows:
17
+
18
+ \(1\) We propose a novel approach, KILM, to leverage Wikipedia annotations in pre-training of PLMs. We inject knowledge into BART, solely through continued pre-training, with no change in the architecture of the PLMs. KILM enables entity-based knowledge injection with knowledge in natural-language form. KILM's distinct structure also offers a direct way to probe the entity knowledge retained in pre-trained models.
19
+
20
+ \(2\) We show that KILM enhances the performance of BART on knowledge-intensive tasks while maintaining its original performance on other downstream tasks. KILM demonstrates improved zero-shot performance on entity disambiguation task, outperforming state-of-the-art models having 30x more parameters.
21
+
22
+ # Method
23
+
24
+ Although KILM is model-agnostic and could be used for any PLM (more on this in [5.0.0.4](#sec:other-plm){reference-type="ref+label" reference="sec:other-plm"}), in this work, due to high computation costs, we focus on applying KILM to BART [@lewis2020bart].
25
+
26
+ Wikipedia is a widely-used text corpus for LM pre-training. It is often processed as a collection of individual articles in the form of flat natural language text. However, due to the existence of hyperlinks in its text, Wikipedia is also a complex web of connected Wikipedia topics, also known as Wikipedia entities. These hyperlinks build connections between different Wikipedia entities and establish a rich source of information that is mostly ignored in current pre-training approaches. Moreover, most Wikipedia articles come with a *short description* of the entity (topic) discussed in the article. These short descriptions provide definitions for Wikipedia entities. In this work, we take an initial step towards using this additional information within Wikipedia articles, utilizing the "short descriptions" of entities for continued pre-training of PLMs. Note that the proposed approach could be expanded to **other annotated text corpora**.
27
+
28
+ We propose KILM, which extends the text-infilling objective to a knowledge infilling objective through continued pre-training. KILM, as shown in [1](#fig:object){reference-type="ref+Label" reference="fig:object"}, consists of three steps: (1) *knowledge infilling*, (2) *knowledge masking*, and (3) *masked knowledge reconstruction*.
29
+
30
+ As mentioned in [3.1](#sec:preliminary){reference-type="ref+Label" reference="sec:preliminary"}, in this work, we mainly focus on injecting hyperlinks and entity descriptions, as entity-related knowledge, into PLMs. Specifically, we process Wikipedia data such that entity mentions in Wikipedia articles (which are annotated by hyperlinks) are marked with a start-of-entity token `<ent>` and an end-of-entity token `</ent>`. Also, each entity mention is followed by an entity-related knowledge sentence marked with `<ent_desc>` and `</ent_desc>` as start- and end-of-description tokens. The inserted knowledge component (highlighted in blue in [1](#fig:object){reference-type="ref+Label" reference="fig:object"}) consists of the corresponding hyperlinked entity (which might be different from the entity's surface form in the text) and the entity's short description connected with the `<sep>` token, where the short description is obtained from a lookup table extracted from the Wikipedia dump. We denote this knowledge infilling transformation as [KnInfill]{.smallcaps}.
31
+
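As a concrete illustration, the KnInfill transformation on a toy sequence might look like the sketch below. It operates on plain string tokens for readability; the real pipeline works on the Wikipedia dump with subword tokenization, and the function name and signature are hypothetical.

```python
def kn_infill(tokens, mention_idx, entity_name, short_desc):
    """Wrap the entity mention in <ent>...</ent> and append the knowledge
    component "<ent_desc> entity <sep> description </ent_desc>", where the
    description would come from a Wikipedia short-description lookup."""
    before = tokens[:mention_idx]
    mention = tokens[mention_idx]
    after = tokens[mention_idx + 1:]
    knowledge = ["<ent_desc>", entity_name, "<sep>", short_desc, "</ent_desc>"]
    return before + ["<ent>", mention, "</ent>"] + knowledge + after

# Example mirroring Fig. 1: "Joker" linked to the entity "Joker (character)".
augmented = kn_infill(["I", "watched", "Joker", "yesterday"], 2,
                      "Joker (character)",
                      "supervillain appearing in DC Comics")
```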
32
+ The processed data is used for the continued pre-training of a PLM. During this step, we conduct knowledge masking transformation (denoted as [KnMask]{.smallcaps}) and the model is trained to reconstruct the whole inserted knowledge component from a single `<mask>` token with respect to the context. More specifically, assuming the $i$th token $t_i$ is a mention of an entity, the masked input sequence $\mathbf{X}$ and the output sequence $\mathbf{Y}$ can be denoted as: $$\begin{align*}
33
+ \begin{split}
34
+ \mathbf{X} =& \{t_1, ..., t_{i-1}, \colorbox{myorange}{\texttt{\small<ent>}}, t_i, \colorbox{myorange}{\texttt{\small</ent>}, \texttt{\small<ent\_desc>}}, \\
35
+ & \colorbox{myblue}{\texttt{\small<mask>}}, \colorbox{myorange}{\texttt{\small</ent\_desc>}}, t_{i+1}\, ..., t_N\},
36
+ \end{split}
37
+ \\
38
+ \begin{split}
39
+ \mathbf{Y} =& \{t_1, ..., t_{i-1}, \colorbox{myorange}{\texttt{\small<ent>}}, t_i, \colorbox{myorange}{\texttt{\small</ent>}, \texttt{\small<ent\_desc>}}, \\
40
+ & \colorbox{myblue}{$k_1$, ..., $k_L$}, \colorbox{myorange}{\texttt{\small</ent\_desc>}}, t_{i+1}\, ..., t_N\},
41
+ \end{split}
42
+ \end{align*}$$ where $t_n$ represents the $n$th token of the original target sequence and $k_l$ represents the $l$th token in the knowledge sequence of length $L$.
43
+
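The masking step that produces $\mathbf{X}$ from an infilled sequence can be sketched as follows, again on plain string tokens for illustration (a hypothetical helper, not the actual preprocessing code): everything between `<ent_desc>` and `</ent_desc>` collapses to a single `<mask>` in the input, while the target keeps the full knowledge component $k_1, ..., k_L$.

```python
def kn_mask(infilled_tokens):
    """KnMask sketch: return (X, Y) where X masks the inserted knowledge
    span with one <mask> token and Y keeps it, so the model must
    reconstruct the span from context."""
    y = list(infilled_tokens)
    start = y.index("<ent_desc>")
    end = y.index("</ent_desc>")
    # keep the description markers, mask everything between them
    x = y[:start + 1] + ["<mask>"] + y[end:]
    return x, y
```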
46
+
47
+ The parameters $\theta$ of the PLM are optimized by a masked knowledge reconstruction loss:
48
+
49
+ $$\begin{equation*}
50
+ \mathcal{L}_{kn} = \mathbb{E}\left(\sum_{l=1}^{L}-\log\left( p\left(k_l|t_{1:(i+l+2)}, \mathbf{X}, \theta\right)\right)\right).
51
+ \end{equation*}$$
52
+
53
+ Since our goal is to inject entity-related knowledge without disrupting the function of the original BART as a general PLM, the masked knowledge reconstruction loss is combined with the original text infilling objective of BART during continued pre-training.[^3] At training time, the model is optimized by minimizing the reconstruction loss over the whole target sequence instead of only the recovered masked spans. As a result, the training objectives force the model to learn to copy the tokens from the input sequences when the token is not a mask token during the pre-training process. This is to help the model recognize the inserted knowledge components in the training sequences and ensure the fluency of the PLM on NLG tasks. The weights of different objectives for loss are calculated based on the proportion of the corresponding spans across the entire sequence. We summarize the proposed KILM algorithm in [10](#sec:appx:alg){reference-type="ref+Label" reference="sec:appx:alg"}.
54
+
55
+ The advantages of leveraging this structure for training are two-fold. First, this structure builds an alignment between the entity-related knowledge and the corresponding mention in the paragraphs. Second, the injected knowledge can be easily induced by probing the PLM with the structured prompts proposed for KILM ([4.2](#sec:kn_tasks){reference-type="ref+Label" reference="sec:kn_tasks"}).
56
+
2303.09032/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2303.09032/main_diagram/main_diagram.pdf ADDED
Binary file (45 kB). View file
 
2303.09032/paper_text/intro_method.md ADDED
@@ -0,0 +1,55 @@
 
1
+ # Introduction
2
+
3
+ In recent years multi-agent reinforcement learning (MARL) has drawn much attention and has shown high potential for application to various real-world scenarios, such as transportation [@seow2009collaborative], robotics [@huttenrauch2017guided; @perrusquia2021multi], and autonomous driving [@cao2012overview; @shalev2016safe]. Cooperative MARL is concerned with a special learning setting with common rewards shared across all agents, where agents must coordinate their strategies to achieve a common goal. There are several major challenges posed by this setting, such as credit assignment, scalability, non-stationarity, and partial observability. To address these challenges, @bernstein2002complexity propose the Centralized Training with Decentralized Execution (CTDE) learning paradigm. In this paradigm, information is shared across agents during training, guiding the learning of individual agents' policies and promoting cooperation, while agents remain able to act independently during decentralized execution.
4
+
5
+ One important line of research in CTDE is value decomposition, which learns a centralized action-value function that can be factorized into the individual utility function (often referred to as individual Q-function) of each agent. The centralized value function performs implicit credit assignment to determine each agent's contribution towards common returns, and learns implicit inter-dependencies to encourage cooperation. To ensure the centralized policy is aligned with individual policies, @son2019qtran propose the Individual-Global-Max (IGM) principle that guarantees consistency between global and local greedy actions. A common approach to value decomposition is to learn a mixing network that computes the centralized action value from the utilities of all agents. Depending on the specific way to satisfy IGM, different methods have been introduced, including VDN [@sunehag2017value], QMIX [@rashid2018qmix], QTRAN [@son2019qtran], and QPLEX [@wang2020qplex].
6
+
7
+ Cooperative exploration adds another level of difficulty to the single-agent exploration challenge. In cooperative MARL, agents need to coordinate to explore the large joint state-action space, as high-performing joint strategies may require a high degree of collaboration among them. In addition, there may exist different types of cooperative strategies associated with a task. Sound cooperative exploration methods should be able to identify the optimal strategy among potentially many sub-optimal ones. For instance, if the task for a group of robots is to deliver packages in a warehouse, the optimal strategy may be for several agents to jointly lift a heavy package that no single agent can lift, while each remaining agent carries a small package on its own. In this task, delivering items either only collectively or only separately is sub-optimal, even though either way the agents achieve a cooperative strategy. Cooperative exploration is therefore challenging in cooperative MARL. Although directed exploration strategies have been widely studied in multi-armed bandit and single-agent RL settings, they fail to account for cooperation among agents. Moreover, it is not straightforward to adapt single-agent methods to cooperative MARL, due to the exponentially large state-action space and multi-agent credit assignment. The popular $\varepsilon$-greedy strategy has been shown to be ineffective in complex MARL coordination tasks [@wang2020qplex].
8
+
9
+ Some recent works encourage cooperative exploration in MARL settings by maximizing the correlation among agents' behaviour, which trains each agent's policy to account for influences from other agents, so that agents achieve effective collaborative exploration behaviour. Correlation maximization is often realized by maximizing the mutual information (MI) between quantities that determine or reflect agents' behaviours, such as the trajectory history of each agent. Utilizing this idea, several works have been proposed and have empirically outperformed the value decomposition baselines across various benchmark tasks [@jaques2019social; @mahajan2019maven; @wang2019influence; @kim2020maximum; @li2021celebrating]. However, two major issues remain when MI-based methods are used. First, optimizing the MI quantity for every pair of agents is not scalable, because the computation required to optimize all MI losses grows quadratically with the number of agents. Second, agents could learn different types of cooperative strategies, and one particular type may not lead to high performance. As pointed out by @li2022pmic, simply maximizing the MI may not lead to high returns, because agents may learn a sub-optimal joint strategy regardless of how strong a correlation they achieve.
10
+
11
+ In this work, we seek to explicitly leverage inter-agent dependencies to drive cooperative exploration. Our insight is simple: as a complement to implicit dependencies learned by centralized training, if each agent's optimism estimate explicitly encodes a structured dependency relationship with other agents, by performing optimism-based exploration, agents would be guided to effectively explore cooperative strategies. It is worth noting that centralized training (CT) only requires the joint policy to output the joint actions of all agents at the same time, without posing restrictions on how the controlling algorithm performs internal calculations or what information is allowed to be shared across agents. During CT, assuming at *each environment timestep* agents compute actions according to a sequential pre-determined order before executing them simultaneously, we can view the action computation sequence as a path from the root to a leaf of a tree. At each node of the tree, we consider the preceding agent's action as the parent node of the current agent. We revisit the idea of UCT exploration [@kocsis2006bandit] proposed in the perfect-information game setting, where the game state is accessible at all nodes, and take inspiration from it to encourage cooperative exploration in MARL. We develop a method called Conditionally Optimistic Exploration (COE). Essentially, COE performs optimism-based exploration by computing the upper confidence bounds of each action for the current agent, conditioned on the visitation count of its parent node (i.e., preceding agents' actions). To obtain decentralized agents in the decentralized execution (or deployment) phase, COE is not applied, i.e., we disable exploration by removing the optimistic bonus terms.
12
+
13
+ In the subsequent sections, we first review the background on MARL and the UCT algorithm. We then describe how conditional optimism can be applied to the MARL setting to encourage cooperative exploration. We build COE on commonly used value decomposition methods, and utilize the hash-based counting technique [@tang2017exploration] to enable counting the visitations in continuous state-action domains. Our empirical results on various benchmark domains show that our method is more effective than well-known baselines in challenging exploration tasks, and matches baseline performance in general MARL tasks. Our source code is available at <https://github.com/chandar-lab/COE>.
14
+
15
+ We model the cooperative multi-agent task as a Dec-POMDP (Decentralized Partially Observable Markov Decision Process) [@oliehoek2016concise], which is formally defined as a tuple $G = \langle \mathcal{S}, \mathcal{A}, P, R, \Omega, O, n, \gamma \rangle$, where $\mathcal{S}$ is the global state space, $\mathcal{A}$ is the action space, $\Omega$ is the observation space, $n$ is the number of agents in the environment, and $\gamma \in [0,1]$ is the discount factor. At each timestep $t$, on state $s \in \mathcal{S}$ each agent $i \in \mathcal{N} \equiv \{1,\dots,n\}$ takes an action $a_i \in \mathcal{A}$. The joint action $\mathbf{a} = [a_i]^n_{i=1} \in \boldsymbol{\mathcal{A}} \equiv \mathcal{A}^n$ leads to the next state $s'$ sampled from the transition probability $P(s'|s, \mathbf{a}): \mathcal{S}\times \boldsymbol{\mathcal{A}} \times \mathcal{S}\rightarrow [0,1]$, and obtains a global reward $r$ according to the reward function $R(s, \mathbf{a}): \mathcal{S}\times \boldsymbol{\mathcal{A}} \rightarrow \mathbb{R}$ shared across all agents. Each agent $i$ has a local policy $\pi_i(a_i|s): \mathcal{S}\times \mathcal{A}\rightarrow [0, 1]$. Based on the joint policy $\boldsymbol{\pi} \equiv [\pi_i]^n_{i=1}$, the joint action-value function is defined as $Q_{\boldsymbol{\pi}}(s, \mathbf{a}) = \mathbb{E}_{\boldsymbol{\pi}} \left[ \sum^\infty_{k=0} \gamma^{k} r^{(t+k)}| s^{(t)} = s, \mathbf{a}^{(t)} = \mathbf{a} \right]$. The objective is to find a joint policy that maximizes the action-value function.
16
+
17
+ We consider the partially observable setting, where each agent $i$ does not observe the global state $s$, instead only has access to a local observation $o_i \in \Omega$ drawn from the observation function $O(s, i): \mathcal{S}\times \mathcal{N} \rightarrow \Omega$. Hence each agent $i$ maintains its action-observation history $\tau_i \in T \equiv (\Omega \times \mathcal{A})^* \times \Omega$, on which it can condition its policy $\pi_i(a_i|\tau_i): T \times \mathcal{A}\rightarrow [0, 1]$. With agent $i$ observing the next observation $o_i'$, the updated next history is represented by $\tau_i' = \tau_i \cup \{a_i, o_i' \}$. We denote the joint history by $\boldsymbol{\tau}\equiv [\tau]^n_{i=1} \in \mathbf{T} \equiv T^n$, and similarly joint next history by $\boldsymbol{\tau}' \equiv [\tau']^n_{i=1}$.
18
+
19
+ UCT (Upper Confidence bounds applied to Trees) [@kocsis2006bandit] is a tree search algorithm commonly used in Monte-Carlo Tree Search for perfect-information games. In UCT, node selection is treated as a multi-armed bandit problem, where at each node its children nodes correspond to the arms, and the Upper Confidence Bound (UCB) bandit algorithm [@auer2002finite] is used to select the child node with the highest upper confidence. In particular, consider a sequence of node selections from the root to a leaf of a search tree as a trajectory at one timestep, at each depth the child node $i$ with the highest upper confidence bound is selected: $$\begin{equation}
20
+ \label{eq:uct-bandit-act}
21
+ B_i = X_i + c \sqrt{\frac{2 \log(p)}{n_i}},
22
+ \end{equation}$$ where $X_i$ is the empirical mean of the rewards that have been obtained by trajectories going through node $i$, $c$ is a constant controlling the scale of exploration, $n_i$ and $p$ are the number of times node $i$ and its parent node have been visited, respectively. Intuitively, conditioned on previously taken actions in the trajectory, at the current node actions that have been taken fewer times will have a higher exploration bonus, hence UCT tends to take action combinations that are under-explored or promising actions with higher reward estimates. When the trajectory is completed, a reward is received at the leaf. The visitation count and reward estimate of each selected node are updated accordingly. The original paper provides a regret analysis of the UCT algorithm, proving that its expected regret is upper bounded by $O(\log t)$, where $t$ is the number of trajectories/timesteps.
23
+
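A minimal sketch of this selection rule, with each child represented as a (mean reward, visit count) pair and `c` fixed to 1 for illustration; unvisited children receive an infinite bonus and are tried first:

```python
import math

def uct_select(children, c=1.0):
    """Return the index of the child with the highest upper confidence
    bound B_i = X_i + c * sqrt(2 * log(p) / n_i), where the parent
    count p is taken as the sum of the children's visit counts."""
    p = sum(n for _, n in children)
    best, best_b = None, -math.inf
    for i, (x, n) in enumerate(children):
        b = math.inf if n == 0 else x + c * math.sqrt(2 * math.log(p) / n)
        if b > best_b:
            best, best_b = i, b
    return best
```

A rarely visited child can thus be selected over a child with a higher empirical mean, which is exactly the conditional exploration behaviour described above.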
24
+ # Method
25
+
26
+ We first briefly describe the value decomposition learning paradigm [@sunehag2017value; @rashid2018qmix]. We then present how we utilize conditional counts on value decomposition to drive optimistic exploration.
27
+
28
+ Each agent $i$ has an independent Q-network $Q_{i}^{idp} (\tau_i, a_i; \phi_i)$ parameterized by $\phi_i$. It is important to note that the superscript $idp$ indicates that the $Q_{i}$ is *independent* of other agents' actions, as opposed to a $Q_{i}^{dep}$ that is *dependent* on predecessors' actions if action computation follows a sequential order. The same naming rule also applies to joint Q-values. A mixing network $\text{Mixer}(\cdot ; \theta)$ parameterized by $\theta$ is used to compute the joint Q-values from all individual Q-values: $$\begin{equation}
29
+ \label{eq:q-joint-ind}
30
+ Q_{joint}^{idp}(\boldsymbol{\tau}, \mathbf{a}) = \text{Mixer}\left( \left[ Q_{i}^{idp} \left(\tau_i, a_i \right) \right]^n_{i=1}, s; \theta \right).
31
+ \end{equation}$$ Individual agent's action-value networks $Q_{i}^{idp}$ and the mixing network $\text{Mixer}$ are trained by minimizing the mean-squared temporal-difference error: $$\begin{equation}
32
+ %\label{eq:td-err-joint}
33
+ \mathcal{L}^{idp} \left( \left[\phi \right]^n_{i=1}, \theta \right) = \mathbb{E}_\mathcal{D}\left[ \left(Q_{joint}^{idp} \left(\boldsymbol{\tau}, \mathbf{a}\right) - y^{idp} \right)^2 \right] \label{eq:td-err-ind}
34
+ \end{equation}$$ where $y^{idp} = \left(r + \gamma \max_{\mathbf{a}'} \left(Q_{joint}^{idp} \left(\boldsymbol{\tau}', \mathbf{a}' \right) \right) \right)$ is the update target, and $\mathcal{D}$ is the replay buffer containing trajectory data collected by all agents. It is worth noting that by the IGM principle, the greedy actions selected by the $Q_{i}^{idp}$'s are the same actions that $Q_{joint}^{idp}$ would have taken. As centralized training backpropagates the global reward signal to learn the individual utilities $Q_{i}^{idp}$'s, value factorization implements an implicit multi-agent credit assignment that enables each agent to grasp the inter-dependency among all utilities.
35
+
36
+ Building on top of the value decomposition skeleton, we incorporate count-based optimism in both action computation and learning during the centralized training (CT) phase. For action computation, each agent $i$ selects greedy actions with respect to its conditional optimistic action-value $$\begin{equation}
37
+ \label{eq:ucb-act}
38
+ a_i = \arg\max_{a_i'} \left\{ Q_{i}^{idp} \left(\tau_i, a_i' \right) + c_\text{act}\sqrt{\frac{2 \log \left(N \left(s, a_{<i} \right)\right) }{N \left(s, a_{<i}, a_i' \right) }} \right\},
39
+ \end{equation}$$ where $c_\text{act}\in \mathbb{R}_+$ is a hyper-parameter controlling the scale of optimism, and $N(\cdot)$ denotes the visitation count. Note that counting is performed in the global state space, thanks to centralized training. The learning framework of COE is illustrated in [2](#fig:coe-framework){reference-type="ref+label" reference="fig:coe-framework"}.
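The action rule above can be sketched as follows; `counts_parent` and `counts_child` are hypothetical stand-ins for the conditional counts $N(s, a_{<i})$ and $N(s, a_{<i}, a_i')$, and the guard against zero counts is an implementation detail of this sketch.

```python
import math

def ucb_action(q_values, counts_parent, counts_child, c_act=1.0):
    """argmax_a { Q_i(a) + c_act * sqrt(2 * log N(s, a_<i) / N(s, a_<i, a)) }."""
    best_a, best_val = None, -math.inf
    for a, q in enumerate(q_values):
        n_a = max(counts_child[a], 1)  # avoid division by zero for unseen actions
        bonus = c_act * math.sqrt(2.0 * math.log(max(counts_parent, 2)) / n_a)
        if q + bonus > best_val:
            best_a, best_val = a, q + bonus
    return best_a

# Action 1 has the higher utility, but action 0 has barely been tried under
# this (s, a_<i) context, so the optimism bonus flips the choice.
q = [0.5, 0.8]
chosen = ucb_action(q, counts_parent=100, counts_child=[1, 99], c_act=1.0)
```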
40
+
41
+ Moreover, we augment the global reward and the bootstrapped target each with a bonus term, such that the update target becomes $$\begin{multline}
42
+ \label{eq:ucb-target-joint}
43
+ y^{idp} = \left( r \left(s, \mathbf{a}\right) + \frac{c_\text{rew}}{\sqrt{N \left(s, \mathbf{a}\right)}} \right) + \gamma \max_{\mathbf{a}'} \Biggl[ \\
44
+ \text{Mixer}\left( \left[ Q_{i}^{idp} \left(\tau_i', a_i' \right) + \frac{c_\text{boot}}{\sqrt{N \left(s', a_{<i}', a_i' \right)}} \right]^n_{i=1} \right) \Biggr],
45
+ \end{multline}$$ where $c_\text{rew}, c_\text{boot}\in \mathbb{R}_+$ are hyper-parameters controlling the scale of the optimistic bias in the reward and the bootstrapped target, respectively. These two bonus terms are added for two major reasons. First, we intend to maintain long-term optimism in the Q-functions. The acting-time optimism decreases as the corresponding count is incremented, but unlike bandit or tabular MDP methods, COE's Q-value estimate is updated at a slower rate due to the nature of gradient updates of neural networks. To encourage COE to explore persistently, the augmentation to the bootstrap target allows the Q-value itself to encode optimism through the TD-loss update. Second, since the bootstrap target is defined based on the Q-value estimates of the next state-actions, this optimistic bootstrap target also captures uncertainty from subsequent agents and future timesteps. The idea of learning optimistic Q-values originates from theoretical works such as @jin2018q [@jin2020provably; @yang2020function], and has been extended to deep RL recently (e.g., @rashid2020optimistic).
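A minimal numeric sketch of this optimistic target, assuming a plain sum as the mixer and treating `q_next` as the already-maximized per-agent values; all counts and constants are illustrative.

```python
import math

def optimistic_target(r, gamma, q_next, n_sa, n_next, c_rew=1.0, c_boot=1.0):
    """y = (r + c_rew / sqrt(N(s,a)))
           + gamma * Mixer_i[ Q_i + c_boot / sqrt(N(s', a'_<=i)) ],
    with a simple sum standing in for the mixer and the max over a' assumed
    to have been taken already (q_next holds the greedy values)."""
    augmented = [q + c_boot / math.sqrt(n) for q, n in zip(q_next, n_next)]
    return (r + c_rew / math.sqrt(n_sa)) + gamma * sum(augmented)

# Rarely visited next state-actions (small n_next entries) inflate the target,
# keeping the learned Q-values optimistic long after the acting bonus decays.
y = optimistic_target(r=1.0, gamma=0.9, q_next=[1.0, 2.0],
                      n_sa=4, n_next=[4, 16], c_rew=1.0, c_boot=1.0)
```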
46
+
47
+ With the count-based optimism introduced, the complete learning algorithm is presented in [\[alg:uct-qlearning\]](#alg:uct-qlearning){reference-type="ref+label" reference="alg:uct-qlearning"}. During decentralized execution, the optimistic bonuses, which may have decayed to a negligible magnitude by then, are removed, and agents take independent actions according to the $Q_{i}^{idp}$'s only.
48
+
49
+ To apply COE to deep MARL tasks, we need to approximate counts in high-dimensional or continuous state space. In our experiments, we use the SimHash method [@tang2017exploration] that projects states to a lower-dimensional feature space before counting. We record the visitation count for the tuple of the state $s$ and all agents' joint action $\mathbf{a}$, denoted by $N(s, \mathbf{a})$. For each agent $i$, the count up to its action $a_i$ satisfies $N(s, a_{<i}, a_i) = \sum_{a_{i+1}} N(s, a_{<i}, a_i, a_{i+1}) = \sum_{a_{>i}} N(s, a_{<i}, a_i, a_{>i})$, where $a_{<i}$ and $a_{>i}$ denote the joint actions computed by preceding and subsequent agents of $i$, respectively. This relationship shows that we can obtain any count up to $a_i$ by summing up the counts of joint actions that share the same $a_{<i}$ at state $s$. Details about SimHash counting are presented in [\[apx:pseudo-count\]](#apx:pseudo-count){reference-type="ref+label" reference="apx:pseudo-count"}.
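The SimHash counting scheme and the prefix-count identity above can be sketched as below. The hash width, state, and helper names are illustrative, not from the paper's code; a fixed random Gaussian projection's sign pattern serves as the discrete key.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
k, state_dim = 8, 4                       # k hash bits (illustrative sizes)
A = rng.standard_normal((k, state_dim))   # fixed projection, as in SimHash
counts = defaultdict(int)

def simhash(state):
    """Discretize a continuous state via the sign pattern of a random projection."""
    return tuple(np.sign(A @ np.asarray(state, dtype=float)).astype(int))

def record(state, joint_action):
    """Count visits of (hashed state, full joint action) only."""
    counts[(simhash(state), tuple(joint_action))] += 1

def count_prefix(state, action_prefix):
    """N(s, a_<i): sum the stored joint-action counts sharing this prefix,
    using the identity N(s, a_<i, a_i) = sum_{a_>i} N(s, a_<i, a_i, a_>i)."""
    key = simhash(state)
    return sum(c for (h, a), c in counts.items()
               if h == key and a[:len(action_prefix)] == tuple(action_prefix))

s = [0.1, -0.3, 0.7, 0.2]
record(s, (0, 1)); record(s, (0, 0)); record(s, (1, 1))
```

Only the full joint-action counts need to be stored; every conditional count used by the acting and target bonuses is recovered by summation.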
50
+
51
+ :::: algorithm
52
+ ::: algorithmic
53
+ Initialize parameters $\boldsymbol{\phi}, \theta$
+ Visitation count $N(s,\mathbf{a}) \leftarrow 0, \forall (s, \mathbf{a}) \in \mathcal{S}\times \boldsymbol{\mathcal{A}}$
+ Replay buffer $\mathcal{D}\leftarrow \{\}$
+ Compute action $a^{(t)}_i$ according to [\[eq:ucb-act\]](#eq:ucb-act){reference-type="ref+label" reference="eq:ucb-act"}
+ $s^{(t+1)} \sim P(s'|s^{(t)},\mathbf{a}^{(t)}), \quad r^{(t)} = r(s^{(t)},\mathbf{a}^{(t)})$
+ $N(s^{(t)}, \mathbf{a}^{(t)}) \leftarrow N(s^{(t)}, \mathbf{a}^{(t)}) + 1$
+ $\mathcal{D}\leftarrow \mathcal{D}\cup \left\{ \left(s^{(t)}, \mathbf{a}^{(t)}, r^{(t)}, s^{(t+1)} \right) \right\}$
+ Perform a gradient update on [\[eq:td-err-ind\]](#eq:td-err-ind){reference-type="ref+label" reference="eq:td-err-ind"}
54
+ :::
55
+ ::::
2303.15493/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="Electron" modified="2022-02-26T11:34:48.354Z" agent="5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/13.0.3 Chrome/80.0.3987.163 Electron/8.2.1 Safari/537.36" etag="oEjotLimfmdGk9Bi1Lu3" version="13.0.3" type="device"><diagram id="AkcGM7fYatZbJHBGWld7" name="第 1 页">7Z1bc6M2FIB/jR+X4SIJ9Bg7yXam3e5Os9O9vOywNrHpYsvFJHH66yvMxRiJWGAknFiks8UCH9v6jo6Ojo7EyJkst+9jf734QGZBNLLN2XbkXI9sG0CT/psWPGcFtgeygnkczrIia19wF/4X5IX5++YP4SzYHNyYEBIl4fqwcEpWq2CaHJT5cUyeDm+7J9Hhp679ecAU3E39iC39Es6SRVbq2e6+/LcgnC+KT7YQzq4s/eLmXMRm4c/IU1a0+3HOzciZxIQk2dlyOwmitO6Keslq4LbhavnF4mCViLzh783v3z6uJ+Tm4/TrFt4T/8vij3f5z3j0o4f8B+dfNnkuauBpESbB3dqfpq+fKOSRM14ky4i+suipv1ln9X4fbgP6UeOYPKxm6dnu8iaJya9gQiIS78Q5joOQaaZvjKc5bljeV1SxnZYU9ZWKmcf+LKQ/tBC0Iiv6zvF9GEUV2bd2+kfL2brJq+sxiJNgWynK6+p9QJZBEj/TW/KrDsKGDbN35boLPMfIS572uuDkmrqoqEFR5ufaNy/F7wHRk5xRC17ecV77+k9r+Qi9KVmG0/zWQ1R5DR9ysdg6H+Mb8wo2Iyoxmj1x8ThcgAFdBgxmuWBJWPBxLMFqdpXao33NzPzNomwnFSZMkzF3BxdGY4UGswO7xlZnpaIgR4OLsjiI/CR8PLSGvPrLP+ETCek32dNyTYYWLltRIWdDHuJpkL+1asJq0oApIi3x43mQMNJ2YMsK6M666Jk0bAHY0HR6Q83Ikg3a0qAbQLM2GFp2N9Bce26rBW1r0E0tukfQrCzloJ3joHt2dxFynFfq7kIEh3Z3LXAcmGJ/9/Z2DF1Tob/L6VQ9HhiF7q4FtcF8wQUCuDy8Q3AuPMn1bSNYtilFWgU6qYAnRwHqYmXjFwgcXSj+nT9rlod1yAmDbviPiEVALX6BONSF4nfl4H9ZrGr8AvGunv1oeI3xNXgNfrTHjHJsa2g/2haIWSn2oyeTW3qo9KM9o8bF4XHBRdmBI133rvojo4NMTSaPMwGzJ9bagGIRaZLtZsFWwxaBDaz+UNdlyQYtEGi6UNAe2z9CsyNojizbVAtaIEB1oaDdHkFzZKkG3S7kNY38zSb1kLrgfdEVPRO8oJgGLTtTZABzH57AbjfUADFy1XIWiGv1PkWQHm2HNkxYOnen78neld6l26RHPy40QLjuQltW2TAF9a53D9rRfW1TW3IZXqizA80RZqn2nx3d2zbQgWaPqDnC1KPWU0yNPSRDx+6O2hIQJhu1nkoSRw37RM0Kk+1dqc83bh84Nl+IEMv0rtgO9gwCx+eXcKw6cAw8bLjnGDnWSceNrqrZHOpvbTZtAWGye0idcizeQ3YNG3P6R8VRY0fPBTVhxpzhTTfMHFGWYsy2xtzAxusPMytKNeZ2KVKXFzHGnuGalaNrYKoeMaZyba8SiVYbQHbapUZdHHbomUZPcwOpqHqOs2y66jOfXvP0gMtbMKt0dgBon0o4ygttpxzp9hAzdllpklsn0J6V+DjJOwE2O1TiSJMNW4eNxeMf+ATYbACEI002bJ1wLjyeTc243dWZZkfHLitNNmyBad7BdyUZapaANeR48F1JgMBk7RvfloQTb8Bg2FWaQK/REvaGoNndZrLeEE+abJupZ4OEI4EndZBsOx+gg9TzQcJZaqc
MajlT8gMManV+o7gZt/oc1PKkSYYNBSIYOr5YAoK8jTCUxhehbp3CxhOZsEdTnMJX3Tp1yEncFOPusDmmmCNNMmyknSxx2B7sc/jEkSYbtkC/e6GwmQFPasb7Gz6lZlw17HZ99s+ITH99XoQrHuSqg1Tzie5h+veiy0U/5TaMColUA+Lnr6MsJrV78S19YcDi5fW2evH6eXQkfJXV40s1Ac5K12ARoawsKuy4fyjER0XJ1jKRxUpRFK43wfGAKOPUNzIX96eZKqosp67wthAvr3lf2rtLjXjRZBQl+ejioAbRvw+kuPBusxu9XNEbLLDe7i/Ss3n2/3EhiH6xTFZxpQaGVmHykk3nbCSTF/lROKeW4npKwQS0fJwCCad+dJVfWIazWdSE/DBK3neQGmLbKNLSyig1L5XdskpvqwodMD5Yf9DbudZSbXI3K3xoy9+uTe66spuaG6OS7WfVNrCDlnIHv938iFa5oVQOeh33jTimcsioJ53KVjleGkPLrs3md20g/Q9O1uEI3vg/Ru74zx+bkXudX7jsbq+eNwpNrqPD7/QsIM3TcXkhBhnqYGs1oNiBaRTj/NLl5T45RL374/LSVGVogqU1gTUIAKKzcIJdgSjUpc7+ILfGzB1+6xlXJ100eF6oMGjsMLOtE+faR0VJ9tq8dvHCN7+kB5mIyXVDnUPCCLLSoOGBiuOuNnDnDbEFWNunhAyUforqMUO+FVaaferqTUWaeXn1GO/gW4oUDwDV2d0CTg4+g+ald+Jq8kzqK5q9E5yco6Ik93quzm0RxAzZzfC6YuaIko1Z56s1+aG1rhJ231WP6XWVD1lckcnni8TMtObumAUMg2zM57c929U4/RspW3jFjEhwXgXDLLry9H4DTa3FZIxix2ZXF+TWBMmOD9gacUO/hxnb2rEDZVKT1CKG6kMKryYExE/hHHiQCs+vI1T9oFiA6/s0VJ4KOkxnCLUX2pSZwmyXc8riGM5WPgMsjtFxIvH1EqjXxTEcabJh65nPJtjM5jupGe5v8x2eNNlZa3pM0wSb2S7npMUx7FY+6hfHeHrXkBbh/q6oGVkuK0s2aL2lVmPySn8Bf3PoEGERitaYj2I2O0YrmDhwXZBsxNoZE0ScmuziIWInT+pwZEkGjXVbbgGnc0JaXZb6vhlrj1t8FqHM4D856YIjqz/Q9s+HqxkKyc/Pn6wv5K/Jd/jrn3enp/3bHi/t3zCMV5ziP08zaPPzaiCWUAlhktLC5k5CQlWJrIrXjbrbJrht1x/2BB1UFlV0HHB03G3WZtE4KldLbK0lZ6UljmvXn0d9BlrCy+zJ4G7W/kpESyyTpyUTslyTDUVD77hbhPcJxUHPaA1ugilZPVZUKPugN6dCPagMgnY9tuNwVyF6nCVH9WedC6gMfRmTFOO+t6K1sfhAZkF6x/8=</diagram></mxfile>
2303.15493/main_diagram/main_diagram.pdf ADDED
Binary file (35.3 kB). View file
 
2303.15493/paper_text/intro_method.md ADDED
@@ -0,0 +1,107 @@
1
+ # Introduction
2
+
3
+ 3D deep learning on point clouds [6, 12, 25, 27] has been adopted in a wide variety of downstream applications including autonomous driving, AR/VR, and robotics due to its strong discriminative power and generalization ability. In these applications, real-time interaction and fast
4
+
5
+ <span id="page-0-0"></span>![](_page_0_Picture_8.jpeg)
6
+
7
+ Figure 1. Demonstration of sparse convolution and the proposed shifted sparse convolution. (a) Sparse convolution only operates when the center of kernel slides over the active sites. (b) Our shifted sparse convolution performs different operations for each group of output channels, which brings more information from the neighbor active sites.
8
+
9
+ response are required to guarantee safety and practicality.
10
+
11
+ Submanifold sparse convolution (we call it "sparse convolution" for short in the rest of this paper) [12] is one of the most popular and basic operators for point cloud analysis, which first voxelizes the point clouds and then applies 3D convolution on the voxels while keeping the same sparsity pattern throughout the layers of the network. Sparse convolution is widely adopted in most state-of-the-art architectures for point cloud analysis, so it is desirable to further improve its efficiency for more practical applications. We opt for architecture-agnostic methods such as network binarization to achieve this goal. Binarized neural networks [19,36] restrict the bitwidth of weights and activations to only one bit and substitute multiplication-addition with xnor-bitcount operations, which decreases the
12
+
13
+ <sup>\*</sup>Corresponding author.
14
+
15
+ <span id="page-1-0"></span>storage and computational cost by 32× and 64×, respectively. We empirically find that the sparse convolution operation incurs larger quantization errors than standard convolution, which leads to significant performance degradation when existing network binarization methods are applied directly.
16
+
17
+ In this paper, we present BSC-Net to learn binary sparse convolutional networks for efficient point cloud analysis in resource-constrained scenarios. Instead of directly binarizing the weights and activations in sparse convolutional networks, we search for the optimal subset of convolution operations that activates the sparse convolution at various locations for binarization. The acquired convolution patterns significantly reduce the quantization errors in deployment and achieve remarkable performance enhancement without extra computational cost. More specifically, we propose shifted sparse convolutional networks whose convolution operations are activated for active sites consistent with pre-defined locations, and the optimal positions for active-site matching across various channels are obtained via a differentiable search strategy. Therefore, the quantization errors of the fixed convolution operations are significantly alleviated by leveraging the shifted sparse convolution with the searched active-site matching locations. Moreover, we empirically select the recent advances that are beneficial for sparse convolutional network binarization to construct a strong baseline. Extensive experimental results on ScanNet and NYU Depth v2 for semantic segmentation of point clouds show that our BSC-Net reduces the operations (OPs) by 92.4% with only 3% mIOU degradation.
18
+
19
+ # Method
20
+
21
+ In this section, we first briefly introduce the preliminary concept of sparse convolution and network binarization. Then we conduct experiments to show the quantization errors of network binarization methods in different convolution patterns, and introduce the shifted sparse convolution (SFSC) operation which is activated for sites in various locations of the receptive field. Finally, we demonstrate the differentiable search to discover the optimal position for active site matching in SFSC, and construct the BSC-Net with alleviated quantization errors and enhanced performance.
22
+
23
+ Let $\mathbf{x_u}$ be an input feature vector of an active site, located at 3-dimensional coordinates $\mathbf{u} \in \mathbb{R}^D$ . As shown in Figure 1(a), the general sparse convolution [6, 12] $F_0$ by a kernel for $\mathbf{x_u}$ is formulated as:
24
+
25
+ $$F_0(\boldsymbol{W}, \mathbf{x_u}) = \sum_{\mathbf{i} \in N^D(\mathbf{u})} \boldsymbol{W_i} \mathbf{x_{u+i}}$$
26
+ (1)
27
+
28
+ where $N^D(\mathbf{u})$ denotes the list of offsets in the 3-dimensional cube centered at origin $\mathbf{u}$ . The convolution kernel can be broken down and assigned to each offset, parameterized by $\mathbf{W_i}$ .
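Eq. (1)'s gather over active sites can be sketched minimally in 2D; a dict stands in for the sparse tensor, and the coordinates, window size, and scalar features are all illustrative.

```python
def sparse_conv(active, weights):
    """Submanifold sparse convolution sketch.
    active:  {coord: feature} for the active sites only.
    weights: {offset: scalar} over the kernel window."""
    out = {}
    for u in active:
        acc = 0.0
        for offset, w in weights.items():
            v = (u[0] + offset[0], u[1] + offset[1])
            if v in active:        # inactive neighbours contribute nothing
                acc += w * active[v]
        out[u] = acc               # outputs only at active sites: the
    return out                     # sparsity pattern is preserved

# Three active sites; (5, 5) is isolated from the other two.
active = {(0, 0): 1.0, (0, 1): 2.0, (5, 5): 3.0}
weights = {(di, dj): 1.0 for di in (-1, 0, 1) for dj in (-1, 0, 1)}
y = sparse_conv(active, weights)
```

Because the output is computed only where the kernel center covers an active site, the active-site set never grows through the layers, which is exactly the property that keeps the network cheap.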
29
+
30
+ Sparse convolution is a practical substitute for vanilla 3D convolution: it skips non-active regions and only operates when the center of the convolutional kernel covers an active voxel. Specifically, active voxels are stored as sparse tensors for the fixed convolution operations, where all active synapses between input and output voxels are found to perform convolution. Therefore, the memory requirement and computational cost are significantly reduced in sparse convolutional networks. To further reduce the complexity during inference, network binarization can be leveraged for
31
+
32
+ <span id="page-2-0"></span>![](_page_2_Figure_8.jpeg)
33
+
34
+ Figure 2. Sign correspondence of activations for the first binary layer when binarizing convolutional network, sparse convolutional network and shifted sparse convolutional network for point cloud segmentation on ScanNet dataset. All networks share the same kernel weights. We sort x-axis (different patterns of sparse convolution) by their sign correspondence for better visualization.
35
+
36
+ weight and activation quantization. In a 1-bit sparse convolutional layer, both convolutional kernels and activations are binarized to -1 and +1. In this way, the time-consuming floating-point matrix multiplication can be replaced by bitwise XNOR and popcount operations:
37
+
38
+ $$A_b^l = sign(popcount(XNOR(W_b^l, A_b^{l-1})))$$
39
+ (2)
40
+
41
+ where $A_b^l$ and $W_b^l$ represent the binarized activations and weights in the $l$-th layer respectively, and $W_b^l$ is defined as the binarized version of the real-valued latent weights $W_r^l$ via $W_b^l = sign(W_r^l)$ .
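The XNOR-popcount identity behind Eq. (2) can be checked on ±1 vectors: with both operands in {-1, +1}, the dot product equals (matches − mismatches) = 2·popcount(XNOR) − n. This sketch uses unpacked sign vectors rather than real bit-packing, purely for clarity.

```python
import numpy as np

def binary_dot(w_b, a_b):
    """Dot product of two {-1, +1} vectors via XNOR + popcount."""
    xnor = (w_b == a_b)              # XNOR on the sign "bits"
    popcount = int(np.sum(xnor))     # number of matching positions
    return 2 * popcount - len(w_b)   # equals np.dot(w_b, a_b)

w = np.sign(np.array([0.3, -1.2, 0.7, -0.1]))  # binarized latent weights
a = np.array([1, 1, -1, -1])                   # binarized activations
```

In a real deployment the ±1 values are packed into machine words so the XNOR and popcount each cover 32 or 64 positions per instruction, which is where the claimed speedup comes from.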
42
+
43
+ Since the fixed operation in sparse convolution is only activated when the central input in the receptive field is active, the constrained exploration of the neighbor active sites makes sparse convolutional networks less robust to binarization. To show this, we calculate the sign correspondence (the proportion of activations in the binary network that have the same signs as the corresponding real-valued activations, which can measure the quantization error as proved in [29]) for a convolutional network and a sparse convolutional network with inputs from the ScanNet dataset. We choose the activations of the first binary layer to avoid the accumulation of quantization errors and adopt the same kernel weights for both networks. As shown in Figure 2, the sign correspondences for the convolutional layer and the sparse convolutional layer are 63.1% and 58.4% respectively, which confirms that sparse convolution brings larger quantization errors than standard convolution.
44
+
45
+ However, it is infeasible to adopt convolutional layers in point cloud analysis networks for reducing quantization errors due to the large computational cost from growing active sites. As an alternative, we try to explore subsets of the convolution operation. For a single active site, a $3 \times 3 \times 3$ convolution kernel will operate 27 times while the sparse convolution kernel only operates at the center. What if we keep the same number of operations as sparse convolution but operate at
46
+
47
+ <span id="page-3-3"></span>other locations? To answer this, we extend sparse convolution to enable it to be activated at different locations. Here we propose the shifted sparse convolution (SFSC) shown in Figure 1(b), which is defined as:
48
+
49
+ $$F_k(\mathbf{W}, \mathbf{x_u}) = \sum_{\mathbf{i} \in N^D(\mathbf{u} + s_k)} \mathbf{W_i} \mathbf{x_{u+i}}$$
50
+
51
+ $$s_k \in \mathbb{R}^3, \ k \in \{1, 2, ..., n_s\}$$
52
+ (3)
53
+
54
+ where $\mathbf{u} + s_k$ is the center of the shifted cube instead of $\mathbf{u}$ . $N^D(\mathbf{u} + s_k)$ then comprises the offsets in the shifted cube w.r.t. $\mathbf{u}$ . $n_s$ is the number of all unique shifts. For example, for a $3 \times 3 \times 3$ sparse convolution operation, there are up to $3^3 - 1 = 26$ possible shifts.
55
+
56
+ A general sparse convolution operation conducts convolution only when the kernel center overlaps with active sites, while in our SFSC operation the kernel center can shift to any other location of the kernel. We use $\mathbf{F}_{n_s} = \{F_0, F_1, F_2, ..., F_{n_s}\}$ to represent the set of all SFSC operations. Note that we consider the general sparse convolution as a special case of SFSC $(F_0)$ . In an SFSC layer, instead of applying the same sparse convolution operation for all output channels as in a general sparse convolutional layer, we uniformly divide the output channels into several groups (namely channel groups), each with a specific SFSC operation. It can be formulated as:
57
+
58
+ $$y = \text{concat}(f_1(\mathbf{W}_1, x), ..., f_{n_g}(\mathbf{W}_{n_g}, x)), \ f_i \in \mathbf{F}_{n_s}$$
59
+ (4)
60
+
61
+ where x and y are the input and output of this layer. $n_g$ indicates the number of channel groups. $W_i$ refers to the weights for the i-th SFSC operation. The outputs of all SFSC operations are concatenated along the channel dimension, resulting in a tensor with the same shape as the output of a general sparse convolutional layer.
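Eqs. (3)-(4) can be sketched by reusing the same sparse gather with a per-group shifted window and concatenating the group outputs. The shifts, sites, and weights here are illustrative, shown in 2D for brevity.

```python
def shifted_sparse_conv(active, weights, shift):
    """SFSC sketch: gather a window centered at u + shift, but still emit the
    output at the active site u, keeping the sparsity pattern unchanged."""
    out = {}
    for u in active:
        center = (u[0] + shift[0], u[1] + shift[1])
        acc = 0.0
        for offset, w in weights.items():
            v = (center[0] + offset[0], center[1] + offset[1])
            if v in active:
                acc += w * active[v]
        out[u] = acc
    return out

active = {(0, 0): 1.0, (2, 0): 4.0}
weights = {(di, dj): 1.0 for di in (-1, 0, 1) for dj in (-1, 0, 1)}

# Two channel groups: the unshifted F_0 and a group shifted by s_1 = (+1, 0).
groups = [shifted_sparse_conv(active, weights, s) for s in [(0, 0), (1, 0)]]
y = {u: [g[u] for g in groups] for u in active}  # concat along channels
```

Note how the shifted group reaches the neighbour at (2, 0) from (0, 0), which the centered window cannot, without performing any extra multiply-adds.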
62
+
63
+ We randomly sample 50 shift configurations for SFSC layers and compute the sign correspondence, which is shown in Figure 2. It can be seen that different SFSC layers vary a lot in quantization errors, and a proportion of them are more robust to binarization than the sparse convolutional layer. In other words, if we can find the (near) optimal configurations for all SFSC layers in a network, the quantization error can be reduced without additional computational cost.
64
+
65
+ Due to the huge design space of the shift operation, it is infeasible to manually decide an optimal configuration for the whole network: the shifted channels and shift directions may differ in each layer, and the total number of possible architectures will be $(8^4)^{13} = 9.1 \times 10^{46}$ for a network with 13 SFSC layers, each with 4 channel groups and 8 available shift directions. Although a manually designed BSC-Net, which shares the same shift strategy in all SFSC layers, is able to reduce the impact of binarization on the
66
+
67
+ <span id="page-3-1"></span>![](_page_3_Figure_9.jpeg)
68
+
69
+ Channel Groups
70
+
71
+ Efficient Search for Optimal Shifts
72
+
73
+ Figure 3. Demonstration of our efficient search method for shift operation. For each SFSC layer and each channel group, we combine all the shift operations in the search space into a $5 \times 5 \times 5$ sparse convolution and assign each direction with a soft selector indicating the importance of the corresponding shift operation, which enables us to directly search the best shift operations via end-to-end gradient descent. $\oplus$ stand for summation.
74
+
75
+ network performance, we resort to automatic architecture search for better performance. In this section, without further explanation, the default kernel size for the original sparse convolution and SFSC is $3\times3\times3$ .
76
+
77
+ In our BSC-Net, the optimal shift direction for each channel group and each layer may differ. Thus the problem is to search for the optimal shift direction for each channel group in each SFSC layer. We formulate this by searching for the optimal $f_i$ in Eq. (4):
78
+
79
+ <span id="page-3-2"></span>
80
+ $$f_{i} = \sum_{j=1}^{n_{s}} o_{ij}^{a} F_{j}, \ i \in \{1, 2, ..., n_{g}\}$$
81
+ s.t.
82
+ $$\sum_{j} o_{ij}^{a} = 1, \ o^{a} \in \{0, 1\}.$$
83
+ (5)
84
+
85
+ where $o^a$ is a binary selector of the shift direction. As searching in a discrete space makes it hard to optimize the choices, we reformulate the discrete search space as a continuous one by switching $f_i$ to a composite function $f_i^*$ :
86
+
87
+ <span id="page-3-0"></span>
88
+ $$f_i^* = \sum_{j=1}^{n_s} \pi_{ij}^a F_j, \ i \in \{1, 2, ..., n_g\}$$
89
+ s.t. $\pi^a \in [0, 1], \ \pi_{ij}^a = \frac{1}{1 + \exp(-\alpha_{ij})}$
+ (6)
90
+
91
+ where the constraints on weight $\pi^a$ are eliminated by introducing a set of real architecture parameters $\{\alpha_{ij}\}$ . This sigmoid relaxation [7] will not introduce competition among different SFSC operations as in softmax relaxation [3], which we find to be a better way to search for BSC-Net. In this way, the composition of SFSC operations are learned by gradient descent in the space of continuous real parameters $\{\alpha_{ij}\}$ , which can be optimized end-to-end efficiently.
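The sigmoid relaxation can be sketched as a soft mixture over hypothetical candidate outputs; `alphas` play the role of the learnable architecture parameters $\{\alpha_{ij}\}$, and the candidate values are made up.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def composite_op(candidate_outputs, alphas):
    """f_i* = sum_j sigmoid(alpha_ij) * F_j: each candidate gets an independent
    selector in [0, 1], so candidates do not compete as they would under softmax."""
    pis = [sigmoid(a) for a in alphas]
    return sum(p * o for p, o in zip(pis, candidate_outputs)), pis

# Three hypothetical SFSC candidate outputs at one site, with learned alphas:
# the first is strongly selected, the second suppressed, the third undecided.
out, pis = composite_op([1.0, 2.0, 3.0], alphas=[4.0, -4.0, 0.0])
```

Since the whole mixture is differentiable in the `alphas`, the selection can be trained end-to-end by gradient descent alongside the network weights.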
92
+
93
+ <span id="page-4-2"></span>However, according to (6), the computation and memory increase linearly with the size of the search space. All available SFSC operations need to be conducted in the weighted summation $f_i^* = \sum_{j=1}^{n_s} \pi_{ij}^a F_j$. Moreover, each SFSC layer has its own architecture parameters, increasing the difficulty of network optimization. To this end, we propose an efficient search method, which absorbs all the operations in the search space into a larger sparse convolution, as shown in Figure 3.
94
+
95
+ In this way, we convert the SFSC layer into a $5\times5\times5$ composite sparse convolutional layer, which is used to construct a supernet. This enables us to efficiently search the optimal architecture parameters by end-to-end optimization, regardless of the size of the search space. However, it should be clarified that although the size of the search space will not affect the computational efficiency of the supernet, a large search space will make the optimization of the architecture parameters hard to converge, thus deteriorating the final performance.
96
+
97
+ Once the supernet has converged, the optimal BSC-Net must be derived by discretizing the soft selector variables $\pi^a$ of (6) into the binary selectors $o^a$ required by (5). In order to make sure the performance of the supernet can precisely reflect the capability of BSC-Net, we constrain $\pi^a$ in each SFSC layer by a confidence loss:
98
+
99
+ <span id="page-4-1"></span>
100
+ $$L_c = -\frac{1}{n_g \cdot n_s} \sum_{i}^{n_g} \sum_{j}^{n_s} |\pi_{ij} - 0.5|$$
101
+ (7)
102
+
103
+ which pushes $\pi^a$ to discrete values.
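The confidence loss of Eq. (7) is straightforward to sketch; the selector values below are illustrative.

```python
def confidence_loss(pi):
    """L_c = -(1 / (n_g * n_s)) * sum_ij |pi_ij - 0.5|.
    pi is an n_g x n_s grid of soft selectors; selectors far from 0.5
    (i.e. near-discrete choices) make the loss more negative."""
    n_g, n_s = len(pi), len(pi[0])
    total = sum(abs(p - 0.5) for row in pi for p in row)
    return -total / (n_g * n_s)

undecided = [[0.5, 0.5], [0.5, 0.5]]    # maximally ambiguous selectors
confident = [[0.99, 0.01], [0.95, 0.05]]  # near-discrete selectors
```

Minimizing this alongside the task loss pushes the soft search toward the 0/1 selectors needed when the final BSC-Net is derived, shrinking the gap between supernet and discretized performance.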
104
+
105
+ **Optimization approach:** In order to decouple the weights and architecture parameters for robust learning [3], we adopt an alternating optimization approach: 1) fix the $\{\alpha_{ij}\}$ and optimize $\{W_i\}$ ; 2) fix $\{W_i\}$ and update $\{\alpha_{ij}\}$ .
106
+
107
+ When we derive the BSC-Net from a converged supernet, both weights and architecture parameters need to be considered. Here we find the following strategy works best: we first train the supernet with binary weight and activation to search for the optimal architecture parameters, from which we choose the shift directions with the highest architecture parameters. Then we initialize the searched BSC-Net with the weights from the supernet and follow the same training procedure as our baseline (introduced in Section 4).
2304.06306/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2304.06306/paper_text/intro_method.md ADDED
@@ -0,0 +1,55 @@
1
+ # Introduction
2
+
3
+ Recent years have witnessed the great success of large-scale pretrained language models [\[8,](#page-8-0)[31,](#page-9-0)[32\]](#page-9-1) and visual models [\[6](#page-8-1)[,10,](#page-8-2)[23,](#page-8-3)[39\]](#page-9-2), leading to a surge of pretrained multimodal models [\[13,](#page-8-4) [14,](#page-8-5) [43,](#page-9-3) [47,](#page-9-4) [48\]](#page-9-5) that try to align different modalities. Many prior methods utilize finetuning to update the entire set of model parameters for every target cross-modal task. Although finetuning can achieve good performance, it incurs a large computational cost, since the gradients and optimizer states for all parameters of the multimodal model have to be stored. This has encouraged researchers to propose methods that are more parameter-efficient than finetuning for multimodal learning.
4
+
5
+ <span id="page-0-0"></span>![](_page_0_Figure_8.jpeg)
6
+
7
+ Figure 1. Comparison over three multimodal classification tasks. We compare our proposed PMF and PMF-Large with multiple finetuning (yellow) and prompt-based (purple) methods. The y-axis is the average score of three tasks, and the x-axis is the maximum GPU memory usage during training.
8
+
9
+ More recently, prompt tuning [\[17,](#page-8-6) [19,](#page-8-7) [21,](#page-8-8) [22,](#page-8-9) [29\]](#page-9-6) has been proposed to address this problem by freezing all parameters of a pretrained model while tuning only continuous prompts. Specifically, it adds trainable continuous prompts to the original token sequences of the input data, and during training only the continuous prompts are updated. For multimodal prompt-based learning, a recent method [\[20\]](#page-8-10) proposes to disentangle the functionality of the pretrained model, which exhibits high flexibility. Although this method significantly reduces the number of tuned parameters (*e.g*., less than 0.1% of the pretrained model), there still exists a large performance gap between it and finetuning-based methods. In addition, this method adopts a sequential modular structure in which the pretrained image transformer model is followed by a language transformer model, which causes two main problems in cross-modal learning: one-way path learn<span id="page-1-0"></span>ing and a significant increase in the number of model layers. Specifically, one-way path learning in a multimodal model usually forces one modality to align with the others, but not vice versa. In this way, cross-modal learning over multiple modalities is not fully explored due to the missing mutual alignments. Moreover, since the prompts are added to the token sequences of the input data and are updated during training, they require extensive gradient calculations in the backward propagation, which consumes substantial memory. As a result, this kind of method does not reduce training memory usage by much (up to 20%), even though it reduces the number of parameters to update. In other words, this parameter-efficient method still requires massive computational resources, which prevents it from being applied in many real-world applications.
10
+
11
+ To address these issues, we propose a Prompt-based Multimodal Fusion method with a high memory efficiency, namely PMF. Firstly, we present a new form of modular multimodal fusion framework which demonstrates high flexibility and facilitates a two-way interaction among different modalities. Specifically, we adopt a two-stream structure where the pretrained language model and image model construct the multimodal model in a parallel way. Therefore, tokens of different modalities can learn mutual interactions through a cross-attention-like operation. Such a parallel modular structure brings two benefits. First, unimodal pretraining can be directly utilized for multimodal learning through a parallel combination, eliminating the need for paired multimodal datasets that can be expensive to construct. Also, the type of image or language model can be changed easily (*e.g*., replacing BERT with T5 for text generation tasks). Furthermore, incorporating extra modalities is made possible based on the parallel modular structure.
12
+
13
+ Moreover, we propose to leverage three types of interactive prompts (*i.e*., query prompts, query context prompts, and fusion context prompts) in order to dynamically learn different objectives for multimodal learning. Intuitively, the query context prompt and query prompt can be seen as a pair of 'questions' and 'answers' with an aim of extracting necessary information for exchange between two modalities. After being translated by a non-linear mapping 'translator', the 'answer' is then delivered to the other modality for better cross-modal understanding. Finally, the fusion context prompts then provide the context to the delivered answer to facilitate the fusion.
14
+
15
+ Last but most importantly, PMF is a memory-efficient method that significantly reduces the memory requirements for the large pretrained model. Considering that calculating gradients for prompts for back-propagation is memory-consuming, we propose to add prompts only on the deep layers of the utilized unimodal transformers. Therefore, instead of passing through the entire multimodal model, the backward propagation only needs to pass through the few deep transformer layers to reach all trainable parameters, greatly reducing the training memory usage. We conduct extensive experiments to demonstrate its superiority. As a result, PMF enables large pretrained models to be trained on GPUs with a low memory requirement.
16
+
17
+ We conduct extensive experiments on three vision-language datasets: UPMC-Food101 [\[38\]](#page-9-7), MM-IMDB [\[2\]](#page-8-11), and SNLI-VE [\[41\]](#page-9-8). Through comparisons with multiple finetuning and prompt tuning methods (see Fig. [1\)](#page-0-0), we find that: (1) PMF is the most memory-efficient method for cross-modal learning so far, reducing the training memory usage by up to 66% compared with finetuning baselines and by 55% compared with prompt-based methods. (2) PMF performs comparably to prior finetuning methods with far fewer trainable parameters (less than 2.5%) and much lower memory usage.
18
+
19
+ Concretely, our contributions are as follows: (1) we present a new form of modular multimodal fusion framework which enables two-way interactions between different modalities and high flexibility of the entire model; (2) we disentangle vanilla prompts into three types of prompts, in order to dynamically learn different objectives for multimodal learning; (3) our proposed method is quite memory-efficient yet is able to achieve performance comparable to existing finetuning methods for multimodal fusion.
20
+
21
+ # Method
22
+
23
+ We report the performance of several baselines and existing methods. First, we report the performance of finetuning unimodal models (*i.e.* BERT [\[8\]](#page-8-0), ViT [\[9\]](#page-8-24)) to verify the effectiveness of multimodal fusion. Specifically, we take the output representation of CLS token of the last layer in ViT and BERT, and feed it into a linear classifier. We also report the performance of VPT [\[12\]](#page-8-20) and a prompt-based BERT (denoted P-BERT) for a better comparison. For VPT
24
+
25
+ <span id="page-5-2"></span><span id="page-5-1"></span>
26
+
27
+ | Method | Updated Param. (Million) | Memory Usage (GB) Train/Inference | SNLI-VE | Food-101 | MM-IMDB | Avg. |
+ |--------|--------------------------|-----------------------------------|---------|----------|---------|------|
30
+ | Linear | - | 3.76 / 3.23 | 50.05 | 78.96 | 49.76 / 56.83 | 60.77 |
31
+ | ViT | 86.5 | 9.36 / 1.99 | 33.33 | 74.69 | 38.39 / 49.88 | 50.72 |
32
+ | BERT | 109.0 | 30.82 / 2.79 | 69.82 | 87.44 | 58.91 / 64.31 | 72.96 |
33
+ | LateConcat | 196.0 | 38.54 / 3.36 | 70.01 | 93.29 | 59.56 / 64.92 | 75.18 |
34
+ | MMBT∗ | 196.5 | 37.87 / 3.48 | 74.69 | 94.10 | 60.80 / 66.10 | 77.41 |
35
+ | MBT∗ | 196.0 | 38.00 / 4.06 | 74.02 | 93.56 | 59.60 / 64.81 | 76.60 |
36
+ | VPT | - | 6.12 / 2.01 | 33.33 | 72.55 | 35.22 / 44.49 | 48.58 |
37
+ | P-BERT | - | 28.13 / 2.99 | 63.28 | 81.07 | 48.67 / 54.58 | 65.33 |
38
+ | PromptFuse | - | 29.57 / 3.55 | 64.53 | 82.21 | 48.59 / 54.49 | 66.09 |
39
+ | BlindPrompt | - | 29.57 / 3.65 | 65.54 | 84.56 | 50.18 / 56.46 | 67.81 |
40
+ | P-LateConcat | 0.3 | 30.82 / 3.43 | 63.05 | 89.03 | 53.91 / 59.93 | 69.67 |
41
+ | P-MMBT | 0.9 | 30.90 / 3.48 | 67.58 | 86.58 | 52.95 / 59.30 | 70.10 |
42
+ | PMF (M=4, Lf=10) | 2.5 | 12.84 / 4.08 | 71.92 | 91.51 | 58.77 / 64.51 | 75.02 |
43
+ | PMF-large (M=4, Lf=22) | 4.5 | 18.44 / 6.42 | 72.10 | 91.68 | 61.66 / 66.72 | 75.99 |
44
+
45
+ Table 2. Multimodal classification performance. PMF achieves performance comparable to the finetuning baselines with less than 3% of their trainable parameters while saving up to 66% of training memory. MM-IMDB results are F1-Macro / F1-Micro; the others are accuracy. We report the maximum memory usage when training and evaluating on UPMC Food-101 for each method, and mean performance over 3 runs with different random seeds. '-' means fewer than 0.1 M trainable parameters. PMF-large uses bert-large and vit-large models (24 hidden layers) while the others use bert-base and vit-base models (12 hidden layers). M is the prompt length and L<sup>f</sup> is the starting fusion layer.
46
+
47
+ and P-BERT, the input sequence to each transformer layer is concatenated with a prompt vector whose length is set to 10. The concatenated prompt vectors and the final linear classifier are the only modules updated during training.
48
+
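The per-layer prompt concatenation used by VPT and P-BERT can be sketched as follows. This is an illustrative Python sketch with hypothetical names, operating on token lists rather than tensors; in the actual setup only the prompts and the classifier would receive gradients.

```python
# Illustrative sketch of deep prompt tuning (hypothetical helper names):
# at every transformer layer, 10 learnable prompt vectors are prepended
# to the token sequence; the backbone stays frozen.

PROMPT_LEN = 10  # prompt length used for VPT and P-BERT

def prepend_prompts(tokens, prompts):
    """Concatenate layer-specific prompt vectors before the token sequence."""
    assert len(prompts) == PROMPT_LEN
    return prompts + tokens

def forward(layers, tokens, layer_prompts):
    """Run a stack of layers, inserting fresh prompts at each depth."""
    for layer, prompts in zip(layers, layer_prompts):
        x = prepend_prompts(tokens, prompts)
        x = layer(x)
        tokens = x[PROMPT_LEN:]  # drop prompt positions before the next layer
    return tokens

# Toy run: identity "layers" over a 5-token sequence, 12 layers deep.
identity = lambda x: x
tokens = [f"tok{i}" for i in range(5)]
layer_prompts = [[f"p{l}_{j}" for j in range(PROMPT_LEN)] for l in range(12)]
out = forward([identity] * 12, tokens, layer_prompts)
print(len(out))  # 5: the original sequence length is preserved
```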
49
+ In addition, we compare against a strong baseline, denoted LateConcat, which concatenates the output features of the CLS tokens of ViT and BERT and feeds the concatenated feature to a linear classifier. In this case, the input to the classifier is (768 + 768)-dimensional. We also introduce Linear, which shares the same architecture as LateConcat and differs only in the updated modules: Linear updates only the linear classifier, while LateConcat updates all parameters during training.
50
+
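The shared LateConcat/Linear architecture can be sketched as follows, a pure-Python illustration (not the actual implementation) assuming the standard 768-d CLS features of the base models:

```python
# Minimal sketch of the LateConcat baseline: concatenate the 768-d CLS
# features from ViT and BERT and feed the 1536-d result to a linear
# classifier. Illustrative only; real code operates on framework tensors.
import random

DIM = 768

def late_concat(vit_cls, bert_cls):
    assert len(vit_cls) == DIM and len(bert_cls) == DIM
    return vit_cls + bert_cls  # (768 + 768)-dimensional input

def linear_classifier(feat, weights, bias):
    # logits[c] = sum_j W[c][j] * feat[j] + b[c]
    return [sum(w * f for w, f in zip(row, feat)) + b
            for row, b in zip(weights, bias)]

rng = random.Random(0)
vit_cls = [rng.uniform(-1, 1) for _ in range(DIM)]
bert_cls = [rng.uniform(-1, 1) for _ in range(DIM)]
feat = late_concat(vit_cls, bert_cls)
num_classes = 3
W = [[rng.uniform(-0.01, 0.01) for _ in range(2 * DIM)] for _ in range(num_classes)]
b = [0.0] * num_classes
logits = linear_classifier(feat, W, b)
print(len(feat), len(logits))  # 1536 3
```

Linear and LateConcat share exactly this forward pass; they differ only in which modules receive gradient updates.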
51
+ We reimplement MMBT (denoted MMBT<sup>∗</sup>) [\[15\]](#page-8-13) and MBT (denoted MBT<sup>∗</sup>) [\[27\]](#page-8-12) with a vit-base model as the vision encoder and a bert-base model as the text encoder for a fair and controlled comparison. We set the fusion layer L<sup>f</sup> = 8 and use 4 fusion tokens in MBT, as recommended in the original paper.
52
+
53
+ We also propose a prompt-based MMBT and a prompt-based LateConcat, denoted P-MMBT and P-LateConcat, respectively. In both, we apply deep prompt tuning on the vision and language encoders, which are pretrained backbones whose parameters are frozen during training. We set the prompt length in each layer of the two encoders to 10, totalling 240 prompt vectors. Similar to VPT and P-BERT, P-LateConcat only updates the final linear classifier and the prompt vectors during training. Compared with P-LateConcat, P-MMBT has an extra linear projection layer and a smaller linear classifier to train.
54
+
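A back-of-envelope count, assuming the standard 768-d hidden size of the base models and a 101-way Food-101 classification head (both assumptions, not stated in this excerpt), is consistent with the roughly 0.3 M trainable parameters reported for P-LateConcat in Table 2:

```python
# Back-of-envelope count of trainable parameters in P-LateConcat
# (hidden size and class count are assumed standard values).
hidden = 768          # bert-base / vit-base hidden size
prompt_len = 10       # prompts per layer
layers = 12           # hidden layers per encoder
encoders = 2          # vision + language

num_prompts = prompt_len * layers * encoders
prompt_params = num_prompts * hidden

num_classes = 101     # UPMC Food-101
head_params = (2 * hidden) * num_classes + num_classes  # linear classifier

total = prompt_params + head_params
print(num_prompts, round(total / 1e6, 1))  # 240 0.3
```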
55
+ Lastly, we report the performance of PromptFuse and BlindPrompt [\[20\]](#page-8-10), both proposed in the only existing work that leverages unimodally pretrained models for multimodal fusion through prompting. We set the prompt length to 20, as recommended in the original paper.
2305.19926/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2305.19926/paper_text/intro_method.md ADDED
@@ -0,0 +1,101 @@
1
+ # Introduction
2
+
3
+ The recent emergence of Large Language Models (LLMs) represents a significant advancement in the field of artificial intelligence, marking a noteworthy milestone. Notably, ChatGPT[^2], an exemplary LLM, has demonstrated its capabilities in various tasks such as text translation [@jiao2023chatgpt], sentence revision [@wu2023chatgpt], programming assistance [@surameery2023use], and complex question answering [@tan2023evaluation]. These achievements serve as prominent benchmarks for evaluating LLM performance. Moreover, LLMs have brought about a paradigm shift in human-computer interaction, fundamentally transforming the manner in which individuals engage with computational systems. Over time, the difficulty associated with computer usage has progressively diminished. Presently, with the aid of LLMs, computers have evolved into more than mere tools; they assume the role of assistants, fostering a symbiotic relationship with human users. Consequently, the focus of interest lies not only in evaluating the efficacy of LLMs but also in understanding their communicative dynamics with individuals.
4
+
5
+ <figure id="fig:robustness" data-latex-placement="t">
6
+ <embed src="Figures/robustness.pdf" />
7
+ <figcaption>The personality results of ChatGPT against three robustness testing.</figcaption>
8
+ </figure>
9
+
10
+ In this study, we turn to trait theory in psychology to enhance our comprehension of the behaviors exhibited by LLMs. We consider LLMs as distinct individuals and adopt the Myers-Briggs Type Indicator [MBTI, @myers1962myers] test to gauge their traits. It is a popular personality assessment that categorizes individuals based on four dichotomies: Extroversion (E) vs. Introversion (I), Sensing (S) vs. Intuition (N), Thinking (T) vs. Feeling (F), and Judging (J) vs. Perceiving (P). It assigns a four-letter type code representing a person's preferences. First, we assess the ability of ChatGPT to generate consistent outcomes when presented with rephrased prompts/questions and different question orders. This examination is crucial as language models have been shown to be responsive to prompts [@wei2022chain; @white2023prompt] and orders [@zhao2021calibrate]. Subsequently, in order to validate the reliability and ascertain consistency across diverse languages [@cao2023assessing; @arora2022probing], we acquire MBTI results in seven other languages. These languages encompass a wide range of language families/groups, different character sets, and most significantly, diverse cultures. This consideration is vital due to the well-established variability of personality traits across regions [@giorgi2022regional; @rentfrow2015regional; @krug1973personality]. Furthermore, we expand our evaluation to include additional LLMs, namely GPT-4 [@openai2023gpt], Bard[^3], Spark[^4], and ERNIE Bot[^5]. In summary, the findings indicate that certain LLMs yield consistent results: **ChatGPT exhibits an ENFJ personality type, Bard corresponds to an ISTJ type, Spark embodies an ISFP type, and ERNIE Bot aligns with an ISTJ type**.
11
+
12
+ Moreover, our research aims to explore whether LLMs can exhibit personality changes in response to instructions or contextual cues. Initially, we establish a specific personality for ChatGPT based on previous literature regarding the control of LLMs' values [@santurkar2023whose]. Additionally, recent research by @coda2023inducing demonstrates the influence of a sad/happy context on LLMs' anxiety levels. Following this work, we conduct experiments to assess ChatGPT's personality in both sad and happy contexts. Drawing inspiration from @deshpande2023toxicity, who explore the concept of assigning a persona to ChatGPT in order to evaluate its propensity for offensive language and bias, our research instructs ChatGPT to emulate the characteristics of a selected historical figure with the intention of assessing its resulting personality. Our findings indicate that ChatGPT consistently maintains its original personality, specifically identified as ENFJ, irrespective of the provided instructions or contextual variations.
13
+
14
+ Our study answers the following Research Questions (RQs): **RQ1** (Section [2](#sec:rq1){reference-type="ref" reference="sec:rq1"}): Can LLMs consistently yield reliable results? **RQ2** (Section [3.1](#sec:rq2){reference-type="ref" reference="sec:rq2"}): Do personalities differ across different languages? **RQ3** (Section [3.2](#sec:rq3){reference-type="ref" reference="sec:rq3"}): Do LLMs exhibit similar personalities? **RQ4** (Section [4](#sec:rq4){reference-type="ref" reference="sec:rq4"}): Can personalities be influenced by contextual factors? All the raw data produced by LLMs can be found on GitHub[^6].
15
+
16
+ Our primary RQ centers on the fundamental question of whether the LLM-produced results are reliable and consistent. In order to address this, we undertake a series of rigorous robustness analyses, encompassing prompt selection (Section [2.1](#sec:prompt){reference-type="ref" reference="sec:prompt"}), question order (Section [2.2](#sec:order){reference-type="ref" reference="sec:order"}), and question rephrasing (Section [2.3](#sec:rephrase){reference-type="ref" reference="sec:rephrase"}). We employ a widely recognized questionnaire sourced from `16Personalities`[^7], which boasts a substantial daily usage of over 72,000 completions and a reported global accuracy rate of 91.2%. This questionnaire comprises a total of 60 questions, each of which prompts the LLM to express its level of agreement with a given statement on a 7-point scale. The results span a range of 0 to 100 for each dimension. The threshold is established at 50: a lower value indicates the I/S/F/P traits, while a higher value signifies the E/N/T/J traits. By default, we employ ChatGPT 3.5 on its official website.
17
+
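The type-assignment rule just described can be sketched as follows (a hypothetical helper for illustration, not `16Personalities`' actual scoring code):

```python
# Sketch of the threshold rule above: each dimension scores 0-100;
# values above 50 map to E/N/T/J, values at or below 50 to I/S/F/P.
DIMENSIONS = [("E", "I"), ("N", "S"), ("T", "F"), ("J", "P")]

def type_code(scores):
    """scores: one 0-100 value per dimension, in E/N/T/J order."""
    letters = []
    for score, (high, low) in zip(scores, DIMENSIONS):
        letters.append(high if score > 50 else low)
    return "".join(letters)

# Example: scores leaning Extroverted, Intuitive, Feeling, Judging.
print(type_code([62, 71, 40, 58]))  # ENFJ
```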
18
+ ::: table*
19
+ :::
20
+
21
+ We instruct ChatGPT to respond exclusively with numerical values in order to restrict the output format. The instructions provided include the task description as well as the meaning of each level. Our prompt is structured as follows: *"You can only reply to me numbers from 1 to 7. Score each statement on a scale of 1 to 7, with 1 being agree and 7 being disagree."* followed by the questions. We provide ChatGPT with multiple questions at a time to improve efficiency. To evaluate the model's robustness with regard to prompt selection, we provide two more designs: 1) we invert the definition of the numbers, so that 1 represents disagreement and 7 represents agreement; 2) we use the letters A to G to represent strongly agree, agree, somewhat agree, neutral, somewhat disagree, disagree, and strongly disagree. To observe the model's performance across multiple iterations, we present the results in Figure [1](#fig:robustness){reference-type="ref" reference="fig:robustness"} (a). The figure demonstrates the consistent robustness of the results regardless of prompt selection.
22
+
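Responses from the three prompt designs can be aligned on a single agree–disagree scale before scoring. The sketch below assumes one straightforward normalization (our assumption, not the authors' exact code):

```python
# Sketch of normalizing the three prompt designs onto one scale
# (1 = agree ... 7 = disagree). The alignment conventions are assumed.
LETTERS = "ABCDEFG"  # A = strongly agree ... G = strongly disagree

def normalize(response, scheme):
    if scheme == "default":      # 1 = agree, 7 = disagree
        return int(response)
    if scheme == "inverted":     # 1 = disagree, 7 = agree
        return 8 - int(response)
    if scheme == "letters":      # A..G = strongly agree .. strongly disagree
        return LETTERS.index(response) + 1
    raise ValueError(scheme)

assert normalize("3", "default") == 3
assert normalize("5", "inverted") == 3   # 8 - 5
assert normalize("C", "letters") == 3    # A=1, B=2, C=3
print("all prompt schemes map to the same scale")
```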
23
+ One concern regarding feeding questions into ChatGPT in batches is the potential influence of other questions on the response. In order to mitigate the impact of context, specifically the presence of other questions, we randomly shuffle the questions before feeding them to ChatGPT. This allows us to test ChatGPT with various permutations of questions. The results, as illustrated in Fig. [1](#fig:robustness){reference-type="ref" reference="fig:robustness"} (b), demonstrate the robustness of ChatGPT across different question orders.
24
+
25
+ <figure id="fig:other_llms" data-latex-placement="t">
26
+ <embed src="Figures/other_llms.pdf" />
27
+ <figcaption>The personalities of GPT-4, Bard, Spark, and ERNIE Bot.</figcaption>
28
+ </figure>
29
+
30
+ Given the high likelihood that ChatGPT's training data encompasses the original MBTI questions, there is a possibility that its responses may be influenced by its training data. In line with previous research investigating the performance of ChatGPT [@coda2023inducing; @bubeck2023sparks], we reformulate the questions to ensure their novelty to the model. To this end, we employ ChatGPT to rephrase the questions, and manually assess whether there are instances of duplicated sentences and whether the rewritten sentences maintain their semantic meaning. As illustrated in Fig. [1](#fig:robustness){reference-type="ref" reference="fig:robustness"} (c), it is evident that different rephrasings do not have an impact on ChatGPT's MBTI test outcome.
31
+
32
+ ::: tcolorbox
33
+ **Findings 1:** ChatGPT can produce robust ENFJ results against different prompts, question orders and rephrases.
34
+ :::
35
+
36
+ Given the observed performance disparities among languages in ChatGPT [@jiao2023chatgpt; @lai2023chatgpt], as well as the documented regional variations in personalities [@giorgi2022regional; @rentfrow2015regional; @krug1973personality], we are motivated to conduct an evaluation of ChatGPT's personality across different languages. To assess the cross-lingual alignment of ChatGPT, we conducted tests in seven additional languages: Chinese (Zh), Korean (Ko), Spanish (Es), French (Fr), German (De), Italian (It), and Arabic (Ar). We obtained the 60 questions in the aforementioned seven languages from the `16Personalities` and subsequently translated the prompt in Section [2.1](#sec:prompt){reference-type="ref" reference="sec:prompt"} into those respective languages. Each language was tested multiple times, and the average results are presented in Table [\[tab:multilingual\]](#tab:multilingual){reference-type="ref" reference="tab:multilingual"}.
37
+
38
+ ::: tcolorbox
39
+ **Findings 2:** The personalities of ChatGPT across different languages are consistent, maintaining an ENFJ personality type in line with the English version.
40
+ :::
41
+
42
+ We are intrigued by the possibility of varying personalities among different LLMs, considering potential differences in their training data and instruction tuning. To investigate this, we evaluate the personalities of several publicly available LLMs, namely GPT-4, Bard, Spark, and ERNIE Bot. GPT-4 and Bard were tested using English questions, while Spark and ERNIE Bot were tested using Chinese questions. The findings are presented in Fig. [2](#fig:other_llms){reference-type="ref" reference="fig:other_llms"}. On the one hand, GPT-4 shares the same personality, specifically ENFJ, as ChatGPT. However, the responses from GPT-4 indicated a reluctance to provide extreme scores such as 1 (strongly agree) and 7 (strongly disagree). On the other hand, the other language models also demonstrated consistent results, with Bard displaying an ISTJ personality, Spark an ISFP personality, and ERNIE Bot an ISTJ personality.
43
+
44
+ ::: tcolorbox
45
+ **Findings 3:** GPT-4 and ChatGPT maintain a consistent personality trait identified as ENFJ. Conversely, Bard, Spark, and ERNIE Bot exhibit distinct personalities, specifically ISTJ, ISFP, and ISTJ, respectively.
46
+ :::
47
+
48
+ <figure id="fig:personality" data-latex-placement="t">
49
+ <embed src="Figures/personality.pdf" />
50
+ <figcaption>The personality results of ChatGPT with assigned personalities.</figcaption>
51
+ </figure>
52
+
53
+ <figure id="fig:cot" data-latex-placement="t">
54
+ <embed src="Figures/cot.pdf" />
55
+ <figcaption>The personality results of ChatGPT without/with description of the personalities.</figcaption>
56
+ </figure>
57
+
58
+ <figure id="fig:environment" data-latex-placement="t">
59
+ <embed src="Figures/environment.pdf" />
60
+ <figcaption>The personality results of ChatGPT with positive and negative context.</figcaption>
61
+ </figure>
62
+
63
+ We have identified the intrinsic personality traits of LLMs. Subsequently, our focus shifts from assessing the default personalities of LLMs to examining their contextual steerability. The capacity to exhibit diverse personalities is crucial for LLMs as users may desire distinct stylistic characteristics. To accomplish this objective, we employ several approaches to control the personality of LLMs. Firstly, we explore the direct assignment of a personality to ChatGPT (Section [4.1](#sec:personality){reference-type="ref" reference="sec:personality"}). Next, we induce a sad or happy atmosphere within the context, aiming to influence ChatGPT's personality (Section [4.2](#sec:atmosphere){reference-type="ref" reference="sec:atmosphere"}). Finally, we instruct ChatGPT to play the role of a persona with a predetermined personality (Section [4.3](#sec:persona){reference-type="ref" reference="sec:persona"}).
64
+
65
+ In this section, we employ the three prompts proposed by @santurkar2023whose as a means to regulate the values of LLMs to assign a personality $\mathcal{P}$ to ChatGPT. These prompts are as follows: 1) Question Answering (QA): This prompt involves presenting the personalities in the form of multiple-choice questions and providing $\mathcal{P}$ as an option at the end of the prompt. 2) Biography (BIO): In this prompt, the LLM is requested to provide a concise description of its personality and we assign $\mathcal{P}$ by including the description within the prompt. 3) PORTRAY: This prompt directly instructs the LLM to become a person with $\mathcal{P}$.
66
+
67
+ To enhance the LLM's comprehension of the assigned personality, we draw inspiration from the Chain-of-Thought (CoT) [@wei2022chain] method and adopt a similar methodology. This approach entails first prompting the model to describe the characteristics associated with $\mathcal{P}$ before letting the model complete the MBTI test. We explore two variations: one where the model independently describes the personality and another where the description is explicitly incorporated within the prompt itself.
68
+
69
+ For the selection of $\mathcal{P}$, we have two distinct options. The first option is to transition towards a more distant personality. Considering that ChatGPT exhibits an ENFJ disposition, we have selected ISTP, ESTP, INTP, ISFP, and ISTJ as potential alternatives. The second option involves controlling a single dimension among the four personality dimensions. For example, we can explicitly instruct ChatGPT to adopt an introverted disposition rather than an extroverted one.
70
+
71
+ ::: {#tab:persona}
72
+ **Persona** **Personality**
73
+ --------------------- -----------------
74
+ Jungkook ISFP
75
+ Michael Jordan ISTP
76
+ Ella Baker ESTJ
77
+ Elton John ESFP
78
+ Eddie Murphy ESTP
79
+ William Shakespeare INFP
80
+ Angela Merkel ISTJ
81
+ Adam Savage ENTP
82
+
83
+ : The historical figures we select and their personalities.
84
+ :::
85
+
86
+ The following observations can be made: 1) Based on the analysis presented in Fig. [3](#fig:personality){reference-type="ref" reference="fig:personality"}, ChatGPT's personality undergoes substantial changes, deviating from its original ENFJ disposition; however, it does not exhibit the ability to adopt the specifically assigned personality. 2) Comparing the three given prompts, we find that QA generates the widest range of outcomes beyond the ENFJ personality, followed by PORTRAY, and finally BIO. 3) In the experiments on controlling a single dimension, transitioning from an Extroverted (E) to an Introverted (I) disposition consistently yields successful results, while modifications to the other dimensions prove to be ineffective. 4) From Fig. [4](#fig:cot){reference-type="ref" reference="fig:cot"}, the incorporation of CoT does not demonstrate significant efficacy in modifying ChatGPT's personality.
87
+
88
+ Next, we create an atmosphere for ChatGPT within the context to examine its potential influence on ChatGPT's personality. Previous research by @coda2023inducing demonstrates the ability to increase anxiety in LLMs by introducing sad or anxious narratives into the context. Building upon this existing work, we create both positive and negative atmospheres for ChatGPT prior to conducting the MBTI test. In the positive condition, ChatGPT is instructed to generate a narrative that encompasses elements of excitement, romance, humor, relaxation, comfort, encouragement, and a happy ending. Conversely, in the negative condition, ChatGPT is prompted to produce a story evoking feelings of sadness, anxiety, anger, nervousness, fear, frustration, and peril. The MBTI results corresponding to the aforementioned experimental contexts are presented in Fig. [5](#fig:environment){reference-type="ref" reference="fig:environment"}. Notably, in the majority of cases, ChatGPT consistently exhibits the personality type ENFJ.
89
+
90
+ We then direct our attention towards indirectly attributing personality traits to ChatGPT by instructing it to adopt a specific persona, denoted as $\mathbf{P}$. Existing studies [@zhuo2023exploring; @deshpande2023toxicity] primarily focus on inducing ChatGPT to generate toxic content by instructing it to emulate the speech patterns of historical or fictional figures. By assigning a persona such as Muhammad Ali, ChatGPT can generate offensive opinions targeting specific groups. Following this line of research, we compile a collection of celebrities who possess well-defined personalities and extensive life experiences. In terms of assigning the persona $\mathbf{P}$, we consider two options. The first option involves directly instructing ChatGPT to impersonate $\mathbf{P}$, while the second option entails instructing it to become the identity of a person with a set of experiences, concealing the individual's name. The second option aims to assess ChatGPT's capacity to comprehend an individual's experiences and how they contribute to the formation of the individual's personality, without relying solely on the knowledge acquired from ChatGPT's training data.
91
+
92
+ <figure id="fig:persona" data-latex-placement="t">
93
+ <embed src="Figures/persona.pdf" />
94
+ <figcaption>The personality results of ChatGPT with assigned persona.</figcaption>
95
+ </figure>
96
+
97
+ We present the characters and their personalities in Table [1](#tab:persona){reference-type="ref" reference="tab:persona"}, and the MBTI results in Fig. [6](#fig:persona){reference-type="ref" reference="fig:persona"}. By directly assigning the persona $\mathbf{P}$, all experiments demonstrate that ChatGPT fails to adopt the personality of $\mathbf{P}$. When we provide a detailed account of an individual's experience, ChatGPT exhibits the ability to transition from an extroverted personality to an introverted one.
98
+
99
+ ::: tcolorbox
100
+ **Findings 4:** At present, precisely modifying ChatGPT's inherent ENFJ personality remains an unresolved challenge. However, it is relatively feasible to shift it from Extroverted to Introverted.
101
+ :::
2305.20062/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2305.20062/paper_text/intro_method.md ADDED
@@ -0,0 +1,44 @@
1
+ # Introduction
2
+
3
+ Users have always been the central focus of information retrieval. Conversational search offers opportunities to enhance search effectiveness and efficiency. The tremendous growth in the volume of searchable visual media underscores the need for fast and reliable retrieval systems. Retrieval capabilities are indispensable in general internet image search, as well as in specific domains, such as e-commerce or surveillance. Current approaches to image retrieval in computer vision primarily focus on image-to-image [10, 46], text-to-image [30, 31] and composed-image retrieval [19, 27]. However, a single query might fail to fully convey the search intent, and multiple trials may be required before a satisfactory result is retrieved. Furthermore, it is up to the user to decide how to modify the query in each trial, while the retrieval system processes each attempt independently.
4
+
5
+ Motivated by these difficulties and inspired by recent progress in Large Language Models (LLM), which have demonstrated unprecedented natural language chat capabilities [36–38, 48], we introduce and explore a new image retrieval "protocol": Chat-based Image Retrieval, which we dub ChatIR. A schematic view of ChatIR and the system that we propose in this paper is provided in Figure 1. The process starts with a user-provided short *description* of the desired image, similarly to text-to-image retrieval. However, from this point on, the retrieval system is able to progressively refine the query by
6
+
7
+ ![](_page_1_Figure_0.jpeg)
8
+
9
+ Figure 1: An overview of Chat Image Retrieval. The pipeline consists of two stages: Image Search (IS) and Dialog Building (DB). The IS stage takes as input the ongoing dialog, composed of the image caption and a few rounds of Q&As, in order to find the target image. Note that a dialog of length 0 is solely the image caption, making this case equivalent to the Text-to-Image retrieval task. The DB stage provides the follow-up question to the current dialog.
10
+
11
+ actively polling the user for additional information regarding the desired result. Ideally, a ChatIR system should avoid gathering redundant information and generate dialogues that steer it towards the desired result as quickly as possible. Note that this gradual-progress scenario is very different from, and more natural than, providing at the outset an overly descriptive caption that hypothetically contains all of the required information. In contrast, ChatIR proactively obtains the information from the user and is able to process it in a unified and continuous manner in order to retrieve the target image within a few question answering (Q&A) rounds.
12
+
13
+ Specifically, the ChatIR system that we propose in this work consists of two stages, Image Search (IS) and Dialog Building (DB), as depicted in Figure 1. Image search is performed by an image retriever model F, a text encoder trained to project dialogue sequences (of various lengths) into a visual embedding space. The DB stage employs a question generator G, whose task is to generate the next question for the user, taking into account the entire dialog up to that point. The two components of ChatIR are built upon the strong capabilities of *instructional* LLMs (where the model is instructed about the nature of the task) and foundation Vision and Language (V&L) models.
14
+
15
+ In addressing this task, we are faced with three main questions: 1. What dataset do we use, and is it necessary to create and manually label such a dataset? 2. How do we independently evaluate the different components of the system? 3. How do we define a benchmark and a performance measure for this task, so that further progress in this domain is measurable?
16
+
17
+ To mitigate the cumbersome and costly process of collecting human-machine conversations we use the *VisDial* dataset [8]. Although this dataset was designed and generated to create chats about images without any retrieval goal, we use the specific image related to each dialog as our retrieval target, and the dialog as our chat. In our case the questioner is an agent and the answerer is a human while in the Visual Dialog task [8] it is vice versa.
18
+
19
+ Considering the goals of ChatIR as a conversational retrieval system, we evaluate its performance by measuring the probability of a successful retrieval up to each round in the dialog. We use this metric to systematically study the major components of the framework and to examine the impact of different questioner models G, and training strategies for F on retrieval performance.
20
+
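The per-round evaluation just described can be sketched as a cumulative hit rate (helper names, K, and the example ranks are illustrative):

```python
# Sketch of the per-round metric: the probability that the target image
# has been retrieved (ranked within the top K) at or before each round.
def success_by_round(ranks_per_query, k=10):
    """ranks_per_query[q][i] = rank of q's target image after round i."""
    num_rounds = len(ranks_per_query[0])
    rates = []
    for i in range(num_rounds):
        # A query counts as solved if the target entered the top-k at
        # ANY round up to and including round i (cumulative success).
        hits = sum(1 for ranks in ranks_per_query
                   if any(r <= k for r in ranks[: i + 1]))
        rates.append(hits / len(ranks_per_query))
    return rates

ranks = [
    [40, 12, 5, 3],   # solved at round 2 (rank 5 <= 10)
    [90, 60, 25, 8],  # solved at round 3
    [9, 4, 2, 1],     # solved immediately
]
print(success_by_round(ranks, k=10))  # fractions solved: 1/3, 1/3, 2/3, 1.0
```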
21
+ For example, when training F using various masking strategies, we found that masking the initial descriptions proved to be the most effective method (elaborated in Section 5). Since the retrieval performance of ChatIR also depends on the quality of the questions generated by G, we evaluate several alternatives for G based on their impact on F's retrieval ranking. One of the problems in this evaluation is the need for a user in the loop, to answer G's questions (at inference time), while taking into account the chat history. Such evaluation of ChatIR is obviously costly and impractical at scale. To mitigate this, we replace the user with a multi-purpose vision-language model BLIP2 [21], as a
22
+
23
+ *Visual Dialog Model* (VDM) that answers questions. We further collect human answers, testing our system in the real scenario with a human providing the answers, and show a comparison between the VDM and humans in terms of their impact on the performance of ChatIR.
24
+
25
+ We find that ChatIR can retrieve the target image from a corpus of 50K images, within the top-10 results, with a success rate of 78.3% and 81.3% after 5 and 10 Q&A rounds, respectively. Overall, ChatIR increases retrieval success by 18% over single-shot text-to-image retrieval, where the user provides only a text description.
26
+
27
+ In summary, our contributions are as follows:
28
+
29
+ - Introduction of ChatIR, a novel framework for visual content search guided by an interactive conversation with the user.
30
+ - We explore the ChatIR idea leveraging foundation V&L models, with several LLM questioners and image retrieval training strategies.
31
+ - We suggest evaluation protocols suitable for continual progress and assessment of questioners using a Visual Dialog model in place of a human.
32
+ - We test our framework on real human interactions by collecting answers from users and further evaluate our method against strong baselines generated from prior art.
33
+
34
+ # Method
35
+
36
+ We explore a ChatIR system comprising two main parts: Dialog Building (DB) and Image Search (IS), as depicted in Figure 1. Let us denote the ongoing dialog as $D_i := (C, Q_1, A_1, ..., Q_i, A_i)$ , where C is the initial text description (caption) of the target image, with $\{Q_k\}_{k=1}^i$ denoting the questions and $\{A_k\}_{k=1}^i$ their corresponding answers at round i. Note that for i=0, $D_0:=(C)$ , thus the input to IS is just the caption, i.e., a special case of the Text-to-Image Retrieval task.
37
+
38
+ **Dialog Builder Model** The dialog building stage consists of two components, the Question generator G and the Answer provider, which in practice is a human (the user) who presumably has a mental image of the target T. In our case G is an LLM that generates the next question $Q_{i+1}$ based on the dialog history $D_i$ , i.e. $G:D_i\to Q_{i+1}$ . We assume that G operates without the benefit of knowing what the target T is. In this paper, we examine various approaches for the questioner G, exploring the capabilities and limitations as well as their failure modes (reported in Section 4). In order to enable experimenting with these different alternatives at scale, we cannot rely on user-provided answers to the questions proposed by G. Thus, in these experiments, all of the questions are answered using the same off-the-shelf model (BLIP2 [21]). A smaller scale experiment (reported in Section 4.3) evaluates the impact of this approach on the performance, compared to using human-provided answers.
39
+
40
+ Image Retriever Model: Following common practice in image retrieval [15, 19, 21–24, 27, 44, 49, 52], our IS process searches for matches in an embedding space shared by queries and targets (see Figure 1). All corpus images (potential targets) are initially encoded by an Image Embedder module, resulting in a single feature representation per image $f \in \mathbb{R}^d$ , with d denoting the image embedding space dimension. Given a dialog query $D_i$ , the Image Retriever module F, a transformer in our case, maps the dialog $F:D_i \to \mathbb{R}^d$ to the shared embedding space. The retrieval candidates are ranked by cosine-similarity distance w.r.t the query embedding. As our F we use BLIP [22] pre-trained image/text encoders, fine-tuned for dialog-based retrieval with contrastive learning. We leverage the text encoder self-attention layers to allow efficient aggregation of different parts of the dialog (caption, questions, and answers), and for high level perception of the chat history. Motivated by previous work [20, 29, 33], we concatenate $D_i$ 's elements with a special separating token [SEP], and an added [CLS] token to represent the whole sequence. The latter is finally projected into the image embedding space.
41
+
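A minimal sketch of the IS stage follows: the dialog $D_i$ is flattened with [SEP] separators (plus a leading [CLS]) and corpus images are ranked by cosine similarity in the shared embedding space. The encoder is stubbed out here (the paper fine-tunes a BLIP text encoder); names and toy embeddings are illustrative.

```python
# Illustrative sketch of dialog flattening and similarity ranking.
import math

def flatten_dialog(caption, qa_pairs):
    parts = [caption]
    for q, a in qa_pairs:
        parts += [q, a]
    return "[CLS] " + " [SEP] ".join(parts)

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def rank_images(query_emb, corpus):
    """corpus: {image_id: embedding}; returns ids sorted by similarity."""
    return sorted(corpus, key=lambda i: cosine(query_emb, corpus[i]), reverse=True)

seq = flatten_dialog("a dog on a beach", [("What color is it?", "Brown")])
print(seq)  # [CLS] a dog on a beach [SEP] What color is it? [SEP] Brown
corpus = {"img_a": [1.0, 0.0], "img_b": [0.6, 0.8], "img_c": [0.0, 1.0]}
print(rank_images([0.7, 0.7], corpus))  # img_b is most aligned with the query
```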
42
+ We train F using the manually labelled VisDial [8] dataset, by extracting pairs of images and their corresponding dialogues. We train F to predict the target image embedding, given a partial dialog with i rounds $D_i$ , concatenating its components (separated with a special [SEP] token) and feeding F with this unified sequence representing $D_i$ . In Section 5 we demonstrate that randomly masking the captions in training boosts the performance.
43
+
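The caption-masking augmentation can be sketched as below; the masking probability and mask token are our assumptions, not values reported in the paper.

```python
# Sketch of caption masking during training: the caption C at the start
# of the dialog sequence is randomly replaced, forcing F to rely on the
# Q&A rounds rather than the initial description.
import random

def mask_caption(dialog, p=0.5, rng=random.Random(0), mask_token="[MASK]"):
    """dialog: list of strings, dialog[0] is the caption C."""
    out = list(dialog)
    if rng.random() < p:
        out[0] = mask_token
    return out

d = ["a dog on a beach", "What color is it?", "Brown"]
assert mask_caption(d, p=1.0)[0] == "[MASK]"   # always masked
assert mask_caption(d, p=0.0) == d             # never masked
print("caption masking sketch ok")
```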
44
+ **Implementation details:** We use an AdamW optimizer, initializing the learning rate at $5\times 10^{-5}$ and decaying it exponentially at a rate of 0.93 down to $1\times 10^{-6}$. We train the Image Retriever F on the VisDial training set with a batch size of 512 for 36 epochs. The Image Embedder is frozen and is not trained. Following previous retrieval methods [19, 41], we use the Recall@K surrogate loss as a differentiable version of the Recall@K metric. Training time is 114 seconds per epoch on four NVIDIA-A100 nodes. In testing, we retrieve target images from an image corpus of 50,000 unseen COCO [26] images.
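For reference, the (non-differentiable) Recall@K metric that the surrogate loss approximates can be computed as below; the surrogate replaces the hard top-K indicator with a smooth approximation during training.

```python
# Recall@K: the fraction of queries whose target image appears within
# the top-K retrieved candidates (1-based ranks).
def recall_at_k(target_ranks, k):
    """target_ranks: 1-based rank of the correct image for each query."""
    return sum(1 for r in target_ranks if r <= k) / len(target_ranks)

ranks = [1, 4, 12, 7, 55]
print(recall_at_k(ranks, k=10))  # 0.6: three of five targets are in the top-10
```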