Add files using upload-large-folder tool
This view is limited to 50 files because it contains too many changes. See raw diff.
- 2010.01693/main_diagram/main_diagram.drawio +1 -0
- 2010.01693/main_diagram/main_diagram.pdf +0 -0
- 2010.01693/paper_text/intro_method.md +164 -0
- 2102.11582/main_diagram/main_diagram.drawio +1 -0
- 2102.11582/main_diagram/main_diagram.pdf +0 -0
- 2102.11582/paper_text/intro_method.md +222 -0
- 2103.14517/main_diagram/main_diagram.drawio +0 -0
- 2103.14517/paper_text/intro_method.md +150 -0
- 2108.13702/main_diagram/main_diagram.drawio +0 -0
- 2108.13702/paper_text/intro_method.md +128 -0
- 2201.09871/main_diagram/main_diagram.drawio +1 -0
- 2201.09871/main_diagram/main_diagram.pdf +0 -0
- 2201.09871/paper_text/intro_method.md +9 -0
- 2203.17008/main_diagram/main_diagram.drawio +0 -0
- 2203.17008/paper_text/intro_method.md +51 -0
- 2204.04303/main_diagram/main_diagram.drawio +1 -0
- 2204.04303/main_diagram/main_diagram.pdf +0 -0
- 2204.04303/paper_text/intro_method.md +95 -0
- 2204.10670/main_diagram/main_diagram.drawio +1 -0
- 2204.10670/main_diagram/main_diagram.pdf +0 -0
- 2207.09666/main_diagram/main_diagram.drawio +1 -0
- 2207.09666/paper_text/intro_method.md +96 -0
- 2208.07639/main_diagram/main_diagram.drawio +1 -0
- 2208.07639/main_diagram/main_diagram.pdf +0 -0
- 2208.07639/paper_text/intro_method.md +52 -0
- 2208.08519/main_diagram/main_diagram.drawio +1 -0
- 2208.08519/paper_text/intro_method.md +58 -0
- 2210.14986/main_diagram/main_diagram.drawio +1 -0
- 2210.14986/main_diagram/main_diagram.pdf +0 -0
- 2210.14986/paper_text/intro_method.md +52 -0
- 2210.16541/main_diagram/main_diagram.drawio +1 -0
- 2210.16541/main_diagram/main_diagram.pdf +0 -0
- 2210.16541/paper_text/intro_method.md +172 -0
- 2211.11979/main_diagram/main_diagram.drawio +1 -0
- 2211.11979/main_diagram/main_diagram.pdf +0 -0
- 2211.11979/paper_text/intro_method.md +106 -0
- 2301.04791/main_diagram/main_diagram.drawio +1 -0
- 2301.04791/main_diagram/main_diagram.pdf +0 -0
- 2301.04791/paper_text/intro_method.md +214 -0
- 2301.13573/main_diagram/main_diagram.drawio +1 -0
- 2301.13573/main_diagram/main_diagram.pdf +0 -0
- 2301.13573/paper_text/intro_method.md +90 -0
- 2303.04488/main_diagram/main_diagram.drawio +1 -0
- 2303.04488/main_diagram/main_diagram.pdf +0 -0
- 2303.04488/paper_text/intro_method.md +14 -0
- 2305.01738/main_diagram/main_diagram.drawio +1 -0
- 2305.01738/main_diagram/main_diagram.pdf +0 -0
- 2305.01738/paper_text/intro_method.md +114 -0
- 2305.10855/main_diagram/main_diagram.drawio +0 -0
- 2305.10855/paper_text/intro_method.md +64 -0
2010.01693/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
<mxfile host="app.diagrams.net" modified="2020-09-09T14:08:39.952Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.125 Safari/537.36" version="13.6.9" etag="kUkLCvNqZy8I7lE13upA" type="google"><diagram id="N_FKkWPXiU-U4rocl70O">7VrLkto4FP0aljNl+SHkJQ10JlXJVNeQ1CRLYQujRLYoWTSQr4+E5Rdy0p1q25CuwALp6GH53KN7fS0m3jw9vhF4t33PY8ImrhMfJ95i4rqu4wXqRyOnAoGuARJB4wICNbCi34gBHYPuaUzyVkfJOZN01wYjnmUkki0MC8EP7W4bztpX3eGEWMAqwsxG/6ex3BYoCpwa/4fQZFteGTimJcVlZwPkWxzzQwPylhNvLjiXRSk9zgnT5JW8FOPuf9BaLUyQTD5ngFsMeMRsb+5t4kKmht6tVSHRhQXFjCeq2XXe40zdmii7qGmrXuZ25KnkSPB9FhN9GaCaD1sqyWqHI916UKpQ2FamzDRvKGNzzrg4j/U2KCJRpPBcCv6VNFrWKPAVzd7dIxGSKovMGE0y1Sb5rlqFbiPHHzICKp6VQAlPiRQn1cUM8EorGm1OfVM/1Jb2ocG2DSu7ngGxUVdSzV0bQBWMDbrt4Vk8klhJz1S5kFue8AyzZY3e1UxrYuo+77jm5MzvFyLlyewjvJe8zT45UvmpUf6sp/o7MLXF0cx8rpxMpVinXlyL55zvRWQg3+xMLBJiOELPtoYgDEv62J79Jcz6Tyv933cfB5d2jAnadEobRoisN0NKG8ALaYeBJW0QdEgb9CHtwDLA20yqlecv47cPXlCbF+R28AI6ePF7oAVatCwzSSUlN8gLHJGX6SvyhMj2hMC/nitET7vClcRSR/wPAkdfaZb07xe7vdyFt7y/X8L5vMtbLqbhneP0o3PvQufTLp2HXX4x7EHooWWN2bn9ZtyADy7oQVObHncgN1Be26JnwVNMs9tjB7ljsgMsdm6WFzgmL3ZeU/Dy1wPDmWpYMa4fPCBO9Z1n63z3Yoc2AGehMyZnryn3AB3JBwivF3LBM9KPWSQp1+J84IxGp+FTkYCg2O8KrshdexAOmYpAB1wx5AI7F5k9vFWAMsH1PSf0LyMKGtEL2PlIQc1/JN+zW2QHjsnO1I4r59djt6qd0BmTHfS7RBDFpjh9alYao3S1Hnau/UrkCe3IU75PuErksfOLjhdfb66U4C3BIlhOu2JQCKfeDPazK9DFrhj3xZdrpzAfSLpj5wz7yv4CQXgRazybGccfxl+4dvaiosyOZ/kNEgPHJMb940irs5CmI73iS7NyOQ2xfsz1WViPT+SbQH87DwfOHz2CZ7KBF5+esk+/rfiOgy800JNDadjfSfC1xj+Xc7xQ8EHHk4PXv+LPQ2dC4FOjw45TcxRjZn7QQC0OUD01Vg7x4mT5qQFh+PMByP9Zd1Uollzrqbr350nMzvlWp1yS9BXtYOBcGmnMPWxnjvpl2+3lRFM0XE6kqvWfNwqV1n+B8ZbfAQ==</diagram></mxfile>
2010.01693/main_diagram/main_diagram.pdf
ADDED
Binary file (36.5 kB).
2010.01693/paper_text/intro_method.md
ADDED
@@ -0,0 +1,164 @@
# Method

DLGNet-Task is broken into three modules as depicted in Fig. [1](#dlgnet_module){reference-type="ref" reference="dlgnet_module"}, the Natural Language Understanding (NLU) module, the dialogue management module and the Natural Language Generation (NLG) module. Below we describe the components of each of these modules in more detail.
The NLU module contains components for identifying intents and entities in the incoming dialogue. It is worth noting that although our NLU model has access to multi-turn context, it only makes predictions relevant to the current turn of the conversation.

:::: table*
::: tiny
**User Prompt** **Ground Truth Response** **DLGNet-Task Response**
------------------------------------------------------------------------------------------------ ----------------------------------------------------------------------------------------------------------------- --------------------------------------------------------------------------------------------------------------------------------------
I am looking for places to go in the centre of town. I have 44 matches for central area, I can narrow it down by type or entrance prices. Okay. My computer shows forty-four attractions in the centre of town. Can you give me some more details so we can narrow the search?
How about a park? that should be Cambridge university botanic gardens located in bateman street. can i give you the phone number? We have cambridge university botanic gardens. Would you like more information about them?
Yes, I'll need the number along with the address and postal code. The phone number is 01223336265. The entrance fee is 4 pounds, phone number is 01223336265 and postcode is cb21jf, and the address is bateman street.
I also need the address please. The address is Bateman Street in the centre area of town. Postcode is cb21jf, address is bateman street, and you can reach them on 01223336265.
Can I get the postcode too please? The postcode is cb21jf. Do you need anything else? I apologize, the postcode is cb21jf, can I assist with anything else?
I'm also looking for a place to stay in the same area as Cambridge university botanic gardens Would you prefer a guesthouse or a hotel? We have 5 places to stay all in the city center. Would you like to hear about some of our other options?
A guesthouse please and with a star rating of zero. The only place available is the El Shaddai. Would you like for me to book a room? Cityroomz is a moderately priced guesthouse. Would you like me to book a room for you?
Actually could you get me the postcode, their price range, and whether they have free parking? The postcode is cb11eg they offer free parking and is cheaply priced. El Shaddai is in the cheap price range, and their postcode is cb11eg. Is there anything else I can assist you with today?
Thank you very much. that should be it. I can also recommend restaurants and arrange transportation. Do you need help with these? Thank you for contacting us and have a nice day.
No, thanks. I have everything I need. Ok! Have a great day! Great. Glad to help.
:::
::::

:::: table*
::: tiny
**Ground Truth** **Model Output**
-------------------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
**User** Yes, Ill need the number along with the address and postal code. Yes, Ill need the number along with the address and postal code.
**Domains** attraction attraction
**Slots Search** domain:attraction, area:centre, name:?, type:park domain:attraction, area:centre, name:?, type:park
**API Actions** domain:attraction, search-execute domain:attraction, search-execute
**Results** domain:attraction, \['Choice', 1\], \['Sample', \[{'id': '15', 'address': 'bateman street', 'area': 'centre', 'entrance': '4 pounds', 'name': 'cambridge university botanic gardens', 'phone': '01223336265', 'postcode': 'cb21jf', 'pricerange': 'moderate', 'openhours': 'it opens from 10:00 a.m. to 6:00 p.m. from april to september, from 10:00 a.m. to 5:00 p.m. in february march and october, and from 10:00 a.m. to 4:00 p.m. from november to january', 'type': 'park'}\]\] domain:attraction, \['Choice', 1\], \['Sample', \[{'id': '15', 'address': 'bateman street', 'area': 'centre', 'entrance': '4 pounds', 'name': 'cambridge university botanic gardens', 'phone': '01223336265', 'postcode': 'cb21jf', 'pricerange': 'moderate', 'openhours': 'it opens from 10:00 a.m. to 6:00 p.m. from april to september, from 10:00 a.m. to 5:00 p.m. in february march and october, and from 10:00 a.m. to 4:00 p.m. from november to january', 'type': 'park'}\]\]
**DLG Actions** domain:attraction, search-noerror-inform:\[\['Phone', '01223336265'\]\] domain:attraction, search-noerror-inform:\[\['Addr', 'bateman street'\], \['Phone', '01223336265'\], \['Post', 'cb21jf'\], \['Fee', '4 pounds'\]\]
**Delexicalized Output** the phone number is \[attraction_phone\] . the entrance fee is \[value_count\] pounds , phone number is \[attraction_phone\] and postcode is \[attraction_postcode\] , and the address is \[attraction_address\] .
**System Output** The phone number is 01223336265. The entrance fee is 4 pounds, phone number is 01223336265 and postcode is cb21jf, and the address is bateman street.
:::
::::

***Intent Recognition:*** The intent recognition maps the customer utterance to a label. For example, in Table [\[tb:samples_dlgnet\]](#tb:samples_dlgnet){reference-type="ref" reference="tb:samples_dlgnet"}, " I would like to book a reservation at Caffe Uno" would be mapped to the label *request_booking*. One of the limitations of the existing NLU module for natural multi-turn dialogue is the requirement to label every utterance within a conversation even when it is not appropriate. For example, mapping "I am looking for information in Cambridge." to a label might be unnecessary. In this case, it is more beneficial to focus on giving an appropriate response rather than the intent recognition.

***Entity Recognition:*** In our framework, we separate named entity recognition (NER) from slot filling in order to be compatible with existing industrial modularized pipeline architectures. Our NER component assigns a value to an entity without identifying the domain. Not predicting the domain at this stage makes the NER compatible with both conversational and non-conversational datasets.

The dialogue management module has two broad functions: state tracking and action policy. Each of these functions comprises four components. State tracking maintains *All Entities*, *All Domains*, *Domains*, and *Domain-Plan Slots*, while the Action Policy maintains *Plans*, *API Actions*, *API Results*, and *Dialogue Actions*.

***[State Tracking]{.underline}***
The state tracking handles the multi-turn understanding by mapping the context to a well-defined representation of the dialogue system ontology, i.e., `DOMAIN, PLAN, SLOT, ENTITY, VALUES, CONSTRAINTS`. The state tracking identifies all the entities and domains in the dialogue history, predicts the active domain, and fills the slots of the active domain using the entity information. This arrangement allows for the verification of slot values against the options and constraints provided in the system ontology.

***All Entities:*** We introduce an *all entities* information node that adds the currently recognized entities (if any) to the previous all entity state i.e., $AE_t = AE_{t-1} + E_t$. With this, we can easily verify or replace the generated *all entities* at any conversation turn.
***All Domains:*** We also introduce an *all domains* information node that maintains a running list of the domains. This node adds the domain $D_t$ in the current turn (if any) to the previously recognized *all domains* state $AD_{t-1}$, i.e., $AD_t = AD_{t-1} + D_t$. With this, we can easily verify or replace the generated *all domains* values at any conversation turn.
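
As a concrete illustration, the two running-state updates above are simple order-preserving unions over turns; a minimal sketch (the function and variable names are ours, not from the paper's code):

```python
def update_running_state(prev_all_entities, prev_all_domains, turn_entities, turn_domains):
    """Running updates AE_t = AE_{t-1} + E_t and AD_t = AD_{t-1} + D_t."""
    all_entities = dict(prev_all_entities)
    all_entities.update(turn_entities)  # newly recognized entities overwrite older values
    all_domains = list(prev_all_domains)
    all_domains += [d for d in turn_domains if d not in all_domains]
    return all_entities, all_domains

# Example: the current turn adds a new entity and re-mentions an existing domain.
ae, ad = update_running_state({"area": "centre"}, ["attraction"],
                              {"type": "park"}, ["attraction"])
# ae == {"area": "centre", "type": "park"}; ad == ["attraction"]
```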
***Domains:*** Similar to @Ham2020, we predict the active domains at the current turn, which form a subset of the all domains node above. The MultiWOZ dataset consists of five domains: hotel, train, restaurant, taxi, and attraction. We support both single- and multi-domain interactions over multiple turns of conversation.

***Domain-Plan Slots:*** The slots are divided into three types: informable, requestable, and book slots. Informable slots represent user constraints, and requestable slots hold additional information that the user wants to obtain. Book slots are used to reserve a place recommended by the system or selected by the user [@Ham2020]. In order to generalize the slot types to new use cases, we map these categories to their functions (plans). That is, we map informable and book slots to search and booking slots respectively, indicating what the slots are being used for. The requestable slots remain the slots that hold additional information that the user wants to obtain. We predict the plan slots for each predicted domain.

In order to avoid an open ended generation of the dialogue state as in existing work [@Budzianowski2019; @Ham2020; @Peng2020; @Hosseini-Asl2020] and improve generalization to new domains, we provide the model with a list of slots for each plan type, and indicate if each slot is filled or not. During inference, with a new domain or plan type, we only need to provide an appropriate slot list, and the model can easily fill it based on the utterance and entity information.
***[Action Policy]{.underline}***
The action policy predicts the sub-domain plans relevant to the current turn of the conversation. It also predicts the API actions, and conditioned on the results of the API action, predicts the dialogue act.
***Plans:*** For each domain, we predict the relevant plans for the current conversation turn. For the MultiWOZ dataset, we have two main plans with slots (search and booking), and others without slots (welcome, greet, bye, and reqmore).
***API Actions:*** For each plan, we determine the appropriate API actions. In the MultiWOZ dataset, we observe two actions, execute (for both search and booking plans), and retrieve (for booking plan). Therefore, our API action uses the format `[PLAN-ACTION]` for each domain as shown in the example in Table [\[tb:samples_dlgnet\]](#tb:samples_dlgnet){reference-type="ref" reference="tb:samples_dlgnet"}.
***API Results:*** For each API action, we call the plan's API with the filled plan-type slots. For the MultiWOZ dataset, in the case of search, we return the number of matches, as well as the top $k$ results. In our ablation studies, we considered $k = \{1, 3, 5 \}$. For booking, we use the results from the dialog-act and/or the requestable slots in the metadata.

***Dialogue Actions:*** For each plan, we select the appropriate action(s) among inform, request, recommend, select, book, offerbook, and offerbooked. We also identify three status codes: nobook (booking error), nooffer (search error), and noerror (otherwise). The errors were originally identified as actions in the MultiWOZ dataset, but we assign the corresponding actions as inform in our modification. Therefore, our dialogue action uses the format `[PLAN-STATUSCODE-ACTION]` for each domain with the appropriate slot information, as shown in the example in Table [\[tb:samples_dlgnet\]](#tb:samples_dlgnet){reference-type="ref" reference="tb:samples_dlgnet"}.

Also, we noticed that booking a train requires an offer confirmation before the system can make the booking. This behavior may be required in commercial applications where customer confirmation may be legally required before creating an order or executing a particular plan. To provide a trigger signal to the model, we added a 'confirm' slot to the booking slots of the train domain (see Table [\[tb:samples_dlgnet\]](#tb:samples_dlgnet){reference-type="ref" reference="tb:samples_dlgnet"}). Filling this slot consequently triggers the 'offerbook' action. Since this data modification is an afterthought, the entity recognition and slot filling are based on heuristics, which are imperfect and introduce noise into the training data. However, this modification is necessary to fully describe the DLGNet-Task framework.

DLGNet-Task handles both template and system response generation. In order to do this, we delexicalized all the values of requestable slots (reference number, name, postcode, phone number, address) as `[DOMAIN_SLOTNAME]` (e.g. `[train_reference]` for the train booking reference that appears in the dataset), similar to @Ham2020. Unlike in existing work, where a post-processing routine is used to string-replace the delexicalized tokens with the real information from the API results, the DLGNet-Task model directly generates the final system response. This makes DLGNet-Task a truly end-to-end neural TOD system.

Since the DLGNet-Task model is a word-token sequence generation model, the traditional decoding approach is to explore sequence decoding strategies such as greedy decoding, beam-search decoding, top_k sampling and top_p sampling [@Holtzman2019; @Olabiyi2019c]. However, TOD systems contain both natural language and several ontology-driven key-value pairs, such as graph node-value, intent-value, entity-value, slot-entity, domain-value, plan-value, plan-API action, and plan-dialogue action pairs. The ontology-driven key-value pairs provide opportunities for discrimination, since some of the keys and possible values may be known *a priori* from the system ontology. Note that the ontology itself is not used during training; it is only used here to ground value generation or selection during inference. For example, given the triples $(C, K_i, \{V_i\}_j^J)$ of context $C$, key $K$ and possible values $V$, we can estimate the likelihood of each possible value $V_i^j$, i.e., $$\begin{align}
P_{\theta}\big(V_i^j |K_i, C\big) = DLGNet([C, DL_{key}, K_{i}, DL_{value}, V_i^j])
\label{eq:dlgnet_score}
\end{align}$$ where $DL$'s are delimiter tokens.
The likelihood scores can be used to rank possible values during inference, which would also improve generalization to new key-value pairs. Using the likelihood score in Eq. ([\[eq:dlgnet_score\]](#eq:dlgnet_score){reference-type="ref" reference="eq:dlgnet_score"}), a normalized conditional distribution over the value options can be estimated, i.e., $$\begin{align}
P\big(V_i^j | K_i, C\big) = \frac{\exp\big(\frac{1}{T_i}\log P_{\theta}(V_i^j |K_i, C)\big)}{\sum_j^J \exp\big(\frac{1}{T_i}\log P_{\theta}(V_i^j |K_i, C)\big)}
\label{eq:dlgnet_prob}
\end{align}$$ where the hyperparameter $T_i\in(0,1]$ is the decoding temperature.
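
To make the two equations above concrete, the sketch below scores a set of candidate values for a key with an off-the-shelf causal language model and normalizes the scores with a temperature. It is only an illustration: GPT-2 from Hugging Face stands in for the finetuned DLGNet-Task model, and the delimiter strings are placeholders.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("gpt2")                      # stand-in for DLGNet-Task
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def value_log_likelihood(context, key, value, dl_key="<key>", dl_value="<value>"):
    """log P_theta(V | K, C) for the flattened sequence [C, DL_key, K, DL_value, V]."""
    prefix = f"{context} {dl_key} {key} {dl_value}"
    prefix_len = tok(prefix, return_tensors="pt").input_ids.shape[1]
    full_ids = tok(prefix + " " + value, return_tensors="pt").input_ids
    with torch.no_grad():
        logp = torch.log_softmax(lm(full_ids).logits[0, :-1], dim=-1)
    # sum the log-probabilities of the value tokens only (BPE boundary effects ignored)
    return sum(logp[i, full_ids[0, i + 1]].item()
               for i in range(prefix_len - 1, full_ids.shape[1] - 1))

def rank_values(context, key, candidates, temperature=0.7):
    """Temperature-normalized distribution over the candidate values."""
    scores = torch.tensor([value_log_likelihood(context, key, v) for v in candidates])
    return torch.softmax(scores / temperature, dim=0)
```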
DLGNet-Task system is implemented with the use of autoregressive transformer networks, such as GPT-2 and DLGNet. To do this, we need to convert the dialogue data to word tokens using the information flow graph. In the case of the MultiWOZ dataset, 'metadata' and 'dialog-act' correspond to the current dialogue state, and the current system actions. The dataset also contains the user utterance and system response. We introduce a delimiter token for each functional block used in each conversation turn. We also introduce two special delimiter tokens, $<$`turn_sep`$>$ and $<$`conversation_sep`$>$ for turn and conversation separation respectively based on the DLGNet framework [@Olabiyi2019c].
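
A purely illustrative sketch of this serialization is shown below. Only `<turn_sep>` and `<conversation_sep>` are named in the text, so the per-block delimiter tokens here are assumptions; the field contents follow the processed-data example in the table above.

```python
# Illustrative only: the per-block delimiters are assumptions, not the paper's exact tokens.
def serialize_turn(user, entities, domains, slots, api_action, api_results,
                   dlg_action, template, response):
    blocks = [("<usr>", user), ("<ent>", entities), ("<dom>", domains), ("<slt>", slots),
              ("<api>", api_action), ("<res>", api_results), ("<act>", dlg_action),
              ("<tpl>", template), ("<sys>", response)]
    return " ".join(f"{tag} {text}" for tag, text in blocks if text) + " <turn_sep>"

turn = serialize_turn(
    user="Yes, I'll need the number along with the address and postal code.",
    entities="", domains="attraction",
    slots="domain:attraction, area:centre, name:?, type:park",
    api_action="domain:attraction, search-execute",
    api_results="domain:attraction, ['Choice', 1], ['Sample', [...]]",
    dlg_action="domain:attraction, search-noerror-inform:[['Phone', '01223336265']]",
    template="the phone number is [attraction_phone] .",
    response="The phone number is 01223336265.")
```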
In order to obtain the processed data as shown in Table [\[tb:samples_dlgnet\]](#tb:samples_dlgnet){reference-type="ref" reference="tb:samples_dlgnet"}, we adopted a dialogue flow process based on the information flow graph in Fig. [2](#dlgnet_graph){reference-type="ref" reference="dlgnet_graph"}. The model inference follows the same process after training. The procedure for processing the training data and model inference using the dialogue flow is described below:
Given the current turn utterance $U_t$, and the information from the previous dialogue turn $M_{t-1} = \{U_{t-1}, I_{t-1},E_{t-1},\\AE_{t-1},AD_{t-1},D_{t-1},S_{t-1},P_{t-1},AA_{t-1},AR_{t-1},DA_{t-1},\\T_{t-1},R_{t-1}\}$:

- **NLU Functional Blocks**

    1. Predict intent $I_t$ if applicable else skip.

    2. Predict entities $E_t$ if applicable else skip.

- **DST Functional Blocks**\
  Given the NLU predictions:

    1. Predict all entities $AE_t$ if applicable else skip.

    2. Predict all domains $AD_t$ if applicable else skip.

    3. Predict active domains $D_t$ if applicable else skip.

    4. Given the domain, predict plan slots $S_t$ if applicable else skip.

- **POL Functional Blocks**\
  Given the NLU and DST predictions:

    1. Predict plans $P_t$ if applicable else skip.

    2. Given the plan, predict API actions $AA_t$ if applicable else skip.

    3. Given the API actions, obtain API results $AR_t$ if applicable else skip.

    4. Given the plan, predict dialogue actions $DA_t$ if applicable else skip.

- **NLG Functional Blocks**\
  Given NLU, DST and POL predictions:

    1. Predict delexicalized template $T_t$ if applicable else skip.

    2. Predict system response $R_t$.

It is worth mentioning that the introduction of NLU functional blocks, and all entities, domain, plan, and template functional blocks helps to control, verify and explain the system response.
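
The procedure above maps directly onto a single inference loop over the functional blocks; a schematic sketch (the `generate` and `call_api` interfaces are assumed for illustration and are not part of any released code):

```python
def dlgnet_task_turn(model, prev_turn, user_utterance, call_api):
    """One conversation turn through the NLU -> DST -> POL -> NLG functional blocks."""
    ctx = dict(prev_turn, user=user_utterance)
    # NLU functional blocks
    ctx["intent"] = model.generate(ctx, block="intent")
    ctx["entities"] = model.generate(ctx, block="entities")
    # DST functional blocks
    for block in ("all_entities", "all_domains", "domains", "plan_slots"):
        ctx[block] = model.generate(ctx, block=block)
    # POL functional blocks
    ctx["plans"] = model.generate(ctx, block="plans")
    ctx["api_actions"] = model.generate(ctx, block="api_actions")
    ctx["api_results"] = call_api(ctx["api_actions"], ctx["plan_slots"])
    ctx["dialogue_actions"] = model.generate(ctx, block="dialogue_actions")
    # NLG functional blocks
    ctx["template"] = model.generate(ctx, block="template")
    ctx["response"] = model.generate(ctx, block="response")
    return ctx
```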
For training the DLGNet-Task model, we finetune a pretrained GPT-2 model on the modified MultiWOZ dataset using the input representation described above. In our experiments, due to the use of turn and conversation separations, we only use the autoregressive language modeling objective for joint distribution modeling with random informative padding [@Radford2019; @Olabiyi2019c], i.e.,

$$\begin{align}
L_{DLGNet}\big(x_1,x_2,\cdots,x_n\big) = \sum_{i} \log P_{\theta}\big(x_i |x_1, \cdots, x_{i-1}\big)
\label{eq:dlgnet}
\end{align}$$
using the Adaptive Moment Estimation (Adam) stochastic gradient descent algorithm with a learning rate of 0.0001 and a maximum sequence length of 1024. Due to GPU memory limitations, we use a batch size of 2 and accumulate gradients over 5 iterations, making the effective batch size 10. The models are trained until the training perplexity on the dialogue datasets reaches a steady state.
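
For concreteness, a minimal fine-tuning loop matching the stated hyperparameters (Adam at 1e-4, per-step batch size 2, gradients accumulated over 5 iterations) might look as follows; it assumes a Hugging Face-style causal LM that returns the LM loss when `labels` are provided:

```python
import torch
from torch.optim import Adam

def finetune(model, batches, accum_steps=5, lr=1e-4):
    """Autoregressive LM objective with gradient accumulation (effective batch size 10)."""
    opt = Adam(model.parameters(), lr=lr)
    model.train()
    opt.zero_grad()
    for step, batch in enumerate(batches):        # batch: LongTensor of shape [2, <=1024]
        loss = model(batch, labels=batch).loss    # negative log-likelihood of the objective above
        (loss / accum_steps).backward()           # scale so accumulated gradients average
        if (step + 1) % accum_steps == 0:
            opt.step()
            opt.zero_grad()
```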
::::: table*
:::: center
::: small
| **Model** | $\uparrow$ **Inform** | $\uparrow$ **Success** | $\uparrow$ **BLEU** | $\uparrow$ **Combined** |
|---|---|---|---|---|
| Baseline [@Budzianowski2018] | 71.29 | 60.94 | 18.80 | 84.93 |
| GPT-2 [@Budzianowski2019] | 70.96 | 61.36 | 19.05 | 85.21 |
| Structured Fusion [@Mehri2019] | 82.70 | 72.10 | 16.34 | 93.74 |
| SOLOIST [@Peng2020] | 89.60 | 79.30 | 18.03 | 102.49 |
| DSTC8 Track 1 Winner [@Ham2020] | 77.00 | 69.70 | 16.11 | 89.46 |
| DLGNet-Task (Ours) | 75.15 | 57.31 | 18.34 | 87.52 |
:::
::::
:::::

::::: table*
:::: center
::: small
| **Model** | $\uparrow$ **Inform** | $\uparrow$ **Success** | $\uparrow$ **BLEU** | $\uparrow$ **Combined** |
|---|---|---|---|---|
| Structured Fusion [@Mehri2019] | 73.80 | 58.60 | 16.90 | 83.10 |
| SOLOIST [@Peng2020] | 85.50 | 72.90 | 16.54 | 95.74 |
| SimpleTOD [@Hosseini-Asl2020] | 84.40 | 70.10 | 15.01 | 92.26 |
| DSTC8 Track 1 Winner [@Ham2020] | 73.00 | 62.40 | 16.00 | 83.50 |
| DLGNet-Task (Ours) | 72.65 | 56.81 | 15.40 | 80.13 |
:::
::::
:::::
2102.11582/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
<mxfile host="app.diagrams.net" modified="2021-10-14T18:37:15.611Z" agent="5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36" etag="AYtRUm6B8-o1tgD-OGXV" version="14.4.2"><diagram id="yjnzEJNiv_YyZutVGtOi" name="Page-1">3bzZsqtIti34NWlW9XDCQAIEj/Qg+h70co1OgOh70Ndfd2ntyIjMOFXnWmXeOlXaW0vIAcfx2Y0xp6O/Xdn2EKd4KLU+y5u/XZDs+NuV+9vlckEwEnzAlvPbgqLo9dtSTFX20/b3Bqd65z+NyE/rWmX5/KcDl75vlmr4c2Pad12eLn9qi6ep3/982LNv/nzVIS5+roj8vcFJ4yb/p8OCKlvKbyuJ/+FoKa+K8teVUeRnTxv/OvinYS7jrN//0HTl/3Zlp75fvlvtweYNnL1f8/I9T/hP9v4+sCnvlv/KCVbByMbxP7Ts+fQccy7cJ97/B/7tZYub9eeG/3YhGtAf8+xBt2DUy/kzFcS49r92/Mf8ERQNDkDJ4fj7TrBV/Hx+ekl+NWi67Li/WsEgk388ErR9L/mr+fKnq1+W/IDt5dI2oAEFm3FTFR3YTsEE5BNo2PJpqYDo6J8dbZVl8HRmysF44+TTFQK+D33VLR/FwJm/4Rzsa1367z19uoYjYfumnz7Xvl5YmhKE30f1xzn/EQO8dH78oelHBmLet/kyneCQXzZxQX+jsO9ZP0ZxuRLf7/vfNez2ozXlH5TrlybFPzpd/N7538UONn4k/7+gBcS/Rgs+wvwHRaDbpCrWfp3/4x814L+XtJ2fa//+/Wc3+c/agF4xiuD/NdpwJajfqD++/lEz0H/SDJT8C9W4/rtUg/w3OQghnsuq735pBfJ/GD33f/7/QTuez9sTyf812oHj1G8E/n+rERfkf6dGUP+pRvyTT/9/piJcNS3n3xWk+r/Qj78KK/9v6kwxxVmV/4NewNdfeBPhdmOwf1FsIdHfiNsfvMk/6M4vPfmjN8H/dwYa9K8izT8ICuCkAW6m69SczBSndQ4Ft5fVkjtDnMJ9O0Cbfxbm1K9dlme/f1viBbgX8JWCApuXqa9/B3BX2PIVFvIbBtHcDcU+f683gvj98D8CAIa6csy/GRjgl98u1J8EhhL4b+jtn2RG/fbLCfxRaCiF/Iaj/ybB/dKI/4rgqvaDqX83JzVO8sYE5vEjkaRflr4FBzRwBwMEXHyk91fG8o8mufRQ7vE8fLH+szqgzJnPJelfrcivFrCdxUsMnMz360UYuuJvF7byGcPeEUUsehq8dMcrea8AWz78SqcsHcHPK5NnBNwwQlsIJNtNLg8kuwjnw2L0KLT3RATadcEROhGbSpYeTdrpQ3LB3upLW7VzL2LJRlJJI9STWn6dAI9VW+p8AKkIcnPLU3gJNmTkINTAFlV+ugQvM7LE3iSyAWw3mdfwlm9jnYFS+aN00QwQKeH+TsEH8yYeZ3w+xEfyEO2HENIMU+1bMCWyJ7CvYhvv/afT0Q6e8vSoCELrz10bF9/vh1jpbU9ghqj06ECko9AQH4+C0nD9fu+WNRlUoILMxczT2/uuavWcZODKV/99Mz+d1pf0PBO0W8FRGfnuUqq2aCtl+DC1JXAk+mDkO+iXGQT5biTvB05RQhNPgk2YDEJ0PJgIhiR5VaP2N2cxNP/pluVVcHLMW2MTxsP8TndO1ntdH7e168fzMqLYKkoS8pDsPQU9gIO38gX+mhwHdEiw4spznFkEm+yRPQWUDMPj8cQ+nevhlchY3qmssrIV3ZbtUWerJZiM8tw8qxJ9hmKPdBWEIQ5653abBlVhyU3AIl08IxJe7PUqlm0R1kT7dNm6Jphk4aErgyXTUZ5obaqKL3sWypbnR98awV6DN58H27qPuaUZOirrtp3ALArkfpvAyJgVOACGC90jDHf9vVTfGZbeFFBexrzdblgoHnVU8nQhJ1OWzPq2FMtzEp4ggDIbxcWXakAyyZILj6FTrkj3uXXvWJAkN+JxHcgaSx3y02nY3fBHLK5TjclMr7HHlWffYChwS2A1pvCsUbR2SUzBxXhGAbtCBbVDf1+FMvK420sfBqqZhddjveqv5tMpGMTNn9C9HxtnepjbtQMawKgC9GLC9f7wFSb2HSWhZ9AM5b5V52FVns2G4It1W0HvjuI7RSBG4AxcqxWgNYz8+BrFOoERMtcQKZHSGbrrDr49twPDshNPkdYn8zhJ9omJBA7uCk8WfvDWTYpvmGQRzOOp3sOClEaR2vxrTxiF9en4smxJmSWBP0tDRg/5hKZgUkyWOSgSUagLj7VBrYYqnk8XPIxvzAMq2h2zWTmVOnJvH+2j5sSKPO11Cz9dTkfeuji5Alv0lNLLDlJlrVbyx8Y7UKgokVARlX+j5+oUuLLlpFd5AXpo9KNS0fZo4u47n64XoCXLSHy6xCUZ15leX4R+afi+uTPAQsnlFdgPZbDv52rvY2P1YrEL8WRHRGfh8PNsrf1s40MVL9YpAkUUPsoIOhRy+zTsSgSyrgg/TfJCG+bbFkZ3a2m8sfTGtgAejmkLf1xLj62I21MvipaPTsM/oaBohb4cYoGIrfb8dHlV7w1LHi1wfwF7SYxOrnFongnf8ObaIGPTcv4kgNlRb+B9KjZrXBKgasXt4p1ibDvK563airR/zBoaAKYyskiQpBdpoqvwr7qGymOkMKpdBKyR77JivIrbbvLUMhs22Bl1Wd3yhIJ1dW4OpKlqa9WPckUkopLAno9P1zkx+TG4S7F777uV0oVI7465ZKGP6CuFGCMVxIh+i+/7nfoMh7lcteAaL+otJTsyrs/MPGaRY67S5SZ06afTVNN0nraYHsnNY5AbZ394Sfe46HWsi32gi++8Wqc+Dm7ZCiwK1f14DjhBVp+k/RCVU6VX3e6NtGN27yuoI33p7zS8Ox6YP6akx4fWRkppYRk7+Cra+oXMAq3Nh1K7l5Vury8UYAMgMfCGdze/mnSaMEdYvx06leexRYjk3VXD1nPSEIVctbYCdq8tLAJVSOwPflCcem9JZG+9twWUEGqO7yBmeFleBxs5o8/Gbf9VpqpfVc9/BOX81lzRboBzQRbGfgR+ZxGhY52Bw20V0FtUBE5dWDsH2vL1it2W9pqUk969CWWdihsTMWz36dIdrmz+nXWqxM+0E28q/C4Kwsm6p83ebY3eOQYJu/VZZtNWUtAzQ3RHPl9kVVsvFnOZqClCdS4qacehHwMdD34FXQdX03fukvsoigoRUDGhCl6OoFs23cdRJ73c9wvMEiLTovjwa3pXjscM0CfGS4T4aK4PwTguUCO+QeWWJEm03oZlOmplvPQjzwyx00RN7cOLvfDLbdnvE/VMMOhMr9du3pb2BoAPGIj3Pj7KzCzF7Fd7ks2XUFLvn47vrsg5N6Y+bdVDs7CckmdONAbuEZlxOJfrVZY/NvagRyp
03lhqSjtG54Zyv9+YATnj1OxQArsjVhHw0adL5e4rULgyp715HLgvl444pkDj0lk3866Uml52XHnQy3pJUgOO/zlbLG1ZXJ+6TbXaDsuGJyFYyp0NPl02BXDtvU/LPAMxiURjLzW+6B1wt1OPyTgvB/dY8MdFyLuegNsuGwO4Dm5ZUBiH/WzpjiI7VeTmd9z/iqr346nPnlP9iINyATbAj4EJQi7z6gmNQ4LCKyrOMpYpvjTMM/z6/T3I8YOXGbZ4UP7jumnOoxliEUsN6dPpflEOiKrQoECqaSSMcidzhr6052WYvV20pQHothOZ4XyhpJ6yeyQuvXnbbKjPjaUwbmHQBEBMr29kSpkif+1p/LLiRIdgWwQANkMJj7Qa63qVaK5f+P4ergZGlNdQaTy6oLP++qpwCfgncD/CaxzXle7Z2jQm/Fc0fdYwOFuMIiXvyJ+9mptJHOApueTlBYQC6dX3Ub4ny4UopQD6Pay+du6uVmTATv56RTq+uOLmxB3YN+5X92XbXk3ZHbJDyLLAvgEIEzgZByZyuaXm5ZZtN9wjbWyPimc4oaReaAbXNChlXDQYznvVQQRxRMXqa5q/xsregYXeLa+Rz1j0Eo5OPh3uADPRO8tblZRm4BWo5S4HH2dOdyDOXJMBZWWfvaj24JHyCRT28emyGLI8L7oGz32k3DVCwg/sQKNe4NiHK5sBtBZOqjxNfovWKbseKjw7MHOMXdyy6jznmzZgoFXPJb6yzeLTqcESA0lfihRTgbFMNeicARssEbrA7SchipldHtlsbxe+C3QABMZwvxQIDHqjYI06ZyGItN7kYz+/PqpU7qUyQaw6eXEcNMOpbap1EUo5ApH8BcyWoUfayjnLLQAiA0bkAKfcVVQ47Ldsx4FFB4iqKCdTKN+5LA37PkgVt0es9qKPaPIf4B2Ntd8ufmn5AsQciLgB0awLYQDNDp+yeztkZXrXla+SmuW6JDdRsfL62tCude7wpiFluF30yzvfmBrV3RAG9z59PstXGHbL8tpt41J7tcorebTpnMZU9cFefGcWDunuO4wn0jBUfzqlzlg9H1pnnbHrZUG2yAF5P0FMq2xb658A+q6hIIsKCPOhEU5IbELwRkEy0XVkkHAgHCsd13Vh2nL37RtEGo+VOHsVdQrHiv22G18VIwsc3AFM8MTBvdFKENKqAcRCBWC6nQtDSdO7bluN7IApMoDrXj2ZjF3X9V8nQlHHm8MAePDk57zAGOHextzmgNOTaKJ7AsMRJipVNx8zafWF48Pyknpbu8/GHCSGIy8ciFTkE9giYxmZ8aPywlx3HdRCjMx1jMr8cA5MiWsOisqAgUtm914lAL3JbDEJ2XEBIq80j50/4lv8xqi4IuZqfmz8e1h98Yh7GwxuYZWqHpSEiVSB7gXBZudsWygGMLZ6DiVWmiZlnwMFDTT65KjtcVnUAD3i8k06bMI5QrH4o/7p8K0dmsCkYUSekZa8zrNaSn2Y7ZcHtBz6czopIZ+pi4Zch7gmazvZMq73WeBZa2sMoF+mPY9DVALbouFHTEOvM9MnjuFj2wAQ8hkn/HQbydktuTk/EbYJJhTLNDS/WZdlUt6Vsd/KA7uOFCDJbx4EEJZ9q197XzWMvqOOgNdAoBkwS985fYs0scJrhzFuOwtqwPQ+7HqMyFxbw1arj8YljYzGBTLWxIexS5PgOQ/8y5yAn2Ypo3PJDL1eKelSknoZQJqGTei4HKt7r2/CjbpgY52mqvBCnJHf250cYfBhahpdkgAbwLl127/mH7+k7el8XtQeUfi7gRR7REaaeweowIMm9brdkueAX300y4CDvhdwGtxx9tv1enTdQVkmYz6faXEJ1aaQvsRcUgGK2fHiHKyBMrfmxJ8vSAfXzb482nuJ0vLAt/u4bls/JsnU39t3th0jnq1EpajDU60BA2Fj5r2P+BeDruHdgvcgW9Mp25Q/Dr6svgy1TNW1RAwICreSNIcdT/cofxGZybCj7dzZR2u1VAgnCaEkC5A/QSln96vyp0tr96A/6xRttat3gbTWYHZj1a9rOHroHPUNUCO7xVeV3O5nKE3D4LMTmnCPRCzXpE8kDJOSmxd9Mydi76dqnQV+mTgIpZUsraRu9Ywvy1As1MaQ7zN7Yy27x6atsIAs0DZDa9mBccVq9xoIjnqfQAzW3YG6C/anUyQSAT59KO9eK8toUvm71qXW6eBILGPb64Bqy0FGmr7UE9EhSAM6MvmD94YhjQZ+fnEuMD+jkQCnNBD3BtOnazSzEB7mFNyHXYhQS3tKouMpqLYPgY9ccoWJkB0ACQqTS59zFaYSL17N14A2B/xpIVHZLaY71JfXsziPX9GzIlbo6UoSQPjOnd31miw33ZQQR6GfT5NmkqlrhIKjURSC3FcOXuetTIbRriPpdruRT1plcGDKTFwbSHb5dMwTvlNZaZ4rnES8gWNsmEdgtY3XNwAQJAychedLhx+4m6koJVuAnTyKN9CLWtsaWqY/ikkszqBl6vKNT9YcpprnQ1gL0HCqAYqAUsoGO8FBABYkkoQuBWZ/tMd5UzFynjfPZyxaYd2rHm8hzDTQXecIuShbe0WrlhO7n86tuAFsdPT3lrhiFFtaqOO7YQK9wRiweyAwlmcppt0DcCn0Ak3L4M0TXY3fLwBGVnXPBCPi3/L3/PX97L7YnsDSzRAFKbMD30tKJWnY6V1rRXgwFj4SOuE0BSBBRG8eThC3+FIu1GxzfQ5IcANoMaDAABN8OuT6J5Pah9G9zjuxDuMCRrKv5iEDQziQotiiywuEhrTynL5hIpnRmEMCI2fRKiiBG7c7lDKzKbns6Xc+4zP+JrAwO62MLAtp6jGcgFu9iCf3HoEznkOVjCiKertDfNoykjl75eVZdR/q6Y51Y+rkrOX0Fta13iF+OpXPMmpmL8uok/HjJbgHkGA82qhd4tQJLgOUEMauDz5AXlca41Q/7hI2z81lgHAccH/+CJTS5km5UW3he+sAayHEQ98p0ONt8fHbgraB3z5WhSAIjEO4lm7fj2XfbpAtH7oxl/zLbrjO8gByImWDH49kIfCkxM0vFMcQfhcBPtcu6R5T0dzc4yDa1Pp8vtTwM0RaqBDAm2nJD9844F+b1L0Vqf3J+U0TQA8N2+yrj2cTAXNKX1MFRIeZkq1Dbxvc8mLOjbZwvt7rM7I4aoqDjkH97OkGl63Y2/PETUCnRn1vAZh05qm69ghU6WlU3tYVepf8G/xCueGLzUUyen9ohYOvSU3T6jN5QUx/33QG8DkrRxFblku3nJ26lU/gnN0a+zJ6gbHp+2CihNZ+Q7T3gITJJlUfUuzQY8NkRNkasLY9agDfgKQFgl0TZiNL2ighvULzRUqv8gHc2Q02mNIrFo+5hAPd7LcVu9n0hZMWWSkhT1acXLzegPD0I1vQIOj2IKQygHxItVfwxsu5zTw9B62aRG5WBnkzEOmL7zsaWrRZ3qHJ4k6SfTN9YXhFUVcyNp8GOGnmn41rH+HDAUScAZjEn/puKL0qqGLGGR3gRhrArR/8CZ3j+sFMSBkv6jYp5z
wYPfcdKYByPorjjAPBz+tyaZfX2RNPhgDx3ykGxRPWpML8dJoSTl19iyYzo3ao5L1cbhrz/KaDpnBghrsvsl+cqiFvYO4S52SXa5xpbVxv6XIhny60AOYVxSID04rwzG3JUPl2kp1CUvpdJc0GHmKqqopr43amEUc/RnplvvTsRwXhfyfKn0/XDlfSHnecujgWILuJ8El110ZuKTmVQ4aem89P4MkqInPWQPVXEaaZKbUjbzULWeAq7N/AeBdkRhopaQZDx552TLvq3JZbQuaG5G59UbEiFTi1BmgqG6MBiz1KmOLEnibOgwiG6JJNA1fqLP43OyEvnnO7+kfqStrjpUPw3Cvlqdstj4irBLF5OC78zuqe46gdzp6f1EZbV8LRcrxWkIloDzuRWT2EOd9R5tzbo4ZGM0mvuHZUBRPvIbxBr7w7niDaRSxyTMs64B+8/6exB/Eb8ZDp8pRmlXRaseMjTwRwobFGIVW/bFrjrJhLlPdMSREIie/bGYQhrz7rveCiWhOf/T7L6uOIRhkQuQlR7VGPNrdHvPbaulVzmtrLZqeclssvrXATQm1QZ1vzOC71gz8r7ZMDfNk9TGGozHxxVa19eSrkizB154PXqd2MsiY6wNGu16ItPIx9gxD0zaAY4b1u3pGd8Wth857O0HeM1Hgpv6AFiMii9pLPVG9gbY2RzhOiiU29240HRHA1Fsp0xDww7iA00fTgKz+ZHm8NYGBbGgoFULjkzO1yszQ6ZU7JPnD8LgoRP4gnwwu06yGA9Ezj6N7CEbXGNA9gRBYSbPMBtl3Or5amb+HEtarmwIALfq/KVih0ZiyRoOxV0QK+1/pRft7UuegilyCK7db7/lYfFesLVkPwo1y/S7foQAj0P50+OPlRFljG9iZT8EVNQ/775PBNt+fOhfhKAtSKQ9YXbxPRGskwgQuF6t9dDxfDZs8Hnx6fb+wJWLv1raBIbu8FfvOAyYAYqITd6i8Z3VcBkIbqLbjUPYxg2hsVe3Sx2/gdxB4I6pXuej4UWywOI2AsVQEi9pdQNMoAOd4g8lAnz94iS3FqPfLHaG3EKJGUqzMtvtMRTOygQjJ/UohlOrV9EGWzpgt9IyfvyvfX12p+ywVX/dQgr4JZsvV1Iy4rjvF32TEwkahv7P1xL+SBpbbt6QXzZLyxML6NabRnRmnhhlv36kiIS/cAzDr+8tNupACkNahniIrcYxZ1NMIKKiUyXoQVEIBz+lmUOGYpZ70+Y53u9Hf6Sct6BbHZMcu6Z16tHM18M6dvnBCcMWTLAknqcmPMXrHeCMMN92gNvcnHw+uNqjwfuTXt256IK5hOypnZ3djQaDXFl1/tES2SkJUK3zFeqJnJOLxOY+4N/akWdufDeD9m4mlmoSNBG19UC3mIfiO7EF10xPhMpjS53B5LcdIyyktW502uFJXf2N/UZXUIvFd5djH7r35v7YdQxy/ejycBaEvxGFpRjhl3TLn9zcmnVtJ5NxBvvsaukYi88+e7j4OGAeL/1vVyBpeq2GU9X7C9UnUQreB5RgGx40w7pfAAnvR9T2jIGbwX4YLM6V26MIBxAybCWUB+L8QrXFYZv45jLFnoIS/twRslUccwUJhkwWOLU8a8ZZpvEhUArC8oEvh808aDEffbIb+mq20KRxp+s+ae62S1FbzbT6cHjWudtj1hvCKpB6yF5VwCLQimOLZYJK0ShnRl7wesaLYKzNwg69x8M4j6kEMdv+nVgUltTvU1ltJkatDs14HYtMLtFvtw6ZjbPUa3ubf5GULYg7gm0akpre/qZoyBpedzEOlCdQAWED5eEOFTknE8cBST820kJpjGhWlt9Jb+SsLT3ORfCvLrMBhGeh24PnY6VaRNvXfX6zV6Kmfd07vm3HmYBX13kEdq1ZFut0bDXTh/ILQG80j8zHBU76YZSDlAdRjhQeNzNrKLbI7a47tQ3QClucSpGa7BuNZnax1vnLwJl8mf4vClDTjEmexo2obC0Al7x394v6UMStzKFUBcCNAFSMXpStqpCo2PfBzHoaKH5z2qYebDjHCxBxBYElxdHpYhaPoYpoZgSsPNkV9M9SoDqMLeGYWBSWwP1VxyU88a4FUG1nZDWM/QkHpv6xdiVam6ZYZ0A5bik4OSd/vFOvcA/CVQ8Hf/0jProHlUD/0+bgfWJ7YGxSZqfF3KjONl98HfHLtlGDnI8skb7fKWvIE3ownk+fbx6gWlogcbiBgtLrrEtwD3wvs3uZlvJFMdvtNM1p9PkX5H2xWK7QN+Wc1w6LuoMRVvg/sxmFN02fj19MXB50rgF97Uq7WvVBx/Z7O55zBTCFCL42r51WvLT8JaumjA91Vk0Wn+IXHedMnRS1wPAIkfH6uEYVBlIjVksZAKWyvVQGzq/G+2p4flfRIHkyYct2nbokWpnRBgviAv/XE94sUbb/rl1qC4C4txkLa15EWsB9m5SkDVAjpiqszsZhdwke32peeqUM/h64hgiqO0AmG+t9C8JGD3KwxLzwTYGYNa4nlTIiMECDzqOx6zvwA7D0LIGq7xklmBdv02gr+KW5nXlf1eAjKMJYIBr8lt/KdoYzxnKZz89f3cDgq4Ht8aqdCKp4A65qgtXpWuneTGEbr0whIh6gWYWvwUbZ776eRf0UHnCxCLXqxPvJVmV32CWEyfyTtKArR51I8gKAFe1pyqyF9YHljh4+jWbOCmpx/3vjN6VsHLnIQhQvYF6TbqX23NXoDzzQwsM8PRcyZAeQ8QZoNQfsNIKAXjpZW2NmjKZASQ7dFs8dvErFZvSSsgACjrM2eM3C9XeczmgBCZDUXSkKsAPCp/uvYjue4njiJO9hitznwRWmCKmIxcrzMXzYFpo7rNs1FeP0D4MCRHDiL3u85jSeTZObWTZVvFdff0OPnmbsHUwgYxqcjwi+J0XHXIdX5B8BOfw1o0oZTWty469iKrLIhUs/MEHu9VW3CdD0SmBs20GtrCvCjy5AuxTPEZEzUkEV4Hyjherb5eoUdID5wk6w/OB9I/o/SQ3AbgK4CyNWGabpdgvvPfzASwnWIlVQNmZ8KFCl8MchPWsMGfL3+64gJcyKFjpvnC31fSaMlrMNzuwYQSugfoq2XTvMQ1592Q0AalftVNjDBZr3Sr2wCU8QZ35PSD7/quAPhjz02hSvQLIQBAw49LOJExmT9NIrRvB1/qMEMI4kB06UREA1e4St9lApTYjPzpGK/WeTujaivqkUNjAxeKNgdp7tHkJ7sJk04GzFlxaL5KkIFSUoAW8R3EkOd7z9/AYYFgWHKdXL6/+gnV9Q19lZK3ltzI1+MqfNJvglQ7ujSal/eoFDLft/zReiJdCCN9ahdd5UHUgUkriDejxHyMJvIG0fzL+iD9hGs4LBTNJJgXgUoJZZ9X7A3bkDY90NLk5L7I6TQHapV6YUi6cu0YNuCFCg5jgCDBAjKW2LHsfh0MHskFKyAny+ca0KK33rqufFkeAGwz2LtU5srRHIcID+LlHxqm01ooHAniKHxTCj11dw+4akqAfBojkFf9JRIiCP6rCdXl+clWIkGLhFYkl
KLy8uO6qPnBCRtSb2cCsHGbD6rdi1/WI5+OJ1fulBQ2WB63yxS9ky8xGyS1zHeT06QXQcQpk9Fxfq915fTYxmulARmDCNZvG4J7oCh+41KSJBHIJ87uWhRW2vPPKEh3cGC5fB3J0+QF7hl8i9ORFujiJwWysO/b7cZpqXnF0gZFL2J0FW4wbMk0wWIr8wrDMM3ebo/nm9Ur1TzFbmHK6R/qfOMwlh5dhGrzprfuetNElWfZaalzZq3H/p7JEQYAObrnwr4bDfKtn42oV6RqELN3Sx+kgaed+U44X+kXyhWEww6A2Q2AfbhtYATwURcNrt4RtsSn7s/Z/izfUyx+BOoHFzaNyjpZCKpz2bHqaNBgFxB74f0mX0GxAHm6WhkhjhyZV4J6FEe0R+DM7OvT8+WJYdQXybAajenDNGHh/aHIaje+JoarccN3YxbzWYq1hheZfbX1aU88V5fsq3/xDllphywqEULdaLIWibNTHfNbyBcEqgoq7oXqcykRlqFtD7iignwjxuI32tC0/iMff2DbRcGyl5eAsD+UOjOU2tAV2bTa9UNkjmgOI0qs7KilSOnK9LCQyzhzeMLqhNryMISm4fbc0ptpSjy/M99SkndT8+e2LZbV5uZVqhgay4zFfVGXLZzmEIbE0mcHEKxE5vW+EayxBZO3RuYrjWANk0C11ocqZDmjL3+XjXgdU9K1SwAjmQBNhdmRaZooyUN514c6iSsP3jmN8I1pdyCqlTnXnIPOhejGHrBULKQxRAfQZDLfsBrzxVb0LsJleIQEsSqgy3DNkf7uH8aoU2TBVVrbs4XF1lpTOzxl2tfHRV+ApyCFQ6KvdyIiXcXeje+CIeEqH7vxU+cfUWSsrHbo444b0GpNgup2haTGkcacbQeDLGttBSxmz2BIjFaJf0ndNs+P14ZQpjvGhnjIpviEuvitoMd6ZB1Xy1EeFz7eW9bSEB+X5nMhLcN48qWRIWVYVg7gX55dv9ltLtO7aN8/FaUUTqg9lyBU4+mZm/b8ddpdRLTkXbZUTm7vPP7oKcnC6bESghoR2/4muTBkPO8pwdROoWqbWHK1/qgi/ZLWpASrTIEKsYnxSNcAA77mOwG31+02zZLPYg8IT1XW6veHhj2UN2uHl7ZAovUJIppzpSjdNMlgWSiM6LnpFbl0RGrY7XbhmKLwM5arv3xAP/JEb9HPwh+YhwiAGkL3MpeBC6GPB4aEpZCkmwxmvj955hdMty6vJXv1OFMoG4ptb1iqENwdMfnfFztY0ICFPQUAF94JQD9tIfdmHS2BzQwagEZelT4+HGzPOQgyW8ZlH1zWRZdJiCaxnBMi5VvYWb15jv+1Lr5mbM8Waq2sY7EcLB99hkcgejc5sjgc/eRTgUO7mOHUXCFKNE1zvb5NDSAyiSvokQqU31N937rGhw1RB0Z/LU1JTTqtK2K+iW5aGZH9igCnUgLNKm0dc73G6wUDUNN6bAB+PVtfI7oIcxLABwX5rZ1a5aSCMfocGT++bmYfXdlg1BME/saQklnkQYSvw867kMXF89oa6+KUQ4BteQVcNalz0Oc2CDJaKvbSZRcCdgmJdQn7cFDoaR95KFy10/f2bEuigg4IivJjokftwC9uIwEXuICz9tOQb9w5eFP9IHJgNYZ+vqQNsFhf5BBC5+GKsG/syozCotZP7oeg393Lg5P3psTLcvksGVqL+kxpVTgxXn3mebUChzQMcQcAJeVHs0gzZs5GqsYyjj47lWV/R7rZN8Cdgz1gj0Kko0u7XyQOulWRyIxufIK4zPAMdy+ZV3O43QuXtuIgdfpg8NtnJRgHXWjbi0Ol4Yz/DYob8D6lH0/w/Xry+UBAnnfOL2W4kS60bSFHNsToaJHDSUNVznmmd+4+QikxAmJMm86tvR5NQfRWt6+Pxdr7fElkgetTbgCcp+E5gYiS53ObF+2lvbUgVARmv2X9O1CchzlXNdPfV0kU78McE+trnJa7cMovxfvOKFBPbNaqt84UGlzG0BtCf32ImsiNmbomNN57KmQcSlkeBPsOgPKHHuraOgq0AK4V2ltv79WTkvKw/tI3rvwkujl/QlcWAADileS7/lDOQG1Q5Q2IpUPbIJQgyPPRaTrxBLHTj11f54PHQ/B83n6VyI0BHJtVmu/yQXZCAQP5yQO8XmLqIgVdeUdrEybzfmaZbhTd64CWuh34OKtm9/7UJz9lIYyBJdWtKCAdJ/AMATiB5qYvheMVKkcz1wOwRRFcu8lU5kAQ5ErnlZatxR0EkOjAyOeBA68AMUHG4aIHGe+YGl0/Kmze15ICV7YkjixSP6pPYpScyboh7CJLhCx51d+wLu3bp5s7wuskQ5pGbb53jf2zMP8Sp/NJKJelfICp+aw0DneOq7wqQqZvYfEOU1xtNs1ZOMLdZc8JEhpG/LUHyKyiQEPnILSMJcCfzCYslB6fZXqaM+D0JKYJzV+ud1jceVm+XIrR44uxHGRWoTujdU5/eWBaxY/LqY066B9lfVecYgMTChk5Fkga6Sj99Uy659z6KiytAM/yVAfHUTRYB7I9/etSaLjgFkE43C0MwBUdVVvYgdJKgZ6YmHPZTy3YZWnLseCYKCp7cgmXuDThde5j5LH1XokjScq0QmEY9hXXyGSrrAytPJ+RZum+74cRXM9ZaDORm3TBFrNK5j8Ldd9dq7PB7Rph27YV7BvAUVUYkmB/oq16YIlefcsKtK21rlEcnGJQm335npyj2xwCNtXdQit5bzkgXlBpIJxLNNEHPmxVAaZZ7l0yecKpTpO1H5VmsI6AfcG7hr0imI/qXUwjTtUEyuois/zyUAEWnGBCYRQsFhYxSJI2jY6r3Yi49cmNrPmi9ZMCAUhFYWimj8WF++aCEZ5bVhM47+U2RmUhrkmLe8rmK2zRF9rz1Vs1ow21lk9Izp+ZYUsoZTqwGhyvodwC/6NqUJYwsASY8E0yhEx+X3L0PgMoxuk2/riZD00ElIfFRrj2sX2HLNHd2cO4IofupsGdP/M+049sVrHIloMACPjAkd5y3OeXEKeOZaDYwMsv9yhlq6KsGLI888CMuC2HSAVqI9FEPcS+jZ/m4CICQ8MaDLhcU+pbiFK+o99J6fLgjG8hdBwnFhZjR0a/WbRDPEgyx41nF30zLiEabM+exRak0YeK6acA1Z0IciqIG3XVXy+YCRThU9FaLF+wvkigNw0E2tY9UbGfVH1bPaFUkp5bLb5L4LLSN0mmeT/GHUPmpgRhJayeBUNp2VXmwTVjCiBod5d8WKEKAwYxfWvYDutk97cmAGFiIYtjmw9uGjg9xn88gqhVquLzBEUtOaovYtW9qCsSVoN06bwDAxEe/cLnhmPbN+abwUSdgYRrv1j5yY3QY2TmUPLxq/cnrr381BpS8k6aGyVy0AGaPIhjg78Xt62u661XfDcrwqqu7FaMaekLKl5vasGez1tsFy5iT3IdZjR+08jns0cSa8apW9Ql+rd7KXw+n24KtRTQo6E6Ju9aQc6/EQZ8mMC4a372zWOWsHVdIZz9rJubplfF1lXjlYXGQP46Ou0wqxtFGXj0
Vh6E0X2QZ4dT6lFqdXanC8ka7VblEOTBy/wXaaOK5AoHZ8KEGLmNSjuImeNShs4INZg8O8zKB2o8cEBeebvh6JgIjqPqKzunTCIqtKpKACnu94yDRahvcnRsvH1F4epqBi7UKnfMkikJdeer0cBkC1IAKtG+HzdhQk8ictk4NaXayUIfPmARqvLkE5RYw1SOWDsP7etcotmS8z7h4VI9GTgo+EQFfItlH5Nm91nQZ6IoxNJm7C1by6E3fQiLtZ9zOc9D4PKOtbd6J9i7NPl5gKR6OFE0F2+PaDxDuhL63ZdPclLy9upqgK8Xt+aNI8rgG8rr9cZYXtwS10rqSsV5ag3SjjnGtBDot5V8jeDeEkdglKIVYeSdaeoHSzjkTsg3N/XYRezlC23tA0wBSXBtlVAdUR2yUp5W0Sy9TRGul4OMA1+h74RP74zfZRHBkyZhmcFMAX+b1uMUrTJhI33qLIK3gmaIZ+m5hdU0ML3JPJfV2z5xTry+328359AXwW5fj0zdAGeXH803HMJKz30w7kcvO6drpVxtlJ5mpXJ91+4A+PJFzBXPGwGtXMCpVsY6j4jJzI0vwFPCdI/F0NudK+Wmcrfn/CXGkSl9clMwgrxg5n1t+KnDEvrOjqIsyVk9O9y+auEzhU9d5CYAH5c64+FqUWbbBCHU4NoV4f4sqWXZjnPdvmhQ/j49IQ98AUvVQLmyohkXOKVCZ0WV4IiB7chOrhy0UV4loMRPBtzjS6TPahgj0cHT9YjS53MpvzO7wRIOdgAxZfngmWYSRZdySfLLW69Y33kBGclFxKADUKnykNfE6KKVY3aKHXylHBSH7wofWnOdpi/e/c5qpm5ZLMIouEwKhqVGpBwRmVcgApUnksAc9c6I2vV2PBCclXf44CBztz2cn+p7URTwgUr4/1/xiCpKIH9+oPj2zw8UE3/1MDr+73qg+Pd7+O/9QPF/9gsS//Sg8b/ytyUw/PLb9fpPDxRjf/VA8YX8iweKSeo3kvp3CQ77/4bgqqb543PJOPz3zwL9vv5SoP/Cx/uvOPqPAr2Qv7f8SZ6/nvz/0y9CUMhvF+LfJc+/+iWh/4by/LPcBOEmIH9piL/v+RfIjcDx3xD0nwzxRv17DRGuhPv916U++/7wI11X/n8C</diagram></mxfile>
2102.11582/main_diagram/main_diagram.pdf
ADDED
Binary file (34.6 kB).
2102.11582/paper_text/intro_method.md
ADDED
@@ -0,0 +1,222 @@
# Introduction

In this section, we review concepts important for quantifying uncertainty.
**Epistemic Uncertainty** at point $x$ is a quantity which is high for a previously unseen $x$, and decreases when $x$ is added to the training set and the model is updated [@kendall2017uncertainties]. This conforms with using mutual information in Bayesian models and deep ensembles [@kirsch2019batchbald] and feature-space density in deterministic models as surrogates for epistemic uncertainty [@postels2020quantifying] as we examine below---and depicted in [13](#subfig:kendall_viz_gmm_density){reference-type="ref+Label" reference="subfig:kendall_viz_gmm_density"} (also §[12.5](#app:kendall_viz){reference-type="ref" reference="app:kendall_viz"}).
**Aleatoric Uncertainty** at point $x$ is a quantity which is high for ambiguous or noisy samples [@kendall2017uncertainties], i.e., if multiple labels were to be observed at $x$, aleatoric uncertainty will be high. It does not decrease with more data---depicted in [14](#subfig:kendall_viz_softmax_entropy){reference-type="ref+Label" reference="subfig:kendall_viz_softmax_entropy"}. Note that aleatoric uncertainty is only meaningful in-distribution, as, by definition, it quantifies the level of ambiguity between the different classes which might be observed for input $x$[^2].

**Bayesian Models** [@neal2012bayesian; @mackay1992bayesian] provide a principled way of measuring uncertainty. Starting with a prior distribution $p(\omega)$ over model parameters $\omega$, they infer a posterior $p(\omega|\mathcal{D})$, given the training data $\mathcal{D}$. The predictive distribution $p(y|x, \mathcal{D})$ for a given input $x$ is computed via marginalisation over the posterior: $p(y|x, \mathcal{D}) = \mathbb{E}_{\omega \sim p(\omega|\mathcal{D})}[p(y|x, \omega)]$. Its predictive entropy $\mathbb{H}[Y|x, \mathcal{D}]$ upper-bounds the epistemic uncertainty, where epistemic uncertainty is quantified as the mutual information $\mathbb{I}[Y; \omega|x, \mathcal{D}]$ (*expected information gain*) between parameters $\omega$ and output $y$ [@Gal2016Uncertainty; @smith2018understanding]: $$\begin{align}
\underbrace{\mathbb{H}[Y|x, \mathcal{D}]}_\text{predictive} = \underbrace{\mathbb{I}[Y; \omega|x, \mathcal{D}]}_\text{epistemic} + \underbrace{\mathbb{E}_{\mathop{\mathrm{p}}( \omega \mathbin{\vert} \mathcal{D} )} [\mathbb{H}[Y|x, \omega]]}_\text{aleatoric (for iD $x$)}.
\label{eq:BALD}
\end{align}$$ Predictive uncertainty will be high whenever either epistemic uncertainty is high, or when aleatoric uncertainty is high. The intractability of exact Bayesian inference in deep learning has led to the development of methods for approximate inference [@hinton1993keeping; @hernandez2015probabilistic; @blundell2015weight; @gal2016dropout]. In practice, however, these methods are either unable to scale to large datasets and model architectures, suffer from low uncertainty quality, or require expensive Monte-Carlo sampling.
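
To make the decomposition concrete, the following sketch computes the three terms of the equation above from $S$ sampled softmax outputs (e.g. ensemble members or MC-dropout passes) for a single input:

```python
import numpy as np

def entropy_decomposition(probs, eps=1e-12):
    """probs: array of shape [S, C], softmax outputs of S posterior/ensemble samples.
    Returns (predictive, epistemic, aleatoric) entropies in nats."""
    mean_p = probs.mean(axis=0)
    predictive = -(mean_p * np.log(mean_p + eps)).sum()            # H[Y|x, D]
    aleatoric = -(probs * np.log(probs + eps)).sum(axis=1).mean()  # E_w[H[Y|x, w]]
    epistemic = predictive - aleatoric                             # I[Y; w|x, D]
    return predictive, epistemic, aleatoric

# Two confident but disagreeing members: high predictive and epistemic, low aleatoric.
print(entropy_decomposition(np.array([[0.99, 0.01], [0.01, 0.99]])))
```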
**Deep Ensembles** are an ensemble of neural networks which average the models' softmax outputs. Uncertainty is then estimated as the entropy of this averaged softmax vector. Despite incurring a high computational overhead at training and test time, Deep Ensembles, along with recent extensions [@smith2018understanding; @wen2020batchensemble; @dusenberry2020efficient] form the state-of-the-art in uncertainty quantification in deep learning.
**Deterministic Models** produce a softmax distribution $p(y|x, \omega)$, and commonly either the *softmax confidence* $\max_c p(y=c|x, \omega)$ or the *softmax entropy* $\mathbb{H}[Y|x, \omega]$ are used as a measure of uncertainty [@hendrycks2016baseline]. Popular approaches to improve these metrics include pre-processing of inputs and post-hoc calibration methods [@liang2017enhancing; @guo2017calibration], alternative objective functions [@lee2017training; @devries2018learning], and exposure to outliers [@hendrycks2018deep]. However, these methods suffer from several shortcomings including failing to perform under distribution shift [@ovadia2019can], requiring significant changes to the training setup, and assuming the availability of OoD samples during training (which many applications do not have access to).
**Feature-Space Distances** [@lee2018simple; @van2020simple; @liu2020simple] and **Feature-Space Density** [@postels2020quantifying; @liu2020energy] offer a different approach for estimating uncertainty in deterministic models. Following the definition above, epistemic uncertainty must decrease when previously unseen samples are added to the training set, and feature-space distance and density methods realise this by estimating distance or density, respectively, to training data in the feature space---see again [13](#subfig:kendall_viz_gmm_density){reference-type="ref+Label" reference="subfig:kendall_viz_gmm_density"}. A previously unseen point with high distance (low density), once added to the training data, will have low distance (high density). Hence, they can be used as a proxy for epistemic uncertainty---under important assumptions about the feature space as detailed below. None of these methods, however, is competitive with the state-of-the-art, Deep Ensembles, in uncertainty quantification, potentially for the reasons discussed next.
**Feature Collapse** [@van2020simple] is one reason why distance and density estimation in the feature space may fail to capture epistemic uncertainty out of the box: feature extractors might map the features of OoD inputs to iD regions in the feature space [cf. @van2021improving, Figure 2].

**Smoothness & Sensitivity** can be encouraged to prevent feature collapse by subjecting the feature extractor $f_\theta$, with parameters $\theta$ to a *bi-Lipschitz constraint*: $$\begin{equation*}
\label{eq:bi-lipschitz}
K_L \; d_I(\mathbf{\mathrm{x}}_1, \mathbf{\mathrm{x}}_2) \leq d_F(f_\theta(\mathbf{\mathrm{x}}_1), f_\theta(\mathbf{\mathrm{x}}_2)) \leq K_U \; d_I(\mathbf{\mathrm{x}}_1, \mathbf{\mathrm{x}}_2),
\end{equation*}$$ for all inputs, $\mathbf{\mathrm{x}}_1$ and $\mathbf{\mathrm{x}}_2$, where $d_I$ and $d_F$ denote metrics for the input and feature space respectively, and $K_L$ and $K_U$ the lower and upper Lipschitz constants [@liu2020simple]. The lower bound ensures *sensitivity* to distances in the input space, and the upper bound ensures *smoothness* in the features, preventing them from becoming too sensitive to input variations, which, otherwise, can lead to poor generalisation and loss of robustness [@van2020simple]. Methods of encouraging bi-Lipschitzness include: **i)** gradient penalty, by applying a two-sided penalty to the L2 norm of the Jacobian [@gulrajani2017improved], and **ii)** spectral normalisation [@miyato2018spectral] in models with residual connections, like ResNets [@he2016deep]. @smith2021convolutional provide an in-depth analysis which supports that spectral normalisation leads to bi-Lipschitzness. Compared to the Jacobian gradient penalty used in [@van2020simple], spectral normalisation is significantly faster and has more stable training dynamics. Additionally, using a gradient penalty with residual connections leads to difficulties, as discussed in [@liu2020simple].
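
As an illustration of option **ii)**, the residual block below wraps its convolutions in PyTorch's built-in `spectral_norm`. Note that this enforces a unit spectral bound, whereas the cited works use a soft or scaled variant, so this is a simplified sketch rather than the exact regulariser.

```python
import torch.nn as nn
from torch.nn.utils import spectral_norm

class SNResidualBlock(nn.Module):
    """Residual block with spectrally normalised convolutions (simplified sketch)."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = spectral_norm(nn.Conv2d(channels, channels, 3, padding=1, bias=False))
        self.conv2 = spectral_norm(nn.Conv2d(channels, channels, 3, padding=1, bias=False))
        self.bn1, self.bn2 = nn.BatchNorm2d(channels), nn.BatchNorm2d(channels)
        self.act = nn.ReLU()

    def forward(self, x):
        h = self.act(self.bn1(self.conv1(x)))
        h = self.bn2(self.conv2(h))
        # the identity skip helps preserve the lower Lipschitz bound (sensitivity),
        # the normalised residual branch controls the upper bound (smoothness)
        return self.act(x + h)
```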
As introduced in §[1](#sec:intro){reference-type="ref" reference="sec:intro"}, we propose to use a deterministic neural network with an appropriately regularized feature space, using spectral normalization [@liu2020simple], and to disentangle aleatoric and epistemic uncertainty by fitting a Gaussian Discriminant Analysis (GDA) after training without any additional steps (no hold-out "OoD" data, feature ensembling, or input pre-processing à la @lee2018simple).

**Ensuring Sensitivity & Smoothness.** We ensure sensitivity and smoothness using spectral normalisation in models with residual connections. We make minor changes to the standard ResNet model architecture to further encourage sensitivity without sacrificing accuracy---details in §[9.1](#app:more_model_architecture){reference-type="ref" reference="app:more_model_architecture"}.
**Disentangling Epistemic & Aleatoric Uncertainty.** To quantify epistemic uncertainty, we fit a feature-space density estimator after training. We use GDA, a GMM $q(y, z)$ with a single Gaussian mixture component per class, and fit each class component by computing the empirical mean and covariance, per class, of the feature vectors $z=f_\theta(x)$, which are the outputs of the last convolutional layer of the model computed on the training samples $x$. *Note that we do not require OoD data to fit these*. Unlike the Expectation Maximization algorithm, this only requires a single pass through the training set given a trained model.
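
A minimal sketch of this fitting step (a single pass over the training features, no OoD data; function and variable names are ours):

```python
import numpy as np

def fit_gda(features, labels, num_classes, jitter=1e-6):
    """Fit one Gaussian per class to feature vectors z = f_theta(x) of the training set.
    Returns per-class means, covariances and log class priors."""
    d = features.shape[1]
    means, covs, log_priors = [], [], []
    for c in range(num_classes):
        z_c = features[labels == c]
        means.append(z_c.mean(axis=0))
        covs.append(np.cov(z_c, rowvar=False) + jitter * np.eye(d))  # jitter for invertibility
        log_priors.append(np.log(len(z_c) / len(features)))
    return np.stack(means), np.stack(covs), np.array(log_priors)
```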
**Evaluation.** At test time, we estimate the epistemic uncertainty by evaluating the marginal likelihood of the feature representation under our density $\mathop{\mathrm{q}}( z ) = \sum_y \mathop{\mathrm{q}}( z|y ) \mathop{\mathrm{q}}( y )$. To quantify aleatoric uncertainty for in-distribution samples, we use the entropy $\mathbb{H}[Y|x, \theta]$ of the softmax distribution $\mathop{\mathrm{p}}( y \mathbin{\vert} x, \theta )$. Note that the softmax distribution thus obtained can be further calibrated using temperature scaling [@guo2017calibration]. Thus, for a given input, a high feature-space density indicates low epistemic uncertainty (iD), and we can trust the aleatoric uncertainty and predictions estimated from the softmax layer. The sample can then be either unambiguous (low softmax entropy) or ambiguous (high softmax entropy). Conversely, a low feature-space density indicates high epistemic uncertainty (OoD), and we cannot trust the predictions. The algorithm and a pseudo-code implementation can be found in §[9.2](#app:implementation){reference-type="ref" reference="app:implementation"}.
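
At test time the two uncertainties can then be read off as follows (a sketch; `z` is the feature vector and `logits` the softmax-layer inputs for one test point):

```python
import numpy as np
from scipy.special import logsumexp, softmax
from scipy.stats import multivariate_normal

def epistemic_and_aleatoric(z, logits, means, covs, log_priors):
    """Epistemic: negative log marginal density -log q(z), with q(z) = sum_y q(z|y) q(y).
    Aleatoric (meaningful only for in-distribution inputs): softmax entropy H[Y|x]."""
    log_joint = np.array([multivariate_normal.logpdf(z, m, S) + lp
                          for m, S, lp in zip(means, covs, log_priors)])
    epistemic = -logsumexp(log_joint)
    p = softmax(logits)
    aleatoric = -(p * np.log(p + 1e-12)).sum()
    return epistemic, aleatoric
```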
# Method
Before we can establish our main result, we need to look more closely at Dirichlet-Multinomial distributions. Given a Dirichlet distribution $\operatorname{Dir}(\alpha)$ and a random variable $\mathbf{p}\sim \operatorname{Dir}(\alpha)$, we want to quantify the expected entropy $\mathop{\mathrm{\mathbb{E}}}_{\mathbf{p}\sim \operatorname{Dir}(\alpha)} {\mathop{\mathrm{\mathbb{H}}}_{Y\sim \operatorname{Cat}(\mathbf{p})}[Y]}$ and its variance $\mathop{\mathrm{Var}}_{\mathbf{p}\sim \operatorname{Dir}(\alpha)} {\mathop{\mathrm{\mathbb{H}}}_{Y\sim \operatorname{Cat}(\mathbf{p})}[Y]}$. For this, we need to develop more theory. In the following, $\Gamma$ denotes the Gamma function, $\psi$ denotes the Digamma function, $\psi'$ denotes the Trigamma function.

::: {#lemma:dirichlet_basics .lemma}
**Lemma 2**.

1. *The expectation $\mathop{\mathrm{\mathbb{E}}}_{} \left [ \log \mathbf{p}_i \right ]$ is given by: $$\begin{align}
\label{eq:psi_log_expectation}
\mathop{\mathrm{\mathbb{E}}}_{} \left [ \log \mathbf{p}_i \right ] = \psi(\alpha_i) - \psi(\alpha_0).
\end{align}$$*
2. *The covariance $\mathop{\mathrm{Cov}}[ \log \mathbf{p}_i, \log \mathbf{p}_j ]$ is given by $$\begin{align}
\label{eq:psip_log_expectation}
\mathop{\mathrm{Cov}}[ \log \mathbf{p}_i,\log \mathbf{p}_j ] = \psi'(\alpha_i) \, \delta_{ij} - \psi'(\alpha_0).
\end{align}$$*
3. *The expectation $\mathop{\mathrm{\mathbb{E}}}_{} \left [ \mathbf{p}_i^n \mathbf{p}_j^m \log \mathbf{p}_i \right ]$ is given by: $$\begin{align}
\begin{split}
\label{eq:nm_log_expectation}
\MoveEqLeft[3] \mathop{\mathrm{\mathbb{E}}}_{} \left [ \mathbf{p}_i^n \mathbf{p}_j^m \log \mathbf{p}_i \right ] \\
={}& \frac{\alpha_i^{\overline{n}} \, \alpha_j^{\overline{m}}}{\alpha_0^{\overline{n+m}}} \left ( \psi(\alpha_i + n) - \psi(\alpha_0 + n + m) \right ),
\end{split}
\end{align}$$ where $i \not =j$, and $n^{\overline{k}}=n \, (n+1) \, \ldots \, (n+k-1)$ denotes the rising factorial.*
:::
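
The three identities are easy to sanity-check numerically with Monte Carlo samples and SciPy's digamma/trigamma functions; a quick check (not part of the paper):

```python
import numpy as np
from scipy.special import digamma, polygamma, gammaln

rng = np.random.default_rng(0)
alpha = np.array([2.0, 3.5, 1.2]); a0 = alpha.sum()
p = rng.dirichlet(alpha, size=1_000_000)

# (1) E[log p_i] = psi(alpha_i) - psi(alpha_0)
print(np.log(p).mean(axis=0), digamma(alpha) - digamma(a0))

# (2) Cov[log p_i, log p_j] = psi'(alpha_i) * delta_ij - psi'(alpha_0)
print(np.cov(np.log(p), rowvar=False))
print(np.diag(polygamma(1, alpha)) - polygamma(1, a0))

# (3) E[p_i^n p_j^m log p_i] for i=0, j=1, n=2, m=1 (rising factorials via Gamma ratios)
n, m = 2, 1
rise = lambda z, k: np.exp(gammaln(z + k) - gammaln(z))
lhs = (p[:, 0]**n * p[:, 1]**m * np.log(p[:, 0])).mean()
rhs = rise(alpha[0], n) * rise(alpha[1], m) / rise(a0, n + m) \
      * (digamma(alpha[0] + n) - digamma(a0 + n + m))
print(lhs, rhs)
```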
::: proof
*Proof.*
|
| 65 |
+
|
| 66 |
+
1. The Dirichlet distribution is members of the exponential family. Therefore the moments of the sufficient statistics are given by the derivatives of the partition function with respect to the natural parameters. The natural parameters of the Dirichlet distribution are just its concentration parameters $\alpha_i$. The partition function is $$\begin{align}
|
| 67 |
+
A(\alpha) = \sum_{i=1}^{k} \log \Gamma\left(\alpha_{i}\right)-\log \Gamma\left(\alpha_{0}\right),
|
| 68 |
+
\end{align}$$ the sufficient statistics are $T_i(\mathbf{p}) = \log \mathbf{p}_i$, and the expectation $\mathop{\mathrm{\mathbb{E}}}_{} \left [ T_i \right ]$ is given by $$\begin{align}
|
| 69 |
+
\mathop{\mathrm{\mathbb{E}}}_{} \left [ T_i \right ] = \frac{\partial A(\alpha)}{\partial \alpha_{i}}
|
| 70 |
+
\end{align}$$ as the Dirichlet distribution is a member of the exponential family. Substituting the definitions and evaluating the partial derivative yields $$\begin{align}
|
| 71 |
+
\mathop{\mathrm{\mathbb{E}}}_{} \left [ \log \mathbf{p}_i \right ] &= \frac{\partial}{\partial \alpha_{i}} \left [ \sum_{i=1}^{k} \log \Gamma\left(\alpha_{i}\right)-\log \Gamma\left(\sum_{i=1}^{k} \alpha_i\right) \right ] \\
|
| 72 |
+
& = \psi\left(\alpha_{i}\right) - \psi\left(\alpha_{0}\right) \frac{\partial}{\partial \alpha_{i}} \alpha_0,
|
| 73 |
+
\end{align}$$ where we have used that the Digamma function $\psi$ is the log derivative of the Gamma function $\psi(x) = \frac{d}{dx} \ln \Gamma(x)$. This proves [\[eq:psi_log_expectation\]](#eq:psi_log_expectation){reference-type="eqref" reference="eq:psi_log_expectation"} as $\frac{\partial}{\partial \alpha_{i}} \alpha_0 = 1$.
|
| 74 |
+
|
| 75 |
+
2. Similarly, the covariance is obtained using a second-order partial derivative: $$\begin{align}
|
| 76 |
+
\mathop{\mathrm{Cov}}[ T_i, T_j ] = \frac{\partial^2 A(\alpha)}{\partial \alpha_{i} \, \partial \alpha_{j}}.
|
| 77 |
+
\end{align}$$ Again, substituting yields $$\begin{align}
|
| 78 |
+
\mathop{\mathrm{Cov}}[ \log \mathbf{p}_i, \log \mathbf{p}_j ] &= \frac{\partial}{\partial \alpha_{j}} \left [ \psi\left(\alpha_{i}\right) - \psi\left(\alpha_{0}\right) \right ] \\
|
| 79 |
+
&= \psi'\left(\alpha_{i}\right) \delta_{ij} - \psi'\left(\alpha_{0}\right).
|
| 80 |
+
\end{align}$$
|
| 81 |
+
|
| 82 |
+
3. We will make use of a simple reparameterization to prove the statement using [\[eq:psi_log_expectation\]](#eq:psi_log_expectation){reference-type="ref+Label" reference="eq:psi_log_expectation"}. Expanding the expectation and substituting the density $\operatorname{Dir}(\mathbf{p}; \alpha)$, we obtain $$\begin{align}
|
| 83 |
+
& \mathop{\mathrm{\mathbb{E}}}_{} \left [ \mathbf{p}_i^n \mathbf{p}_j^m \log \mathbf{p}_i \right ] = \int \operatorname{Dir}(\mathbf{p}; \alpha) \, \mathbf{p}_i^n \, \mathbf{p}_j^m \, \log \mathbf{p}_i \, d\mathbf{p}\\
|
| 84 |
+
& \quad = \int \frac{\Gamma\left(\alpha_0\right)}{\prod_{i=1}^{K} \Gamma\left(\alpha_{i}\right)} \prod_{k=1}^{K} \mathbf{p}_{k}^{\alpha_{k}-1} \, \mathbf{p}_i^n \, \mathbf{p}_j^m \, \log \mathbf{p}_i \, d\mathbf{p}\\
|
| 85 |
+
\begin{split}
|
| 86 |
+
& \quad = \frac{\Gamma(\alpha_i + n) \Gamma(\alpha_j + m) \Gamma(\alpha_0)}
|
| 87 |
+
{\Gamma(\alpha_i) \Gamma(\alpha_j) \Gamma(\alpha_0 + n + m)} \\
|
| 88 |
+
&\quad \quad \int \operatorname{Dir}(\hat\mathbf{p}; \hat\alpha) \, \log \hat\mathbf{p}_i \, d\hat\mathbf{p}
|
| 89 |
+
\end{split} \\
|
| 90 |
+
& \quad = \frac{\alpha_i^{\overline{n}} \, \alpha_j^{\overline{m}}}{\alpha_0^{\overline{n+m}}} \mathop{\mathrm{\mathbb{E}}}_{} \left [ \log \hat\mathbf{p}_i \right ],
|
| 91 |
+
\end{align}$$ where $\hat\mathbf{p}\sim \operatorname{Dir}(\hat\alpha)$ with $\hat\alpha = (\alpha_1, \dots, \alpha_i + n, \ldots, \alpha_j + m, \ldots, \alpha_K)$ and we made use of the fact that $\frac{\Gamma(z+n)}{\Gamma(z)}=z^{\overline{n}}$. Finally, we can apply [\[eq:psi_log_expectation\]](#eq:psi_log_expectation){reference-type="ref+Label" reference="eq:psi_log_expectation"} on $\hat\mathbf{p}\sim \operatorname{Dir}(\hat\alpha)$ to show $$\begin{align}
|
| 92 |
+
\quad = \frac{\alpha_i^{\overline{n}} \, \alpha_j^{\overline{m}}}{\alpha_0^{\overline{n+m}}} \left ( \psi(\alpha_i + n) - \psi(\alpha_0 + n + m) \right ).
|
| 93 |
+
\end{align}$$
|
| 94 |
+
|
| 95 |
+
◻
|
| 96 |
+
:::
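Both identities are easy to sanity-check numerically. The following Monte Carlo comparison (a verification sketch, not part of the derivation) matches sample statistics of $\log \mathbf{p}$ against the Digamma and Trigamma expressions:

```python
import numpy as np
from scipy.special import digamma, polygamma

rng = np.random.default_rng(0)
alpha = np.array([2.0, 0.5, 3.0])
alpha0 = alpha.sum()

p = rng.dirichlet(alpha, size=1_000_000)
logp = np.log(p)

# E[log p_i] = psi(alpha_i) - psi(alpha_0)
print(logp.mean(axis=0), digamma(alpha) - digamma(alpha0))

# Cov[log p_i, log p_j] = psi'(alpha_i) delta_ij - psi'(alpha_0)
print(np.cov(logp, rowvar=False))
print(np.diag(polygamma(1, alpha)) - polygamma(1, alpha0))
```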
|
| 97 |
+
|
| 98 |
+
With this, we can already quantify the expected entropy $\mathop{\mathrm{\mathbb{E}}}_{\mathbf{p}\sim \operatorname{Dir}(\alpha)} {\mathop{\mathrm{\mathbb{H}}}_{Y\sim \operatorname{Cat}(\mathbf{p})}[Y]}$:
|
| 99 |
+
|
| 100 |
+
::: {#lemma:dirichlet_expected_categorical .lemma}
|
| 101 |
+
**Lemma 3**. *Given a Dirichlet distribution and a random variable $\mathbf{p}\sim \operatorname{Dir}(\alpha)$, the expected entropy $\mathop{\mathrm{\mathbb{E}}}_{\mathbf{p}\sim \operatorname{Dir}(\alpha)} {\mathop{\mathrm{\mathbb{H}}}_{Y\sim \operatorname{Cat}(\mathbf{p})}[Y]}$ of the categorical distribution $Y \sim \operatorname{Cat}(\mathbf{p})$ is given by $$\begin{align}
|
| 102 |
+
\mathop{\mathrm{\mathbb{E}}}_{\mathop{\mathrm{p}}( \mathbf{p} \mathbin{\vert} \alpha )} {\mathop{\mathrm{\mathbb{H}}}[ Y \mathbin{\vert} \mathbf{p} ]} = \psi(\alpha_0+1) - \sum_{i=1}^K \frac{\alpha_i}{\alpha_0} \psi(\alpha_i +1).
|
| 103 |
+
\end{align}$$*
|
| 104 |
+
:::
|
| 105 |
+
|
| 106 |
+
::: proof
|
| 107 |
+
*Proof.* Applying the sum rule of expectations and [\[eq:nm_log_expectation\]](#eq:nm_log_expectation){reference-type="ref+Label" reference="eq:nm_log_expectation"} from [2](#lemma:dirichlet_basics){reference-type="ref+Label" reference="lemma:dirichlet_basics"}, we can write $$\begin{align}
|
| 108 |
+
& \mathop{\mathrm{\mathbb{E}}}_{} {\mathop{\mathrm{\mathbb{H}}}[ Y \mathbin{\vert} \mathbf{p} ]} = \mathop{\mathrm{\mathbb{E}}}_{} \left [ -\sum_{i=1}^K \mathbf{p}_i \log \mathbf{p}_i \right ] = -\sum_i \mathop{\mathrm{\mathbb{E}}}_{} \left [ \mathbf{p}_i \log \mathbf{p}_i \right ] \\
|
| 109 |
+
& \quad \quad = - \sum_i \frac{\alpha_i}{\alpha_0} \left ( \psi(\alpha_i+1) - \psi(\alpha_0+1)\right).
|
| 110 |
+
\end{align}$$ The result follows after rearranging and making use of $\sum_i \frac{\alpha_i}{\alpha_0} = 1$. ◻
|
| 111 |
+
:::
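As a quick plausibility check (again a sketch, not part of the proof), the closed form can be compared against a Monte Carlo estimate of the expected entropy:

```python
import numpy as np
from scipy.special import digamma

rng = np.random.default_rng(0)
alpha = np.array([1.5, 4.0, 0.7, 2.2])
alpha0 = alpha.sum()

# Closed form: psi(alpha_0 + 1) - sum_i (alpha_i / alpha_0) psi(alpha_i + 1)
closed_form = digamma(alpha0 + 1) - np.sum(alpha / alpha0 * digamma(alpha + 1))

# Monte Carlo: average entropy of Cat(p) with p ~ Dir(alpha)
p = rng.dirichlet(alpha, size=500_000)
monte_carlo = -np.sum(p * np.log(p), axis=1).mean()

print(closed_form, monte_carlo)   # the two values should agree closely
```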
|
| 112 |
+
|
| 113 |
+
With these statements, we can answer a slightly more complex problem:
|
| 114 |
+
|
| 115 |
+
::: {#lemma:log_covariance .lemma}
|
| 116 |
+
**Lemma 4**. *Given a Dirichlet distribution and a random variable $\mathbf{p}\sim\operatorname{Dir}(\alpha)$, the covariance $\mathop{\mathrm{Cov}}[ \mathbf{p}_i^n \log \mathbf{p}_i, \mathbf{p}_j^m \log \mathbf{p}_j ]$ is given by $$\begin{align}
|
| 117 |
+
\MoveEqLeft[3] \mathop{\mathrm{Cov}}[ \mathbf{p}_i^n \log \mathbf{p}_i, \mathbf{p}_j^m \log \mathbf{p}_j ] \\
|
| 118 |
+
\begin{split}
|
| 119 |
+
={}& \frac{\alpha_i^{\overline{n}}\,\alpha_j^{\overline{m}}}{\alpha_0^{\overline{n+m}}}
|
| 120 |
+
\left ((\psi(\alpha_i+n)-\psi(\alpha_0+n+m)) \right. \\
|
| 121 |
+
& (\psi(\alpha_j+m)-\psi(\alpha_0+n+m)) \\
|
| 122 |
+
& \left. - \psi'(\alpha_0+n+m) \right)\\
|
| 123 |
+
& - \frac{\alpha_i^{\overline{n}}\,\alpha_j^{\overline{m}}}{\alpha_0^{\overline{n}}\,\alpha_0^{\overline{m}}}
|
| 124 |
+
(\psi(\alpha_i+n)-\psi(\alpha_0+n)) \\
|
| 125 |
+
& (\psi(\alpha_j+m)-\psi(\alpha_0+m)),
|
| 126 |
+
\end{split}
|
| 127 |
+
\end{align}$$ for $i\not=j$, where $\psi$ is the Digamma function and $\psi'$ is the Trigamma function. Similarly, the covariance $\mathop{\mathrm{Cov}}[ \mathbf{p}_i^n \log \mathbf{p}_i, \mathbf{p}_i^m \log \mathbf{p}_i ]$ is given by $$\begin{align}
|
| 128 |
+
\MoveEqLeft[3] \mathop{\mathrm{Cov}}[ \mathbf{p}_i^n \log \mathbf{p}_i, \mathbf{p}_i^m \log \mathbf{p}_i ] \\
|
| 129 |
+
\begin{split}
|
| 130 |
+
={}& \frac{\alpha_i^{\overline{n+m}}}{\alpha_0^{\overline{n+m}}}
|
| 131 |
+
\left((\psi(\alpha_i+n+m)-\psi(\alpha_0+n+m))^2 \right.\\
|
| 132 |
+
& + \left. \psi'(\alpha_i+n+m) - \psi'(\alpha_0+n+m)\right) \\
|
| 133 |
+
& - \frac{\alpha_i^{\overline{n}}\,\alpha_i^{\overline{m}}}{\alpha_0^{\overline{n}}\,\alpha_0^{\overline{m}}}
|
| 134 |
+
(\psi(\alpha_i+n)-\psi(\alpha_0+n)) \\
|
| 135 |
+
& \quad \quad (\psi(\alpha_i+m)-\psi(\alpha_0+m)).
|
| 136 |
+
\end{split}
|
| 137 |
+
\end{align}$$*
|
| 138 |
+
:::
|
| 139 |
+
|
| 140 |
+
Regrettably, the equations are getting large. By abuse of notation, we introduce a convenient shorthand before proving the lemma.
|
| 141 |
+
|
| 142 |
+
::: definition
|
| 143 |
+
**Definition 5**. We will denote by $$\begin{align}
|
| 144 |
+
\overline{\mathop{\mathrm{\mathbb{E}}}_{} \left [ \log \hat{\mathbf{p}}_{i}^{n,m} \right ]} = \psi(\alpha_i+n)-\psi(\alpha_0+n+m),
|
| 145 |
+
\end{align}$$ and use $\overline{\mathop{\mathrm{\mathbb{E}}}_{} \left [ \log \hat{\mathbf{p}}_{i}^{n} \right ]}$ for $\overline{\mathop{\mathrm{\mathbb{E}}}_{} \left [ \log \hat{\mathbf{p}}_{i}^{n,0} \right ]}$. Likewise, $$\begin{align}
|
| 146 |
+
\overline{\mathop{\mathrm{Cov}}[ \log \hat{\mathbf{p}}_{i}^{n,m}, \log \hat{\mathbf{p}}_{j}^{n,m} ]} = \psi'(\alpha_i + n) \delta_{ij} - \psi'(\alpha_0 + n + m).
|
| 147 |
+
\end{align}$$
|
| 148 |
+
:::
|
| 149 |
+
|
| 150 |
+
This notation agrees with the proof of [\[eq:psi_log_expectation\]](#eq:psi_log_expectation){reference-type="ref+Label" reference="eq:psi_log_expectation"} and [\[eq:psip_log_expectation\]](#eq:psip_log_expectation){reference-type="eqref" reference="eq:psip_log_expectation"} in [2](#lemma:dirichlet_basics){reference-type="ref+Label" reference="lemma:dirichlet_basics"}. With this, we can significantly simplify the previous statements:
|
| 151 |
+
|
| 152 |
+
::: corollary
|
| 153 |
+
**Corollary 6**. *Given a Dirichlet distribution and random variable $\mathbf{p}\sim\operatorname{Dir}(\alpha)$, $$\begin{align}
|
| 154 |
+
\mathop{\mathrm{\mathbb{E}}}_{} \left [ \mathbf{p}_i^n \mathbf{p}_j^m \log \mathbf{p}_i \right ] &= \frac{\alpha_i^{\overline{n}} \, \alpha_j^{\overline{m}}}{\alpha_0^{\overline{n+m}}} \overline{\mathop{\mathrm{\mathbb{E}}}_{} \left [ \log \hat{\mathbf{p}}_{i}^{n,m} \right ]},
|
| 155 |
+
\end{align}$$ $$\begin{align}
|
| 156 |
+
\MoveEqLeft[3] \mathop{\mathrm{Cov}}[ \mathbf{p}_i^n \log \mathbf{p}_i, \mathbf{p}_j^m \log \mathbf{p}_j ] \\
|
| 157 |
+
\begin{split}
|
| 158 |
+
={}& \frac{\alpha_i^{\overline{n}}\,\alpha_j^{\overline{m}}}{\alpha_0^{\overline{n+m}}}
|
| 159 |
+
\left( \overline{\mathop{\mathrm{\mathbb{E}}}_{} \left [ \log \hat{\mathbf{p}}_{i}^{n,m} \right ]} \overline{\mathop{\mathrm{\mathbb{E}}}_{} \left [ \log \hat{\mathbf{p}}_{j}^{m,n} \right ]} \right. \\
|
| 160 |
+
& \quad \quad \left. + \overline{\mathop{\mathrm{Cov}}[ \log \hat{\mathbf{p}}_{i}^{n,m}, \log \hat{\mathbf{p}}_{j}^{n,m} ]} \right) \\
|
| 161 |
+
& - \frac{\alpha_i^{\overline{n}}\,\alpha_j^{\overline{m}}}{\alpha_0^{\overline{n}}\,\alpha_0^{\overline{m}}}
|
| 162 |
+
\overline{\mathop{\mathrm{\mathbb{E}}}_{} \left [ \log \hat{\mathbf{p}}_{i}^{n} \right ]} \overline{\mathop{\mathrm{\mathbb{E}}}_{} \left [ \log \hat{\mathbf{p}}_{j}^{m} \right ]} \quad \text{for $i \not= j$, and}
|
| 163 |
+
\end{split} \\
|
| 164 |
+
\MoveEqLeft[3] \mathop{\mathrm{Cov}}[ \mathbf{p}_i^n \log \mathbf{p}_i, \mathbf{p}_i^m \log \mathbf{p}_i ] \\
|
| 165 |
+
\begin{split}
|
| 166 |
+
={}& \frac{\alpha_i^{\overline{n+m}}}{\alpha_0^{\overline{n+m}}}
|
| 167 |
+
\left(\overline{\mathop{\mathrm{\mathbb{E}}}_{} \left [ \log \hat{\mathbf{p}}_{i}^{n+m} \right ]}^2 \right.\\
|
| 168 |
+
& \left. + \overline{\mathop{\mathrm{Cov}}[ \log \hat{\mathbf{p}}_{i}^{n+m}, \log \hat{\mathbf{p}}_{i}^{n+m} ]} \right)\\
|
| 169 |
+
& - \frac{\alpha_i^{\overline{n}}\,\alpha_i^{\overline{m}}}{\alpha_0^{\overline{n}}\,\alpha_0^{\overline{m}}} \overline{\mathop{\mathrm{\mathbb{E}}}_{} \left [ \log \hat{\mathbf{p}}_{i}^{n} \right ]}\overline{\mathop{\mathrm{\mathbb{E}}}_{} \left [ \log \hat{\mathbf{p}}_{i}^{m} \right ]}.
|
| 170 |
+
\end{split}
|
| 171 |
+
\end{align}$$*
|
| 172 |
+
:::
|
| 173 |
+
|
| 174 |
+
::: proof
|
| 175 |
+
*Proof of [4](#lemma:log_covariance){reference-type="ref+Label" reference="lemma:log_covariance"}.* This proof applies the well-known formula **(cov)** $\mathop{\mathrm{Cov}}[ X,Y ] = \mathop{\mathrm{\mathbb{E}}}_{} \left [ X \, Y \right ] - \mathop{\mathrm{\mathbb{E}}}_{} \left [ X \right ] \mathop{\mathrm{\mathbb{E}}}_{} \left [ Y \right ]$ once forward and once backward **(rcov)** $\mathop{\mathrm{\mathbb{E}}}_{} \left [ X \, Y \right ] = \mathop{\mathrm{Cov}}[ X,Y ] + \mathop{\mathrm{\mathbb{E}}}_{} \left [ X \right ]\mathop{\mathrm{\mathbb{E}}}_{} \left [ Y \right ]$ while applying [\[eq:nm_log_expectation\]](#eq:nm_log_expectation){reference-type="ref+Label" reference="eq:nm_log_expectation"} several times: $$\begin{align}
|
| 176 |
+
& \mathop{\mathrm{Cov}}[ \mathbf{p}_i^n \log \mathbf{p}_i, \mathbf{p}_j^m \log \mathbf{p}_j ] \\
|
| 177 |
+
\begin{split}
|
| 178 |
+
& \quad \overset{\textbf{cov}}{=} \mathop{\mathrm{\mathbb{E}}}_{} \left [ \mathbf{p}_i^n \log(\mathbf{p}_i) \, \mathbf{p}_j^m \log(\mathbf{p}_j) \right ] \\
|
| 179 |
+
& \quad \quad - \mathop{\mathrm{\mathbb{E}}}_{} \left [ \mathbf{p}_i^n \log \mathbf{p}_i \right ]\mathop{\mathrm{\mathbb{E}}}_{} \left [ \mathbf{p}_j^m \log \mathbf{p}_j \right ]
|
| 180 |
+
\end{split} \\
|
| 181 |
+
\begin{split}
|
| 182 |
+
& \quad = \frac{\alpha_i^{\overline{n}}\,\alpha_j^{\overline{m}}}{\alpha_0^{\overline{n+m}}} \mathop{\mathrm{\mathbb{E}}}_{} \left [ \log (\hat\mathbf{p}_i^{i,j}) \log(\hat\mathbf{p}_j^{i,j}) \right ] \\
|
| 183 |
+
& \quad \quad - \frac{\alpha_i^{\overline{n}}\,\alpha_j^{\overline{m}}}{\alpha_0^{\overline{n}}\,\alpha_0^{\overline{m}}} \mathop{\mathrm{\mathbb{E}}}_{} \left [ \log \hat\mathbf{p}_i^{i} \right ]\mathop{\mathrm{\mathbb{E}}}_{} \left [ \log \hat\mathbf{p}_j^{j} \right ]
|
| 184 |
+
\end{split} \\
|
| 185 |
+
\begin{split}
|
| 186 |
+
& \quad \overset{\textbf{(rcov)}}{=} \frac{\alpha_i^{\overline{n}}\,\alpha_j^{\overline{m}}}{\alpha_0^{\overline{n+m}}}
|
| 187 |
+
\left ( \mathop{\mathrm{Cov}}[ \log \hat\mathbf{p}_i^{i,j}, \log \hat\mathbf{p}_j^{i,j} ] \right. \\
|
| 188 |
+
& \left. \quad \quad \quad \quad + \mathop{\mathrm{\mathbb{E}}}_{} \left [ \log \hat\mathbf{p}_i^{i,j} \right ]\mathop{\mathrm{\mathbb{E}}}_{} \left [ \log \hat\mathbf{p}_j^{i,j} \right ] \right ) \\
|
| 189 |
+
& \quad \quad - \frac{\alpha_i^{\overline{n}}\,\alpha_j^{\overline{m}}}{\alpha_0^{\overline{n}}\,\alpha_0^{\overline{m}}} \mathop{\mathrm{\mathbb{E}}}_{} \left [ \log \hat\mathbf{p}_i^{i} \right ]\mathop{\mathrm{\mathbb{E}}}_{} \left [ \log \hat\mathbf{p}_j^{j} \right ],
|
| 190 |
+
\end{split}
|
| 191 |
+
\end{align}$$ where $\mathbf{p}^{i,j} \sim \operatorname{Dir}(\alpha^{i,j})$ with $\alpha^{i,j} = (\ldots, \alpha_i + n, \ldots, \alpha_j +m,\ldots)$. $\mathbf{p}^{i/j}$ and $\alpha^{i/j}$ are defined analogously. Applying [\[eq:psip_log_expectation\]](#eq:psip_log_expectation){reference-type="ref+Label" reference="eq:psip_log_expectation"} and [\[eq:psi_log_expectation\]](#eq:psi_log_expectation){reference-type="ref+Label" reference="eq:psi_log_expectation"} from [2](#lemma:dirichlet_basics){reference-type="ref+Label" reference="lemma:dirichlet_basics"} yields the statement. For $i=j$, the proof follows the same pattern. ◻
|
| 192 |
+
:::
|
| 193 |
+
|
| 194 |
+
Now, we can prove the theorem that quantifies the variance of the entropy of $Y$:
|
| 195 |
+
|
| 196 |
+
::: theorem
|
| 197 |
+
**Theorem 7**. *Given a Dirichlet distribution and a random variable $\mathbf{p}\sim Dir(\alpha)$, the variance of the entropy $\mathop{\mathrm{Var}}_{\mathbf{p}\sim \operatorname{Dir}(\alpha)} {\mathop{\mathrm{\mathbb{H}}}_{Y\sim \operatorname{Cat}(\mathbf{p})}[Y]}$ of the categorical distribution $Y \sim \operatorname{Cat}(\mathbf{p})$ is given by $$\begin{align}
|
| 198 |
+
\MoveEqLeft[2] \mathop{\mathrm{Var}}[ \mathop{\mathrm{\mathbb{H}}}[ Y \mathbin{\vert} \mathbf{p} ] ] & \\
|
| 199 |
+
\begin{split}
|
| 200 |
+
={} & \sum_i \frac{\alpha_i^{\overline{2}}}{\alpha_0^{\overline{2}}} \left(\overline{\mathop{\mathrm{Cov}}[ \log \hat{\mathbf{p}}_{i}^{2}, \log \hat{\mathbf{p}}_{i}^{2} ]} + \overline{\mathop{\mathrm{\mathbb{E}}}_{} \left [ \log \hat{\mathbf{p}}_{i}^{2} \right ]}^2 \right ) \\
|
| 201 |
+
+{} & \sum_{i\not=j} \frac{\alpha_i \, \alpha_j}{\alpha_0^{\overline{2}}} \left( \overline{\mathop{\mathrm{Cov}}[ \log \hat{\mathbf{p}}_{i}^{1,1}, \log \hat{\mathbf{p}}_{j}^{1,1} ]} + \overline{\mathop{\mathrm{\mathbb{E}}}_{} \left [ \log \hat{\mathbf{p}}_{i}^{1,1} \right ]} \, \overline{\mathop{\mathrm{\mathbb{E}}}_{} \left [ \log \hat{\mathbf{p}}_{j}^{1,1} \right ]} \right) \\
|
| 202 |
+
-& \sum_{i,j} \frac{\alpha_i \, \alpha_j}{\alpha_0^{2}} \overline{\mathop{\mathrm{\mathbb{E}}}_{} \left [ \log \hat{\mathbf{p}}_{i}^{1} \right ]} \overline{\mathop{\mathrm{\mathbb{E}}}_{} \left [ \log \hat{\mathbf{p}}_{j}^{1} \right ]}.
|
| 203 |
+
\end{split}\label{thm:E6}
|
| 204 |
+
\end{align}$$*
|
| 205 |
+
:::
|
| 206 |
+
|
| 207 |
+
::: proof
|
| 208 |
+
*Proof.* We start by applying the well-known formula $\mathop{\mathrm{Var}}[ \sum_i X_i ]=\sum_{i,j} \mathop{\mathrm{Cov}}[ X_i, X_j ]$ and then apply [4](#lemma:log_covariance){reference-type="ref+Label" reference="lemma:log_covariance"} repeatedly. ◻
|
| 209 |
+
:::
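The theorem is straightforward to implement; the sketch below evaluates the stated formula with SciPy's Digamma and Trigamma functions and compares it against a Monte Carlo estimate (the variable names are ours, not the paper's):

```python
import numpy as np
from scipy.special import digamma, polygamma

def entropy_variance(alpha):
    """Var of H[Y | p] for p ~ Dir(alpha), following the formula above."""
    alpha = np.asarray(alpha, dtype=float)
    a0 = alpha.sum()
    rf2 = alpha * (alpha + 1.0)                    # rising factorial alpha_i^(2)
    a0_rf2 = a0 * (a0 + 1.0)                       # rising factorial alpha_0^(2)

    e2 = digamma(alpha + 2) - digamma(a0 + 2)      # E-bar[log p_i^(2)]
    e11 = digamma(alpha + 1) - digamma(a0 + 2)     # E-bar[log p_i^(1,1)]
    e1 = digamma(alpha + 1) - digamma(a0 + 1)      # E-bar[log p_i^(1)]

    # i == j terms
    diag = np.sum(rf2 / a0_rf2 *
                  (polygamma(1, alpha + 2) - polygamma(1, a0 + 2) + e2 ** 2))
    # i != j terms
    prod = np.outer(alpha, alpha) / a0_rf2
    cross = np.outer(e11, e11) - polygamma(1, a0 + 2)
    off = np.sum(prod * cross) - np.sum(np.diag(prod * cross))
    # subtracted double sum over all (i, j)
    mean_sq = np.sum(np.outer(alpha, alpha) / a0 ** 2 * np.outer(e1, e1))
    return diag + off - mean_sq

rng = np.random.default_rng(0)
alpha = np.array([1.5, 4.0, 0.7, 2.2])
p = rng.dirichlet(alpha, size=500_000)
H = -np.sum(p * np.log(p), axis=1)
print(entropy_variance(alpha), H.var())   # the two values should agree closely
```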
|
| 210 |
+
|
| 211 |
+
\
|
| 212 |
+
Given that we can view an ensemble member as a single deterministic model and vice versa, this provides an intuitive explanation for why single deterministic models report inconsistent and widely varying predictive entropies and confidence scores for OoD samples for which a Deep Ensemble would report high epistemic uncertainty (expected information gain) and high predictive entropy.
|
| 213 |
+
|
| 214 |
+
Assuming that $p(y|x, \omega)$ only depends on $\mathop{\mathrm{p}}( y \mathbin{\vert} x )$ and $\mathop{\mathrm{\mathbb{I}}}[ Y ; \omega \mathbin{\vert} x ]$, we model the distribution of $p(y|x,\omega)$ (as a function of $\omega$) using a Dirichlet distribution $\operatorname{Dir}(\alpha)$ which satisfies: $$\begin{align}
|
| 215 |
+
\mathop{\mathrm{p}}( y \mathbin{\vert} x )&= \frac{\alpha_y}{\alpha_0} \\
|
| 216 |
+
\mathop{\mathrm{\mathbb{H}}}[ Y \mathbin{\vert} x ]- \mathop{\mathrm{\mathbb{I}}}[ Y ; \omega \mathbin{\vert} x ]&= \psi(\alpha_0+1)\\
|
| 217 |
+
&\quad - \sum_{y=1}^K \mathop{\mathrm{p}}( y \mathbin{\vert} x ) \psi(\alpha_0 \,\mathop{\mathrm{p}}( y \mathbin{\vert} x ) +1).
|
| 218 |
+
\end{align}$$ Then, we can model the softmax distribution using a random variable $\mathbf{p}\sim \operatorname{Dir}(\alpha)$ as: $$\begin{align}
|
| 219 |
+
\mathop{\mathrm{p}}( y \mathbin{\vert} x,\omega )\overset{\approx}{\sim} \operatorname{Cat}(\mathbf{p}).
|
| 220 |
+
\end{align}$$ The variance $\mathop{\mathrm{Var}}[ \mathop{\mathrm{\mathbb{H}}}[ Y \mathbin{\vert} x,\omega ] ]$ of the softmax entropy for different samples $x$ given $\mathop{\mathrm{p}}( y \mathbin{\vert} x )$ and $\mathop{\mathrm{\mathbb{I}}}[ Y ; \omega \mathbin{\vert} x ]$ is then approximated by $\mathop{\mathrm{Var}}[ \mathop{\mathrm{\mathbb{H}}}[ Y \mathbin{\vert} \mathbf{p} ] ]$: $$\begin{align}
|
| 221 |
+
\mathop{\mathrm{Var}}_\omega[\mathop{\mathrm{\mathbb{H}}}[ Y \mathbin{\vert} x,\omega ]] \approx \mathop{\mathrm{Var}}_\mathbf{p}[\mathop{\mathrm{\mathbb{H}}}[ Y \mathbin{\vert} \mathbf{p} ]]
|
| 222 |
+
\end{align}$$ with the latter term given in eq. [\[thm:E6\]](#thm:E6){reference-type="eqref" reference="thm:E6"}. We empirically find this to provide a lower bound on the true variance $\mathop{\mathrm{Var}}_\omega[\mathop{\mathrm{\mathbb{H}}}[ Y \mathbin{\vert} x,\omega ]]$.
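One possible way to realize this construction numerically, sketched below under the assumptions above, is to solve for $\alpha_0$ by one-dimensional root finding and then reuse the variance formula from the theorem; the bracket and the helper names are illustrative.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import digamma

def fit_dirichlet(p, mutual_info):
    """Return alpha = alpha0 * p such that the expected entropy of Cat(q), q ~ Dir(alpha),
    equals H[p] - mutual_info (Lemma 3); mutual_info must lie in [0, H[p])."""
    p = np.asarray(p, dtype=float)
    target = -np.sum(p * np.log(p)) - mutual_info   # expected entropy to match

    def expected_entropy(alpha0):
        return digamma(alpha0 + 1) - np.sum(p * digamma(alpha0 * p + 1))

    alpha0 = brentq(lambda a0: expected_entropy(a0) - target, 1e-6, 1e8)
    return alpha0 * p

# Example: a fairly confident mean prediction with moderate epistemic uncertainty.
alpha = fit_dirichlet(p=[0.7, 0.2, 0.1], mutual_info=0.3)
print(alpha)   # concentration parameters; feed into the variance formula above
```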
|
2103.14517/main_diagram/main_diagram.drawio
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2103.14517/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,150 @@
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Deep learning has accelerated progress in vision and language tasks. *Visual-semantic embeddings* [\[18,](#page-8-0) [9\]](#page-8-1) have allowed zero-shot learning, cross-modal retrieval and generating new descriptions from embeddings. *Image captioning* [\[33\]](#page-8-2) and *visual question answering* (VQA) [\[2\]](#page-8-3) have demonstrated generation of realistic natural language description of images and a great extent of multimodal semantic understanding. The extension to *video captioning* [\[19,](#page-8-4) [32\]](#page-8-5) and *video question answering* (VideoQA) [\[29,](#page-8-6) [20\]](#page-8-7) has enabled further progress because video requires a higher level of reasoning to understand complex events [\[37\]](#page-9-0).
|
| 4 |
+
|
| 5 |
+
VideoQA systems typically have similar architectures focusing on multimodal embeddings/description, temporal attention and localization, multimodal fusion and reasoning. While it is often hard to isolate progress in individual components, there are some clear trends. For instance, custom self-attention and memory mechanisms for fusion and
|
| 6 |
+
|
| 7 |
+
<span id="page-0-0"></span>
|
| 8 |
+
|
| 9 |
+
Figure 1: In VideoQA, a question is associated with Scene B, but it can only be answered by information from Scene A. We generate episode dialog summaries from subtitles and give them as input to our VideoQA system, dispensing with the need for external knowledge.
|
| 10 |
+
|
| 11 |
+
reasoning [\[24,](#page-8-8) [17,](#page-8-9) [7\]](#page-8-10) are gradually being streamlined by using *transformer* architectures [\[30,](#page-8-11) [16,](#page-8-12) [36\]](#page-9-1), while visual embeddings [\[29\]](#page-8-6) are being replaced by semantic embeddings [\[20\]](#page-8-7) and *text descriptions* by captioning [\[14,](#page-8-13) [3\]](#page-8-14).
|
| 12 |
+
|
| 13 |
+
Datasets are essential for progress in the field, but often introduce bias. For instance, questions from text summaries are less relevant to visual information [\[29\]](#page-8-6); <span id="page-1-0"></span>supervised temporal localization [\[20\]](#page-8-7) biases system design towards two-stage localization→answering [\[21,](#page-8-15) [16\]](#page-8-12); fixed question structure focusing on temporal localization [\[20\]](#page-8-7) often results in mere *alignment* of questions with subtitles and *matching* answers with the discovered context [\[14\]](#page-8-13), providing little progress on the main objective, which is to study the level of understanding.
|
| 14 |
+
|
| 15 |
+
Bias can be reduced by removing localization supervision and balancing questions over different aspects of comprehension, for instance visual, textual, or semantic [\[11\]](#page-8-16). However, requiring external knowledge, which can come in the form of hints or even ground truth, leaves little room for progress on inferring such knowledge from raw data [\[11\]](#page-8-16). Even weakening this requirement to plain-text *human-generated summaries* [\[10\]](#page-8-17) still leaves a system unusable in the absence of such data.
|
| 16 |
+
|
| 17 |
+
In many cases, as illustrated in [Figure 1,](#page-0-0) a question on some part of a story may require knowledge that can be recovered from dialog in other parts of the story. However, despite being textual, raw dialog is often informal and repetitive; searching over all available duration of such noisy source is error-prone and impractical. Inspired by the trend of video captioning, we go a step further and apply the same idea to *dialog*: We *summarize* raw dialog, converting it into *text description* for question answering.
|
| 18 |
+
|
| 19 |
+
Our finding is astounding: our dialog summaries are not only a valid replacement for human-generated summaries in handling questions that require knowledge of a whole story, they outperform them by a large margin.
|
| 20 |
+
|
| 21 |
+
Our contributions can be summarized as follows:
|
| 22 |
+
|
| 23 |
+
- 1. We apply *dialog summarization* to video question answering for the first time [\(Subsection 5.1\)](#page-3-0).
|
| 24 |
+
- 2. Building on a modern VideoQA system, we convert all input sources into *plain text description*.
|
| 25 |
+
- 3. We introduce a weakly-supervised *soft temporal attention* mechanism for localization [\(Subsection 6.2\)](#page-4-0).
|
| 26 |
+
- 4. We devise a very simple *multimodal fusion* mechanism that has no hyperparameters [\(Section 7\)](#page-4-1).
|
| 27 |
+
- 5. We set a new state of the art on KnowIT VQA dataset [\[11\]](#page-8-16) and we beat non-expert humans for the first time, working only with raw data [\(Section 8\)](#page-5-0).
|
| 28 |
+
|
| 29 |
+
# Method
|
| 30 |
+
|
| 31 |
+
We address knowledge-based video question answering on TV shows. Each episode is split into *scenes*. For each scene, we are given the *video* (frames) and *dialog* (speaker names followed by subtitle text) and a number of *multiple-choice questions*. Certain questions require high-level understanding of the whole episode or show. Garcia *et al*. [\[10\]](#page-8-17) rely on human-generated *plot summaries* (or *plot* for short), which we use only for comparison. Our objective is to extract the required knowledge from raw data.
|
| 32 |
+
|
| 33 |
+
As shown in Figure [2,](#page-2-0) we first convert inputs into *plain text description*, including both video (by visual recognition) and dialog (by summarization) [\(Section 5\)](#page-3-1). A number of separate *streams* then map text to embeddings, at the level of both *scene* (video and scene dialog summary) and *episode* (episode dialog summary and plot).
|
| 34 |
+
|
| 35 |
+
<span id="page-2-4"></span><span id="page-2-0"></span>
|
| 36 |
+
|
| 37 |
+
Figure 2: Our VideoQA system converts both video and dialog to text descriptions/summaries, the latter at both scene and episode level. Converted inputs are processed independently in streams, along with the question and each answer, producing a score per answer. Finally, stream embeddings are fused separately per answer and a prediction is made.
|
| 38 |
+
|
| 39 |
+
The question and answers are embedded together with the input text of each stream. A *temporal attention* mechanism localizes relevant intervals from episode inputs. Finally, question answering is addressed both in a *single-stream* [\(Section 6\)](#page-3-2) and a *multi-stream* [\(Section 7\)](#page-4-1) scenario. The latter amounts to *multi-modal fusion*. We begin our discussion with *transformer* networks [\(Section 4\)](#page-2-1), which we use both for dialog summarization and text embeddings in general.
|
| 40 |
+
|
| 41 |
+
The *transformer* [\[31\]](#page-8-24) is a network architecture that allows for efficient pairwise interaction between input elements. Its main component is an *attention* function, which acts as a form of associative memory.
|
| 42 |
+
|
| 43 |
+
*Multi-head attention* is a fusion of several attention functions. The architecture is a stack of multi-head attention, element-wise fully-connected and normalization layers with residual connections. Originally developed for machine translation, it includes an *encoder* and a *decoder* stack. The decoder additionally attends over the output of the encoder stack and is *auto-regressive*, consuming previously generated symbols when generating the next.
|
| 44 |
+
|
| 45 |
+
BERT [\[6\]](#page-8-25) is a transformer bidirectional *encoder* only, mapping a sequence of tokens to a sequence of d-dimensional vectors. It is pre-trained on unsupervised tasks including prediction of masked tokens and next sentence, and can also be fine-tuned on supervised downstream tasks. It can take a number of *sentences* as input, where a sentence is an arbitrary span of contiguous text.
|
| 46 |
+
|
| 47 |
+
We use BERT as the backbone of our model architecture to represent text, using two sentences at a time. Given strings A and B, the input is given as
|
| 48 |
+
|
| 49 |
+
<span id="page-2-3"></span>
|
| 50 |
+
$$tok_k([CLS] + A + [SEP] + B + [SEP]), \tag{1}$$
|
| 51 |
+
|
| 52 |
+
where + is string concatenation and tok<sub>k</sub> is tokenization into k tokens, with zero padding if the input length is less than k and truncation if it is greater. Tokens are represented by WordPiece embeddings [\[28,](#page-8-26) [35\]](#page-9-3), concatenated with *position embeddings* representing their position in the input sequence and *segment embeddings*, where segments correspond to sentences and are defined according to occurrences of the *separator* token [SEP]. The output vector in $\mathbb{R}^d$ corresponding to token [CLS] is an *aggregated representation* of the entire input sequence and we denote it as
|
| 53 |
+
|
| 54 |
+
<span id="page-2-2"></span>
|
| 55 |
+
$$f(A,B). \tag{2}$$
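A minimal sketch of this joint embedding, assuming a standard pre-trained BERT checkpoint from the HuggingFace `transformers` library (the checkpoint name and example strings are illustrative):

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")

def f(A: str, B: str, k: int = 512) -> torch.Tensor:
    """Aggregated [CLS] representation of '[CLS] A [SEP] B [SEP]', cf. (1)-(2)."""
    inputs = tokenizer(A, B, max_length=k, padding="max_length",
                       truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = bert(**inputs)
    return out.last_hidden_state[:, 0]       # d-dimensional vector for [CLS]

y = f("Leonard enters the apartment.", "Why is Sheldon upset?")
print(y.shape)                                # torch.Size([1, 768])
```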
|
| 56 |
+
|
| 57 |
+
Sentence-BERT [\[26\]](#page-8-27) takes a single sentence as input and is trained by *metric learning* objectives, *e.g*. in a siamese or triplet structure, facilitating efficient sentence similarity search. It is learned by fine-tuning a pre-trained BERT model on supervised semantic textual similarity.
|
| 58 |
+
|
| 59 |
+
BART [\[22\]](#page-8-28) combines a bidirectional *encoder* and an auto-regressive *decoder*. It is pre-trained as an unsupervised denoising autoencoder, *i.e*., corrupting input text and learning to reconstruct the original, and fine-tuned on supervised classification, generation or translation tasks. It is particularly effective on *text generation*, including abstractive dialog, question answering and summarization tasks.
|
| 60 |
+
|
| 61 |
+
Following [\[4\]](#page-8-23), we use sentence-BERT and BART to *segment* and *summarize dialog* respectively.
|
| 62 |
+
|
| 63 |
+
All input sources, *i.e*., *video*, *dialog* and *plot*, are converted into *plain text description* before being used for question answering. Video is first converted into a *scene graph* by a visual recognition pipeline and then to text description by a set of rules. Importantly, although already in textual form, dialog is also converted into text description by *dialog summarization*. The plot, already in text description form, is used as is, but for comparison only: Our main contribution is to replace human-generated plots by automatically generated descriptions.
|
| 64 |
+
|
| 65 |
+
As the main form of human communication, dialog is an essential input source for video understanding and question answering. We use dialog in three ways: *raw dialog* per scene, *dialog summary* per scene and the collection of dialog summary over a whole *episode*.
|
| 66 |
+
|
| 67 |
+
**Raw scene dialog** As in all prior work, we use the raw dialog associated to the scene of the question, *as is*. Although in textual form, it is *not* a text description. It may still contain more information than dialog summary, which is important to investigate.
|
| 68 |
+
|
| 69 |
+
**Scene dialog summary** Given the dialog associated to the scene of the question, we convert this input source into text description by *dialog summarization*. Despite being of textual form, dialog is very different from text *description*: conversations are often informal, verbose and repetitive, with few utterances being informative; while a description is a narrative in *third-person* point of view with clear information flow structured in paragraphs [\[4\]](#page-8-23). Identifying the speaking person is also substantial, especially with multiple people in a conversation. Rather than generic document summarization [\[12\]](#page-8-21), we follow a dedicated dialog summarization method [\[4\]](#page-8-23), which blends character names with events in the generated summaries.
|
| 70 |
+
|
| 71 |
+
A dialog is a sequence of *utterances*, each including a *speaker* (character) name and a *sentence* (sequence of tokens). Each utterance is mapped to a vector embedding by Sentence-BERT [\[26\]](#page-8-27). The sequence of embeddings over the entire dialog is segmented according to *topic*, *e.g*. *greetings*, *today's plan*, *etc*. by C99 [\[5\]](#page-8-29), as well as *stage*, *e.g*. *opening*, *intention*, *discussion*, *conclusion* by a *hidden Markov model* (HMM) [\[1\]](#page-8-30). As a result, for each *view* (topic or stage), the dialog is represented by a sequence of *blocks*, each containing several utterances.
|
| 72 |
+
|
| 73 |
+
Given the above structure, the input is re-embedded and the summary is generated using an extension of BART [\[22\]](#page-8-28). In particular, there is one *encoder* per view, mapping each block to an embedding. An LSTM [\[13\]](#page-8-31) follows, aggregating the entire view into one embedding, obtained as its last hidden state. The *decoder* attends over the output of each encoder using a *multi-view attention* layer to weight the contribution of each view. It is *auto-regressive*, using previous tokens from ground truth at training and previously predicted tokens by the decoder at inference.
|
| 74 |
+
|
| 75 |
+
We train the HMM on the dialog sources of our video QA training set; otherwise, we use Sentence-BERT and BART as used/trained by [\[4\]](#page-8-23). Once a scene dialog summary is generated, it is re-embedded by BERT [\[6\]](#page-8-25) like all other input sources, as discussed in [Section 6.](#page-3-2)
|
| 76 |
+
|
| 77 |
+
**Episode dialog summary** We collect the scene dialog summaries for all scenes of an episode and we concatenate them into an *episode dialog summary*. Assuming that the episode of the scene of the question is known, we make available the associated episode dialog summary for question answering. This is a long input source and requires *temporal attention*, as discussed in [Subsection 6.2.](#page-4-0) Importantly, episode dialog summary is our most important contribution in substituting plot summary by an automatically generated description.
|
| 78 |
+
|
| 79 |
+
As part of our comparison to [\[10\]](#page-8-17), we use publicly available plot summaries[1](#page-3-3), already in text description form. Assuming that the episode of the scene of the question is known, we make available the associated plot *as is*, to help answering *knowledge-based questions*. A plot is shorter and higher-level than our episode dialog summary, but it is still long enough to require *temporal attention*. It is important to investigate whether we can dispense with such a human-generated input and how much more information it contains relative to what we can extract automatically.
|
| 80 |
+
|
| 81 |
+
We use a visual recognition pipeline to convert raw input video into text description. Following [\[10\]](#page-8-17), this pipeline comprises four components: *character recognition* [\[27\]](#page-8-32), *place recognition* [\[40\]](#page-9-4), *object relation detection* [\[38\]](#page-9-5), and *action recognition* [\[34\]](#page-9-6). The outputs of these components are character, place, object, relation and action *nodes*. A directed *video scene graph* is generated by collecting all nodes along with edges and then a textual *scene description* is obtained according to a set of predefined rules.
|
| 82 |
+
|
| 83 |
+
As shown in [Figure 2,](#page-2-0) there is one stream per input source, using a transformer to map inputs to embeddings. Following [\[10\]](#page-8-17), we first attempt question answering on each stream alone. In doing so, we learn a linear classifier while fine-tuning the entire transformer representation per stream. Unlike most existing works, this allows adapting to the data at hand, for instance a particular TV show.
|
| 84 |
+
|
| 85 |
+
<span id="page-3-3"></span><sup>1</sup><https://the-big-bang-theory.com/>
|
| 86 |
+
|
| 87 |
+
<span id="page-4-7"></span>We differentiate *scene* from *episode* inputs, as discussed below. In both cases, the given question and candidate answer strings are denoted as q and $a^c$ for $c = 1, \ldots, n_c$ respectively, where $n_c$ is the number of candidate answers.
|
| 88 |
+
|
| 89 |
+
Scene input sources refer to the scene of the question, *i.e.*, raw scene dialog, scene dialog summary or video. The input string is denoted by x. For each $c = 1, \ldots, n_c$, we embed x, q and $a^c$ jointly into a d-dimensional vector
|
| 90 |
+
|
| 91 |
+
<span id="page-4-4"></span>
|
| 92 |
+
$$y^c := f(x + q, a^c), \tag{3}$$
|
| 93 |
+
|
| 94 |
+
where + is string concatenation and f is BERT (2). A linear classifier with parameters $\mathbf{w} \in \mathbb{R}^d$ , $b \in \mathbb{R}$ yields a score per candidate answer
|
| 95 |
+
|
| 96 |
+
<span id="page-4-2"></span>
|
| 97 |
+
$$z^c := \mathbf{w}^\top \cdot y^c + b. \tag{4}$$
|
| 98 |
+
|
| 99 |
+
The score vector $z:=(z^1,...,z^{n_c})$ is followed by softmax and cross-entropy loss. At training, we use f as pre-trained and we fine-tune it while optimizing $\mathbf{w}, b$ on the correct answers of the QA training set. At inference, we predict $\arg\max_c z^c$.
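In code, the per-stream scoring head reduces to a single linear layer over the joint embeddings; the sketch below uses illustrative dimensions and a random tensor standing in for the embeddings of (3):

```python
import torch
import torch.nn as nn

d, n_c = 768, 4
classifier = nn.Linear(d, 1)                  # parameters w, b of (4)
loss_fn = nn.CrossEntropyLoss()

y = torch.randn(n_c, d)                       # y^c for each candidate answer c
z = classifier(y).squeeze(-1)                 # scores z^c, shape (n_c,)

loss = loss_fn(z.unsqueeze(0), torch.tensor([2]))   # index of the correct answer
prediction = z.argmax().item()                # inference: argmax_c z^c
```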
|
| 100 |
+
|
| 101 |
+
Episode input sources refer to the entire episode of the scene of the question, *i.e.*, *episode dialog summary* and *plot*. Because such input is typically longer than the transformer's maximum sequence length k (1), we split it into overlapping parts in a *sliding window* fashion. Each part contains the question and one answer, so the window length is $w = k - |q| - |a^c|$ . Given an input of length $\ell$ tokens, the number of parts is $n := \left\lceil \frac{\ell - w}{s} \right\rceil + 1$ , where s is the *stride*. Because all inputs in a mini-batch must have the same number of parts $n_p$ to be stacked in a tensor, certain parts are zero-padded if $n < n_p$ and discarded if $n > n_p$ .
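A possible implementation of this sliding-window split (parameter values are illustrative, not the paper's settings):

```python
import math

def split_into_parts(tokens, q_len, a_len, k=512, stride=256, n_p=8):
    """Split a long token list into overlapping parts of window length w = k - |q| - |a^c|."""
    w = k - q_len - a_len
    n = math.ceil(max(len(tokens) - w, 0) / stride) + 1
    parts = [tokens[j * stride: j * stride + w] for j in range(n)]
    parts = parts[:n_p]                         # discard extra parts if n > n_p
    parts += [[]] * (n_p - len(parts))          # pad with empty parts if n < n_p
    return parts

parts = split_into_parts(list(range(2000)), q_len=20, a_len=12)
print([len(p) for p in parts])
```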
|
| 102 |
+
|
| 103 |
+
**Embedding** The input strings of the parts are denoted by $p_j$ for $j=1,\ldots,n_p$ . Each part $p_j$ is combined with each candidate answer $a^c$ separately, yielding the d-dimensional vectors
|
| 104 |
+
|
| 105 |
+
<span id="page-4-5"></span>
|
| 106 |
+
$$y_j^c := f(p_j + q, a^c) \tag{5}$$
|
| 107 |
+
|
| 108 |
+
for $c=1,\ldots,n_c$ and $j=1,\ldots,n_p$ . A classifier with parameters $\mathbf{w}\in\mathbb{R}^d,\,b\in\mathbb{R}$ yields a score per candidate answer c and part j:
|
| 109 |
+
|
| 110 |
+
<span id="page-4-3"></span>
|
| 111 |
+
$$z_j^c := \mathbf{w}^\top \cdot y_j^c + b. \tag{6}$$
|
| 112 |
+
|
| 113 |
+
**Temporal attention** At this point, unlike scene inputs (4), predictions from (6) are not meaningful unless a part j is known, which amounts to *temporal localization* of the part of the input sequence that contains the information needed to answer a question.
|
| 114 |
+
|
| 115 |
+
In TVQA [20] and related work [21, 14, 16], localization ground truth is available, allowing a two-stage localize-then-answer approach. Without such information, the problem is *weakly supervised*.
|
| 116 |
+
|
| 117 |
+
Previous work [10] simply chooses the part j corresponding to the maximum score $z_j^c$ over all answers c and all parts j in (6), which we call *hard temporal attention* in the following. Such a hard decision may be harmful when the chosen j is incorrect, especially when the predicted answer happens to be correct, because then the model may receive arbitrary gradient signals at training. To alleviate this, we follow a *soft temporal attention* approach.
|
| 118 |
+
|
| 119 |
+
In particular, let S be the $n_p \times n_c$ matrix with elements $z_j^c$ over all answers c and all parts j (6). For each part j, we take the maximum score over answers
|
| 120 |
+
|
| 121 |
+
$$s_j := \max_{c} z_j^c, \tag{7}$$
|
| 122 |
+
|
| 123 |
+
giving rise to a vector $s := (s_1, \ldots, s_{n_p})$, containing a single best score per part. Then, by soft assignment over the rows of S (corresponding to parts), we obtain a score for each answer c, represented by the *score vector* $z \in \mathbb{R}^{n_c}$:
|
| 124 |
+
|
| 125 |
+
<span id="page-4-6"></span>
|
| 126 |
+
$$z := \operatorname{softmax}(s/T)^{\top} \cdot S, \tag{8}$$
|
| 127 |
+
|
| 128 |
+
where T is a temperature parameter. With this definition of z, we have a single score vector and we proceed as in (4).
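The soft temporal attention of (7)-(8) amounts to a few tensor operations; a sketch with an assumed temperature value:

```python
import torch
import torch.nn.functional as F

def soft_temporal_attention(S: torch.Tensor, T: float = 0.1) -> torch.Tensor:
    """S: (n_p, n_c) part-by-answer scores z_j^c from (6); returns the score vector z of (8)."""
    s = S.max(dim=1).values                # (7): best score per part over answers
    weights = F.softmax(s / T, dim=0)      # soft assignment over parts
    return weights @ S                     # (8): (n_c,) scores, one per answer

S = torch.randn(8, 4)                      # 8 parts, 4 candidate answers
z = soft_temporal_attention(S)
print(z.argmax().item())                   # predicted answer index
```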
|
| 129 |
+
|
| 130 |
+
Once a separate transformer has been fine-tuned for each stream, we combine all streams into a single question-answering classifier, which amounts to multimodal fusion. Here, we introduce two simple new solutions.
|
| 131 |
+
|
| 132 |
+
In both cases, we freeze all transformers and obtain d-dimensional embeddings $y^c$ for each candidate answer c and for each stream. For scene inputs, $y^c$ is obtained directly from (3). Episode input streams produce $n_p$ embeddings per answer. Temporal localization is thus required for part selection, similar to single stream training. Again, hard temporal attention amounts to choosing the part with the highest score according to (6): $y^c := y_{j^*}^c$ where $j^* := \arg\max_j(z_j^c)$ and $y_j^c$ is given by (5). Instead, similar to (8), we follow soft temporal attention:
|
| 133 |
+
|
| 134 |
+
$$y^c := \operatorname{softmax}(s/T)^{\top} \cdot Y_c^{emb}, \tag{9}$$
|
| 135 |
+
|
| 136 |
+
|
| 137 |
+
where $Y_c^{emb}$ is a $n_p \times d$ matrix collecting the embeddings $y_j^c$ (5) of all parts j. Finally, for each answer c, the embeddings $y^c$ of all streams are stacked into a $n_s \times d$ embedding matrix $Y_c$ , where $n_s$ is the number of streams.
|
| 138 |
+
|
| 139 |
+
**Multi-stream attention** The rows of $Y_c$ are embeddings of different streams. We weight them according to weights $w_c \in \mathbb{R}^{n_s}$ obtained from $Y_c$ itself, using a *multi-stream attention* block, consisting of two fully connected layers followed by softmax:
|
| 140 |
+
|
| 141 |
+
$$Y_c^{\text{att}} = \text{diag}(w_c) \cdot Y_c. \tag{10}$$
|
| 142 |
+
|
| 143 |
+
<span id="page-5-1"></span>For each answer c, a fully connected layer maps the $d \times n_s$ matrix $Y_c^{\rm att}$ to a scalar score. All $n_c$ scores are followed by softmax and cross-entropy loss, whereby the parameters of all layers are jointly optimized.
|
| 144 |
+
|
| 145 |
+
**Self-attention** Alternatively, $Y_c$ is mapped to $Y_c^{\text{att}} \in \mathbb{R}^{n_s \times d}$ by a single *multi-head self-attention* block, as in transformers [31]:
|
| 146 |
+
|
| 147 |
+
$$Y_c^{\text{att}} = \text{MultiHeadAttention}(Y_c, Y_c, Y_c). \tag{11}$$
|
| 148 |
+
|
| 149 |
+
|
| 150 |
+
The remaining pipeline is the same as in the previous case.
|
2108.13702/main_diagram/main_diagram.drawio
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2108.13702/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,128 @@
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Image extrapolation or out-painting refers to the problem of extending an input image beyond its boundaries. While the problem has applications in virtual reality, sharing photos on social media like Instagram, and even generating scenes during game development especially if the scenes are repetitive, it is relatively under-explored compared to the image inpainting counterpart, which has been extensively researched. Image inpainting solutions based on deep networks and generative adversarial networks (GANs), when applied to the out-painting problem, have been shown to yield poor results [@teterwak2019boundless]. This has led to researchers exploring and proposing new solutions to the out-painting problem [@Zhang_2020; @yang2019very; @wang2019wide]. However, the solutions have been mainly restricted to images that involve outdoor domains like natural scenes where the problem is limited to just extending the existing textures for 'stuff' classes like mountains, water, trees [@guo2020spiral; @teterwak2019boundless] or single-object images of classes like faces, flowers, and cars. These methods are not suitable to other domains like traffic scenes and indoor scenes where a desirable image extrapolation necessitates 1) extending not only the 'stuff' classes but also the 'things' classes like cars, persons, beds, tables that have very definite structure as well as 2) adding new objects based on the context that were not present in the original image. So, why cannot we use the existing techniques [@Zhang_2020; @yang2019very; @wang2019wide; @teterwak2019boundless] for such domains? The answer is they fail spectacularly by filling the extrapolated region with artifacts (see figures [4](#fig:baselines_city){reference-type="ref" reference="fig:baselines_city"} and [5](#fig:baselines_ade){reference-type="ref" reference="fig:baselines_ade"}). They attempt to extrapolate the image by capturing the low-level statistics like textures and colors from the input image while ignoring the high-level information like object semantics and object co-occurrence relationships. In short, they are limited in their ability to perform satisfactory image extrapolation that demands the creation of new object instances and the extension of multiple objects from diverse classes.
|
| 4 |
+
|
| 5 |
+
We address the shortcomings of the previous works by extrapolating the image in the semantic label map space, which enables us to generate new objects in the extrapolated region. Additionally, semantic label maps belong to a lower dimensional manifold than images, making it easier to extrapolate them. However, just having a semantic label map does not allow us to have control over every instance in the extrapolated image. We propose to generate an estimate of the panoptic label directly from the extrapolated semantic label map, different from [@kirillov2019panoptic; @cheng2020panoptic]. Instance boundary maps obtained from panoptic labels also help in creating crisper boundaries between objects belonging to the same semantic category. Unlike semantic label map to image generation [@park2019semantic; @isola2017image; @wang2018high], we have to maintain texture consistency between the input and the extrapolated regions. To account for this, we propose Instance-aware context normalization (IaCN), which leverages the estimated panoptic label maps to transfer instance-wise average color information as a feature map for texture consistency in the extrapolated parts of the corresponding object instances. In addition, we propose the use of patch co-occurrence discriminator [@park2020swapping] to maintain global texture similarity in input and extrapolated region.
|
| 6 |
+
|
| 7 |
+
Our contributions can be summarized below:
|
| 8 |
+
|
| 9 |
+
- We propose a novel paradigm for image out-painting by extrapolating the image in the semantic label space to generate novel objects in the extrapolated region.
|
| 10 |
+
|
| 11 |
+
- We propose the estimation of panoptic label maps from the extrapolated semantic label maps to facilitate the generation of high quality object boundaries in the extrapolated image.
|
| 12 |
+
|
| 13 |
+
- We propose Instance-aware Context Normalization (IaCN) and the use of patch co-occurrence discriminator to maintain texture consistency of extrapolated instances.
|
| 14 |
+
|
| 15 |
+
Through extensive experiments on Cityscapes and ADE20K datasets, we show that our method outperforms all previous state-of-the-art methods in terms of FID and similarity in object co-occurrence metrics.
|
| 16 |
+
|
| 17 |
+
<figure id="fig:pipeline">
|
| 18 |
+
<div class="center">
|
| 19 |
+
<embed src="figures/pipeline.pdf" style="width:90.0%" />
|
| 20 |
+
</div>
|
| 21 |
+
<figcaption><strong>Overview of the pipeline:</strong> Stage 1: The input image is fed into a pre-trained segmentation network to obtain its label map. Stage 2: The stage 1 output is fed into a network to obtain the extrapolated label map. Stage 3: The extrapolated label map is fed into another network to obtain the panoptic label map. Stage 4: The input image, the extrapolated label map and the panoptic label map are used in conjunction with the Instance-aware context normalization module to obtain the final extrapolated image.</figcaption>
|
| 22 |
+
</figure>
|
| 23 |
+
|
| 24 |
+
# Method
|
| 25 |
+
|
| 26 |
+
Our goal is to extrapolate a given image $\mathbf{X} \in \mathbb{R}^{h\times w\times c}$ on its periphery using a sequence of deep neural networks. $\mathbf{Y} \in \mathbb{R}^{h_1\times w_1\times c}$ is the extrapolated image where $h_1\geq h$ and $w_1\geq w$. Here, $c$ represents the number of channels corresponding to the image, which is three for an RGB image. The pipeline shown in figure [1](#fig:pipeline){reference-type="ref" reference="fig:pipeline"} involves four major stages:
|
| 27 |
+
|
| 28 |
+
- Image segmentation: Generation of semantic label map from the input image.
|
| 29 |
+
|
| 30 |
+
- Semantic label map extrapolation: Extend periphery in the semantic label space.
|
| 31 |
+
|
| 32 |
+
- Panoptic label estimation: The semantic label map is processed to obtain an a priori estimate of the corresponding panoptic label map.
|
| 33 |
+
|
| 34 |
+
- Instance-aware image synthesis: Generation of image from the semantic label map and panoptic label map by leveraging the proposed IaCN module and patch co-occurrence discriminator.
|
| 35 |
+
|
| 36 |
+
Given an image $\mathbf{X}\in\mathbb{R}^{h\times w\times c}$, corresponding one-hot vector for semantic label map $\mathbf{L_{1}}\in\{0,1\}^{h\times w\times c_1}$ can be obtained using state-of-the-art segmentation techniques [@zhao2017pyramid; @tao2020hierarchical; @cheng2020panoptic; @zhang2020resnest; @yu2020context]. For our method, we use PSPNet [@zhao2017pyramid].
|
| 37 |
+
|
| 38 |
+
<figure id="fig:stagewise_result">
|
| 39 |
+
<div class="center">
|
| 40 |
+
<embed src="figures/all_stage.pdf" />
|
| 41 |
+
</div>
|
| 42 |
+
<figcaption><strong>Stage-wise results:</strong> The input (cropped) image is converted to semantic label map in stage 1, which is then extrapolated in stage 2 to form the outpainted semantic label map. Panoptic label maps are generated from this semantic label map in stage 3. The input image, the (outpainted) semantic label map and the panoptic label map are used to synthesize the final image in stage 4.</figcaption>
|
| 43 |
+
</figure>
|
| 44 |
+
|
| 45 |
+
We train a network, dubbed 'Peripheral Object Generation (POGNet)', $G_{\mathbf{S}}$ to semantically extrapolate $\mathbf{L_1}$ and obtain an estimate of the semantic label map, $\mathbf{L_2}$ of the final extrapolated image to be generated. In addition to generating $\mathbf{L_2}$, we also output the corresponding instance boundary channel. Although [@song2018spg] uses input image with semantic label map to generate extrapolated semantic label map, having explicit supervision with ground truth instance boundary map acts as a better regularizer during training for obtaining more precise object shapes. POGNet is trained using a multi-scale discriminator as proposed in [@wang2018high], enabling $G_{\mathbf{S}}$ to capture the object co-occurrence information at various scales.\
|
| 46 |
+
**Adversarial Loss:** Instead of regular GAN loss [@goodfellow2014generative], we use LS-GAN loss [@mao2017least] ($\mathbf{\mathcal{L}}_{GAN}$).
|
| 47 |
+
|
| 48 |
+
**Focal Loss:** We use focal loss to compute the discrepancy between the ground truth semantic label map and the output of the POGNet. By giving higher weight to hard-to-generate object classes, focal loss allows us to generate some of the rare classes. The focal loss between the ground-truth and the output at any location is given as: $$l(z, y) = -y \times \log(z)$$ $$\mathcal{L}_{CE}(z, y) = \sum_{h,w,c} l(z,y)$$ $$\mathcal{L}_{FL}(z, y) = l(z,y) \times (1-z)^\gamma$$ The final focal loss, $\mathcal{L}_{FL}^{all}$, is given by the sum of focal losses across all locations in the semantic label map. We use the following training objective for semantic label map extrapolation (we show only the generator losses here): $$\begin{equation}
|
| 49 |
+
\label{eqn:stage2_obj}
|
| 50 |
+
\begin{split}
|
| 51 |
+
\mathcal{L}_{gen} = \mathcal{L}_{GAN} + \mathcal{L}_{FM} + \lambda_{FL}\mathcal{L}_{FL}^{all} + \lambda_{CE}\mathcal{L}_{CE},
|
| 52 |
+
\end{split}
|
| 53 |
+
|
| 54 |
+
\end{equation}$$ where $\mathcal{L}_{CE}$ is the cross-entropy loss between the ground-truth instance boundary and the corresponding output channel in POGNet and $\mathcal{L}_{FM}$ is the discriminator feature matching loss. More details can be found in Section [11.1](#sec:stage2_obj){reference-type="ref" reference="sec:stage2_obj"}.
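A sketch of this focal loss over a semantic label map (the value of $\gamma$ and the class count are illustrative, not the settings used in the paper):

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, target_onehot, gamma=2.0):
    """logits: (B, C, H, W) POGNet output; target_onehot: (B, C, H, W) ground-truth one-hot map."""
    z = torch.softmax(logits, dim=1).clamp_min(1e-7)   # per-pixel class probabilities
    ce = -(target_onehot * torch.log(z))               # l(z, y) at every location and class
    fl = ce * (1.0 - z) ** gamma                       # down-weight easy locations
    return fl.sum(), ce.sum()                          # focal and cross-entropy sums

logits = torch.randn(1, 19, 64, 128, requires_grad=True)
labels = torch.randint(0, 19, (1, 64, 128))
target = F.one_hot(labels, 19).permute(0, 3, 1, 2).float()
loss_fl, loss_ce = focal_loss(logits, target)
loss_fl.backward()
```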
|
| 55 |
+
|
| 56 |
+
As mentioned earlier, we wish to estimate the panoptic label maps (for the to-be-generated extrapolated image) that can be leveraged for the IaCN module (discussed in [\[sec:iacn\]](#sec:iacn){reference-type="ref" reference="sec:iacn"}) as well as obtain crisp and precise boundaries between the object instances. Traditionally, the panoptic label maps are generated from the images. But how do we estimate panoptic label maps, a priori, without knowing the image itself? We adapt the method elucidated in Panoptic-DeepLab [@cheng2020panoptic] by predicting the class-agnostic pixel-wise instance center maps and offset maps from the instance centers for objects belonging to 'things' classes, directly from the semantically extrapolated map, i.e., the output of POGNet. Specifically, we train a generator-only network that takes in the extrapolated segmentation map and produces heat maps for instance centers and the pixel-wise offsets from the nearest instance center. The center heat maps and the offset outputs are further processed along with the segmentation map to obtain the instance maps. The ground-truth center maps are represented by Gaussian blobs with a standard deviation of 8 pixels, centered at the instance centers. We use $L_2$ loss to compute the instance center loss and $L_1$ loss to compute the offset losses. The final loss for stage 3 is the weighted sum of the center loss and the offset losses.
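A sketch of the stage 3 training targets and losses (the loss weights below are illustrative, not the paper's values):

```python
import torch
import torch.nn.functional as F

def center_heatmap(centers, height, width, sigma=8.0):
    """Ground-truth center heatmap: a Gaussian blob (std 8 px) at each instance center."""
    ys = torch.arange(height, dtype=torch.float32).view(-1, 1)
    xs = torch.arange(width, dtype=torch.float32).view(1, -1)
    heat = torch.zeros(height, width)
    for cy, cx in centers:
        g = torch.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
        heat = torch.maximum(heat, g)
    return heat

def stage3_loss(pred_heat, gt_heat, pred_offset, gt_offset,
                w_center=200.0, w_offset=0.01):
    """L2 loss on center heatmaps plus L1 loss on offsets (weights are assumptions)."""
    return w_center * F.mse_loss(pred_heat, gt_heat) + \
           w_offset * F.l1_loss(pred_offset, gt_offset)

gt = center_heatmap([(32, 48), (90, 200)], height=128, width=256)
```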
|
| 57 |
+
|
| 58 |
+
At test time, we adapt the procedure of [@cheng2020panoptic] to group the pixels based on the predicted centers and offsets to form instance masks. The instance masks and the semantic label map (the input to stage 3) are combined by majority voting to obtain the panoptic label map. Expanded details on training the network and on post-processing are provided in Section [8](#sec:stage3_supp){reference-type="ref" reference="sec:stage3_supp"}.
|
| 59 |
+
|
| 60 |
+
This is the final stage (stage 4) which converts the extrapolated semantic label map back into a colored image. This stage takes in input $\mathbf{X'} (\in \mathbb{R}^{h_1 \times w_1 \times c'})$ (Figure [1](#fig:pipeline){reference-type="ref" reference="fig:pipeline"}), which is concatenation of the extrapolated semantic label map obtained from the second stage, the cropped (input) image, the boundary map obtained using the panoptic label map obtained from the previous stage and the feature map obtained using the proposed Instance-aware Context Normalization. The output is an RGB image $\mathbf{Y} \in \mathbb{R}^{h_1 \times w_1 \times 3}$.
|
| 61 |
+
|
| 62 |
+
This differs from prior conditional GAN problems [@isola2017image; @liu2019learning; @park2019semantic; @wang2018high]: they synthesize RGB images from semantic label maps alone, whereas we have to synthesize the RGB image from a semantic label map given some pixels of the image to be synthesized, namely the cropped image in our case. Here, we have to take care of texture consistency in the synthesized image while maintaining an identity mapping from the input image to the corresponding part in the final image.
|
| 63 |
+
|
| 64 |
+
**Generator**\
|
| 65 |
+
We use SPADE [@park2019semantic] normalization residual blocks for each of the layers in the generator. We use similar learning objective functions, as used in SPADE [@park2019semantic] and pix2pixHD [@wang2018high]: GAN loss with hinge-term [@lim2017geometric; @miyato2018spectral; @zhang2019selfattention] ($\mathcal{L}_{GAN}$), Feature matching loss [@wang2018high] based on the discriminator ($\mathcal{L}_{FM}$) and VGGNet [@simonyan2015deep] for perceptual losses [@DB16c; @johnson2016perceptual] ($\mathcal{L}_{VGG}$)
|
| 66 |
+
|
| 67 |
+
**Instance-aware Context Normalization (IaCN)**\
|
| 68 |
+
[]{#sec:iacn label="sec:iacn"}Outpainting-SRN [@wang2019wide] proposed Context Normalization (CN) to maintain texture consistency between the inside (cropped) region and the outside (outpainted) region. It involves transferring the mean feature or color from the inside region to the outside region. However, we believe that transferring this input mean color directly to the outside region is not suitable for images that have very diverse object instances (like outdoor images, street images).
|
| 69 |
+
|
| 70 |
+
To this end, we propose Instance-aware Context Normalization (IaCN) (Figure [1](#fig:pipeline){reference-type="ref" reference="fig:pipeline"}), which takes as input the cropped image and the instance map. The IaCN module computes the mean color, using the input (cropped) image, for all partial instances. Partial instances are instances that get extended into the extrapolated part of the final image. Since the texture-consistency problem occurs only for partial instances, we compute features only for them. The computed feature maps are then concatenated to the input.
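A minimal sketch of the IaCN idea, assuming the cropped region is given as a binary mask and the instances as an integer id map; only instances that extend beyond the cropped region (partial instances) receive a mean-color feature, which is then concatenated to the network input. Variable names and shapes are illustrative.

```python
import torch

def iacn_features(image, inside_mask, instance_map):
    """Instance-aware Context Normalization features (sketch).

    image:        (3, H, W) input canvas with the cropped image pasted in
                  (zeros outside the cropped region).
    inside_mask:  (1, H, W) binary mask, 1 inside the cropped region.
    instance_map: (H, W) integer instance ids over the full canvas.
    Returns a (3, H, W) feature map holding, for every partial instance,
    the mean color of its visible (inside) pixels.
    """
    feat = torch.zeros_like(image)
    for inst_id in instance_map.unique():
        if inst_id == 0:                       # background / 'stuff'
            continue
        inst = (instance_map == inst_id)
        inside = inst & (inside_mask[0] > 0)
        outside = inst & (inside_mask[0] == 0)
        # Partial instance: visible inside the crop but extends outside it.
        if inside.any() and outside.any():
            mean_color = image[:, inside].mean(dim=1)   # (3,)
            feat[:, inst] = mean_color[:, None]
    return feat
```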
|
| 71 |
+
|
| 72 |
+
<figure id="fig:D_patch" data-latex-placement="t">
|
| 73 |
+
<div class="center">
|
| 74 |
+
<embed src="figures/cooccur_disc.pdf" style="width:90.0%" />
|
| 75 |
+
</div>
|
| 76 |
+
<figcaption><strong>Patch Discriminator</strong>: <span class="math inline"><em>D</em><sub><em>p</em><em>a</em><em>t</em><em>c</em><em>h</em></sub></span> takes as input 4 reference patches, a fake patch and a real patch. The reference patches are randomly selected from the real image. The fake patch and the real patch are taken from the same randomly selected location in the fake image and the real image respectively, chosen such that part of the patch lies inside the input region and part lies outside. The discriminator tries to distinguish the fake patch from the real patch, making use of the reference patches. All the patches are of size <span class="math inline">64 × 64</span>.</figcaption>
|
| 77 |
+
</figure>
|
| 78 |
+
|
| 79 |
+
<figure id="fig:baselines_city">
|
| 80 |
+
<div class="center">
|
| 81 |
+
<embed src="figures/cityscapes_baseline.pdf" />
|
| 82 |
+
</div>
|
| 83 |
+
<figcaption><strong>Cityscapes dataset:</strong> Our method is able to generate new objects in the extrapolated region, leading to realistic image extrapolation. Except for ours and SPGNet, all other methods fail to generate new objects in the extrapolated region.</figcaption>
|
| 84 |
+
</figure>
|
| 85 |
+
|
| 86 |
+
<figure id="fig:baselines_ade">
|
| 87 |
+
<div class="center">
|
| 88 |
+
<embed src="figures/ade_baseline.pdf" />
|
| 89 |
+
</div>
|
| 90 |
+
<figcaption><strong>ADE20K dataset:</strong> Our method is able to generate new objects in the extrapolated region, leading to realistic image extrapolation. Only ours generates new content; all other methods merely copy texture patches from the inside region into the extrapolated region.</figcaption>
|
| 91 |
+
</figure>
|
| 92 |
+
|
| 93 |
+
**Discriminators**\
|
| 94 |
+
We propose to use two discriminators, i) a traditional image discriminator (multi-scale discriminator) that attempts to differentiate between the real and the fake image, ii) a patch co-occurrence discriminator similar to [@park2020swapping], which employed it to ensure texture transfer [@karras2020analyzing; @xian2018texturegan] from an input image to the target image to be edited. We employ a similar idea wherein the region in the image that needs to be extrapolated takes the role of the target image (equation [\[eq:swap_1\]](#eq:swap_1){reference-type="ref" reference="eq:swap_1"}). This facilitates consistent texture transfer from the inside region (source) to the extrapolated region (target) (illustrated in Figure [3](#fig:D_patch){reference-type="ref" reference="fig:D_patch"}). $$\begin{equation}
|
| 95 |
+
\label{eq:swap_1}
|
| 96 |
+
\begin{split}
|
| 97 |
+
&\mathcal{L}_{CooccurGAN}\left(G, D_{patch}\right) = \\ &\mathbb{E}_{x,y}[-log(D_{patch}(crop(G(x)), crop(y), crops(y)))]
|
| 98 |
+
\end{split}
|
| 99 |
+
\end{equation}$$ Here $x$ is the input and $y$ is the corresponding ground-truth image. The $crop(y)$ function takes a random patch from image $y$, and $crops(y)$ takes 4 random patches from $y$, which serve as the reference patches.
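The sketch below mirrors equation [\[eq:swap_1\]](#eq:swap_1){reference-type="ref" reference="eq:swap_1"}: a patch is cropped from the generated image and scored by the co-occurrence discriminator against a real patch and reference patches from the real image. The 64×64 patch size follows the figure caption; the `D_patch` interface (logit output), the purely random crops (the paper additionally constrains the fake/real patch to straddle the inside/outside boundary), and the non-saturating form are assumptions.

```python
import torch
import torch.nn.functional as F

def random_patch(img, size=64):
    """Crop one random size x size patch from a (B, C, H, W) image batch."""
    _, _, H, W = img.shape
    top = torch.randint(0, H - size + 1, (1,)).item()
    left = torch.randint(0, W - size + 1, (1,)).item()
    return img[:, :, top:top + size, left:left + size]

def cooccur_gan_loss(D_patch, fake_img, real_img, n_refs=4):
    """Generator-side co-occurrence loss, -log D(crop(G(x)), crop(y), crops(y)).

    D_patch(fake_patch, real_patch, ref_patches) -> logits; assumed interface.
    """
    refs = torch.stack([random_patch(real_img) for _ in range(n_refs)], dim=1)
    fake_patch = random_patch(fake_img)
    real_patch = random_patch(real_img)
    logits = D_patch(fake_patch, real_patch, refs)
    # BCE-with-logits against a target of 1 equals -log(sigmoid(logits)).
    return F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
```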
|
| 100 |
+
|
| 101 |
+
The details of the network architectures for all generators and discriminators for the various stages are provided in Section [7](#sec:net_arch){reference-type="ref" reference="sec:net_arch"}.
|
| 102 |
+
|
| 103 |
+
:::: table*
|
| 104 |
+
::: center
|
| 105 |
+
| Method | (Bed, Lamp) | (Wall, Window) | (Bed, Curtain) | (Floor, Table) | (Wall, Painting) |
|-----------------|----------|----------|----------|----------|----------|
| Outpainting-SRN | 0.66 | 0.82 | **0.94** | 0.77 | 0.64 |
| Boundless | 0.79 | 0.82 | 0.87 | 0.75 | 0.76 |
| Pconv | 0.75 | 0.85 | 0.83 | 0.77 | 0.83 |
| SPGNet | 0.77 | 0.53 | 0.51 | 0.84 | 0.82 |
| SPGNet++ | 0.79 | 0.87 | 0.85 | 0.81 | 0.83 |
| **Ours** | **0.82** | **0.90** | 0.84 | **0.87** | **0.84** |
|
| 113 |
+
:::
|
| 114 |
+
::::
|
| 115 |
+
|
| 116 |
+
**Variational Autoencoder**\
|
| 117 |
+
To ensure appropriate style transfer from the input image, we use an encoder that processes the input image, which is then fed to the generator. We use the encoder used in [@park2019semantic]. This encoder forms a VAE [@kingma2013auto] with the generator. In the objective function, we add a KL-Divergence Loss term [@kingma2013auto] ($\mathcal{L}_{KLD}$).\
|
| 118 |
+
\
|
| 119 |
+
\
|
| 120 |
+
**Final Objective**\
|
| 121 |
+
The training objective is as shown below in equation [\[eqn:stage3_obj\]](#eqn:stage3_obj){reference-type="ref" reference="eqn:stage3_obj"}: $$\begin{equation}
|
| 122 |
+
\label{eqn:stage3_obj}
|
| 123 |
+
\begin{split}
|
| 124 |
+
% \small
|
| 125 |
+
\min\limits_{G} \{ &\mathcal{L}_{GAN} + \lambda_{FM}\mathcal{L}_{FM} + \lambda_{VGG}\mathcal{L}_{VGG} \\ & + \lambda_{KLD}\mathcal{L}_{KLD} + \mathcal{L}_{CooccurGAN} \}
|
| 126 |
+
\end{split}
|
| 127 |
+
|
| 128 |
+
\end{equation}$$
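Putting the stage-4 terms together, here is a minimal sketch of the weighted objective in equation [\[eqn:stage3_obj\]](#eqn:stage3_obj){reference-type="ref" reference="eqn:stage3_obj"}. The individual loss terms are assumed to be computed elsewhere, and the lambda values shown are placeholders rather than the paper's settings.

```python
def generator_objective(losses, lambda_fm=10.0, lambda_vgg=10.0, lambda_kld=0.05):
    """Weighted sum of the stage-4 generator losses (sketch).

    `losses` is a dict of precomputed scalar tensors:
    'gan' (hinge GAN loss), 'fm' (feature matching), 'vgg' (perceptual),
    'kld' (VAE KL divergence), 'cooccur' (patch co-occurrence GAN loss).
    The lambda weights are placeholders, not the paper's values.
    """
    return (losses['gan']
            + lambda_fm * losses['fm']
            + lambda_vgg * losses['vgg']
            + lambda_kld * losses['kld']
            + losses['cooccur'])
```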
|
2201.09871/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2021-09-28T04:14:34.261Z" agent="5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36" etag="NNex47eVfgietKH0jPG5" version="15.3.1" type="device"><diagram id="qh38Bw-0RmpW14H_aMjN" name="Page-1">7V1bk6JIGv01Po5BJoLw2FqXmZjpiY6p7d3ZfZmgNFuZQjAw7dL+9ZsoiZiAJJe8lEM9VEiCCX7nfFfyMjLnm8Nz7G3Xn6MlCkbQWB5G5sMIQtuyyP+k4XhuMCfTc8Mq9pfnJnBpePF/oHNj+rXV3l+i3dV1OIoC7G+vGxdRGKIFvmrz4jh6v77sWxRc33TrrdIbGpeGl4UXoMJl//GXeH1udazc1T8jf7WmdwZGembj0YvTLnZrbxm95+5lPo7MeRxF+Pxpc5ijIBEdFcu5o6eKs7RXfKSPGkSLN5ScBSNzRh8vRiHm6W7/8Iu5c5beBr/Nvv77B3iL/v7fT+k9vnvBPr1H4aYoXH5KxEyOwigkjbM13gSVD5H2iJarvHzTR3pG0Qbh+EgueL/Imop6nRMzbYtR4GH/+zVWXgr5Kusuu8OXyCdPAo2UnGCasoxy0zauu9hF+3iB0m/lpVfTEXSYjrAXrxAudEQ+5H72pekETgOg4L0DRYHpCtSkph/BOJn3jhOjB5NJTwolG6hJCVB2QIQ1eyUfVsmHn1GwjPbJMz8/f07cUuz5oR+uEjgRppeTu2ffKGCN0QFfAxyjnf/Dez1dYJBjb4+j3dknJqe9wF+F5POC4I9i0vAdxdgn3upTemLjL5fJl2fbRDInWVmzkfVAWr75QTCPgii+MGuH4+gNMY1xtA+XJz9i3GJccmd0uMm59Cz9Bmsbc5TM9DvPSWhU0+8K76bgWvXgPqMQxR4mYiCAeZttQGKQAVGKqOmMGT2n4VoeU9rWN6b4GD1Nn62Xr5uXIPzysPrN+NPsyQMyZrRETB/eA4qzrNHr30kEDo3Ae0XBubPnv1YoPPd/2zHC3E9P8W4Z2pIP6ZM0MQllBp/hz4ro0bZ48xu/p0SnskwlNQmjfDJQpmsAVgCf412Z9WRh7c16TsrMJyMqkutsk4/v3vFkthLL9IbwYp3aoMRyvaSX76Igud8sNYo2NXUofvyOzhYP8Jo/1tjGEfZw7niLYp9IITG0qcX9cmmZ7UJv+6/ozPwqqt1mT70FpU6wCOKkBEQ2fOoPRHsAsT2IB01AnA4gtgdRGWrOgFpvqqeLOXXrMb0ZfqKDj/9MhDyewPTwv6dDC07T44dDCsLp4Jg7yGFQEeHflLomtYMKSDtWeNhuBNcNrP5Lph8BO0bobas+8HY3orHrvzr3D8JOrd45HKHQHWJXozAt9U4ydtMhoeQuH1QGRFUvNlRFRNMhwewfVFc1qEPC2R1UtpBb8jpELqgc+ejt3CUkj5EkLz8ZYwNmLef8xZjYtOGSwJyOjvkjNoXJ5UNT9yohmkCaL/WeEFEsz/6OQw80iQKqGNU4hDMr7I2sQKBzFk2ZSPJm27wmom114WFCbcfMETFpMIWl5pSJZ+Q4mFhPWVsrymYc7TpQQDVn6e8QnXhQRmiCn03on/sDYHINJ2gJZ02/BdMmGl1Yj+4Q8NQEPNmQSF0iHoej0DOgWoMqa8AN1aByVBH6NMS1rpmyTBOLnfVDATNbmuhKdZZlkzkqC7xRouvkg0QwNqzsgjZRIukAWLkgkRybNHsRFiNqwi/ABmJts5DajgTzC4CuBNuRJ8TM6d6tC50MURf4O5rRpM568NLEnDJFa8kjhB2ON31DENEsiJBZtS4dRdrPG8Cbo0i7VZ6c+1BedlBjFj0KGYS6+mtNx+rDWrV2cj9fwUBUh6MG9g8zKw4tULcusavOTdyhjNAdVPZlmOoxXy7HkPEB1BpQmaS0bG6QXFCHsQj9a6py89u5YpB7C2QxryNdmvD3nuBTwdcHhW5p3KkqSmSD+ravdlRneK6kMWXcOFPjpAnO0wo1/3A4dx4AcSn/ufaVdTCnE9HWob5SpJd1AIChTdtXhXX0E82aroMVGuKsCXxsatXWuLNOQvKY06ySMQR6HQK9Qj1IdaQHjKEq2x1WVjfVozrUT/oviqmunwBjGIbRv66qR5WjLNZnXFQb/2Y+QZMIyoQ9BcAF7yt5vhwwug65yRKnq6QJCq+o8HOmNifPQLg3dqnOyoHRtWrXPynuFOrCEjPSDckwr6iHZIxFUX0oMKx00QOsur1NAcYwREFA6US5tlLvMcDapxFWrq1A0ook/GEUuNcwirXUssMowFEm4x0Zz06ftAActRgZL4w8HImZXi9O7icxA5LWyWmQmN0p1IVFX6VblK7FvA5TIbjhd+/VnyhG3x0CQgHpm8Q4v3SKQz/TJJtPcRDg4s8EVTaQpQrZj6bnABTLb19D7OPgtPz9b96RKA3LEQGbzQCecViEd9SeoOA1en+8NMxODeTEOor9H1GIveAUlmZkXgTebucvWgUjmZA08TKFnTbajr7LdlpStNMGADwDqQbYy2FvbXSUww7Logt2F47ffydXfEMe3scoAeGAY2+BSSDAsxfH9bYX72sfo5etd/I977G37caHBksVs5I2ixFAJg0pK/gDyFMT6l3lLtUFprRQU1hoCk19aQDqtTSDOelpGzLTZTpiUw/hSg2VEKt0GEHNKAJJZKwPdyFvuAugXovPsrRtvegTS9tCR8Jpy1HR0sSbFFS85AVDqTcR94YBchSJdJVe2V5kkqXHsxtZsrNqovibV7Rc+uGq7WZk/LuNsQWWYlWG2cisdOexksqNPHBLN5orA1fYRnNgWhbkMig12VaqWiqN95WaANaQFLdwK9vAzRKmCBzjvRqZkTz5RtB8eprPXbcBAzPwGu0lUyIzcSLjGDfVTWSOYTSJehqI7HglG0UCtDn0s4sA53PbFixAUDRyUiXIkUrqqrVUgoo5yJEzdZPgzLRtoRIseachU4JTDg6q8rIWHGvmZ6fC+dbUafDvP6uKYH3nhxJFpoWfnfadInb2s40FqNjP8myS042Cj/Z8LlaCijkoOL+wbdMUzEHVfpbjTZ0qP0sHq+jiZXn2BZCrr26FrLXxsjwrqsr1svwi08LL8uxWINfLNhagYi/LszOAXB/RXIKKOSg40nt6cl3TFCpBxV6WZyMDXbysMx2r9rO6RXUZfPr6Wd2qxg1Epoef5RioKNXPNhegaj/LkUlIrRq3kKBaDvIMzpdaNW4uQcV+1v1QVWP1npZnEXOpOutqXzd2hdeNxYlMC0/Ls8a6VE/bXICKPS3PguaaU1C1pxWcYbjuZC6Yg6o9rcZ1Y3YUlHI/C3lWppXLN90rx5Bn1VepGW0DkengZyHPAqty/ewHqxxDnsVMddVaLfws5Fk4VGpG21yCav0s5FqPc5hJmXbDev62MyktW+1MS
si1UOYAeznsredNq4e97zJ+U0Qb7PLFyop30hIQNjUDli5Oycy7+UxuRXhPsgovWOwJ7xJf8mmYe1OCcNVuLHmEQQnCAgEuZp1PvzxkGMcUui8xWvg7PwrJbf5ABI+geM0DCnc+Jg/1NI+IUDxiygrX/FrW+Xg8HqjBrpNSjDNLqcFuLstBDXIYRxHOW9pk6tznaJnsBfv4fw==</diagram></mxfile>
|
2201.09871/main_diagram/main_diagram.pdf
ADDED
|
Binary file (21 kB). View file
|
|
|
2201.09871/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,9 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Graph generation is a key problem in a wide range of domains such as molecule generation [\(Samanta](#page-11-0) [et al., 2020;](#page-11-0) [Popova et al., 2019;](#page-11-1) [Li et al., 2018;](#page-10-0) [Kong et al., 2021;](#page-10-1) [Jin et al., 2020\)](#page-10-2) and structure generation [\(Bapst et al., 2019;](#page-9-0) [Thompson et al., 2020\)](#page-11-2). An evaluation metric that is capable of accurately measuring the distance between a set of generated and reference graphs is critical for advancing research on graph generative models (GGMs). This is frequently done by comparing empirical distributions of graph statistics such as orbit counts, degree coefficients, and clustering coefficients through Maximum Mean Discrepancy (MMD) [\(You et al., 2018;](#page-12-0) [Gretton et al., 2006\)](#page-9-1). While these metrics are capable of making a meaningful comparison between generated and real graphs [\(You et al., 2018\)](#page-12-0), this evaluation method yields a metric for each individual statistic. In addition, recent works have further increased the number of metrics by performing MMD directly with node and edge feature distributions [\(Goyal et al., 2020\)](#page-9-2), or on alternative graph statistics such as graph spectra [\(Liao et al., 2019\)](#page-10-3). While this is not an issue provided there is a primary statistic of interest, all metrics are frequently displayed together to approximate generation quality and evaluate GGMs [\(You et al., 2018;](#page-12-0) [Liao et al., 2019\)](#page-10-3). This process makes it challenging to measure progress as the ranking of generative models may vary between metrics. In addition, the computation of the metrics from [You et al.](#page-12-0) [\(2018\)](#page-12-0) can be prohibitively slow [\(Liao et al., 2019;](#page-10-3) [O'Bray et al., 2022\)](#page-11-3), and they are based only on graph structure, meaning they do not incorporate edge and node features. Therefore, they are less applicable in specific domains such as molecule generation where such features are essential. This particular limitation has led to the use of the Neighborhood Subgraph Pairwise Distance kernel (NSPDK) (Costa & Grave, 2010) in GGM evaluation (Goyal et al., 2020; Podda & Bacciu, 2021; Kawai et al., 2019) as it naturally incorporates edge and node features. However, this metric is still unable to incorporate *continuous* features in evaluation (Costa & Grave, 2010). Faced with a wide array of metrics and ambiguity regarding when each should be the focus, the community needs robust and scalable *standalone* metrics that can consistently rank GGMs.
|
| 4 |
+
|
| 5 |
+
While less popular, metrics from image generation literature have been successfully utilized in GGM evaluation. These metrics rely on the use of a task-specific neural network to extract meaningful representations of samples, enabling a more straightforward comparison between generated and reference distributions (Preuer et al., 2018; Liu et al., 2019; Thompson et al., 2020). Although these metrics have been validated empirically in the image domain, they are not universally applicable to GGMs. For example, Fréchet Chemnet Distance (Preuer et al., 2018) uses a language model trained on SMILES strings, rendering it unusable for evaluation of GGMs in other domains. Furthermore, a pretrained GNN cannot be applied to datasets with a different number of edge or node labels. Pretraining a GNN for every dataset can be prohibitive, making the use of such metrics in GGM evaluation less appealing than in the more established and standardized image domain.
|
| 6 |
+
|
| 7 |
+
In image generation evaluation, classifiers trained on ImageNet (Deng et al., 2009) are frequently used to extract image embeddings (Bińkowski et al., 2018; Heusel et al., 2017; Kynkäänniemi et al., 2019; Xu et al., 2018; Naeem et al., 2020). While classifiers such as Inception v3 (Szegedy et al., 2016) are consistently used, recent works have investigated the use of randomly-initialized CNNs with no further training (hereafter referred to as *a random network*) in generative model evaluation. Xu et al. (2018); Naeem et al. (2020) found that a random CNN performs similarly to ImageNet classifiers on natural images and is superior outside of the natural image domain. In the graph domain, random GNNs have been shown to extract meaningful features to solve downstream graph tasks without training (Kipf & Welling, 2017; Morris et al., 2019; Xu et al., 2019). However, the applicability of random GNNs for the evaluation of GGMs remains unexplored.
|
| 8 |
+
|
| 9 |
+
In this work, we aim to identify one or more scalar metrics that accurately measure the dissimilarity between two sets of graphs to simplify the ranking of GGMs regardless of domain. We tackle this problem by exploring the use of random GNNs in the evaluation of GGMs using metrics that were developed in the image domain. In addition, we perform objective evaluation of a large number of possible evaluation metrics. We design experiments to thoroughly test each metric on its ability to measure the diversity and fidelity (realism) of generated graphs, as well as their sample and computational efficiency. We study three families of metrics: existing GGM evaluation metrics based on graph statistics and graph kernels, which we call classical metrics; image domain metrics using a random GNN; and image domain metrics using a pretrained GNN. We aim to answer the following questions empirically: (Q1) What are the strengths and limitations of each metric? (Q2) Is pretraining a GNN necessary to accurately evaluate GGMs with image domain metrics? (Q3) Is there a strong scalar and domain-agnostic metric for evaluating and ranking GGMs? Addressing these questions enabled us to reveal several surprising findings that have implications for GGM evaluation in practice. For example, regarding Q1, we identify a failure mode in the classical metrics in that they are poor at measuring the diversity of generated graphs. Consequently, we find several metrics that are more expressive. In terms of Q2, we determine that pretraining is unnecessary to utilize neural-network-based (NN-based) metrics. Regarding Q3, we find two scalar metrics that are appropriate for evaluating and ranking GGMs in certain scenarios; they are scalable, powerful, and can easily incorporate continuous or discrete node and edge features. These findings enable computationally inexpensive and domain-agnostic GGM evaluation.
|
2203.17008/main_diagram/main_diagram.drawio
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2203.17008/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,51 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Deep neural network quantization [@binaryconnect; @hwang2014fixed; @dorefa; @lin2016fixed] is a powerful tool for improving the computational efficiency of deep neural networks (DNNs). When accompanied by dedicated hardware designs [@outliernpu; @bitfusion; @tpu], the latency and energy consumption of DNNs can be greatly reduced.
|
| 4 |
+
|
| 5 |
+
One problem of quantized models is, however, that they often suffer from the significant drop in accuracy, mainly due to quantization errors [@lin2016fixed]. A popular way to address the problem is to further train or calibrate the model with training data [@choi2018pact; @jacob2018quantization; @rastegari2016xnor; @zhang2018lq; @zhou2017incremental; @dorefa]. During the fine-tuning procedure, the forward pass is performed with quantized values whereas the backpropagation is done with floating-point values to recover the accuracy loss in the initial quantization.
|
| 6 |
+
|
| 7 |
+
Unfortunately, such fine-tuning methods, which assume the full availability of training data at the time of quantization, are often not feasible in reality. Many models are disclosed to public only with their trained weights, and the dataset may contain proprietary, confidential, or sensitive data that fundamentally prohibit sharing.
|
| 8 |
+
|
| 9 |
+
*Zero-shot quantization* (or data-free quantization) [@outlier; @aciq; @dfq; @choi2020data; @dsg; @gdfq; @zeroq; @qimera; @autorecon] is therefore a necessary technique for quantization. It assumes that only the architecture and the pre-trained weights are available at the time of quantization. Current successful approaches are mainly led by generative approaches [@zeroq; @gdfq; @zaq; @dsg; @autorecon; @qimera]. Using synthetic samples from generators, knowledge distillation [@hinton2015distilling] is applied against full-precision models. The state-of-the-art methodology is known to achieve performance nearly on par with data-driven approaches (i.e., quantization with real samples) for 5-bit fixed-point quantization, and comparable performance in the 4-bit fixed-point setting [@qimera].
|
| 10 |
+
|
| 11 |
+
However, the recipe for fine-tuning in zero-shot quantization is mainly adopted from common knowledge distillation problems [@hinton2015distilling; @haroush2020knowledge; @cho2019efficacy] that consider neither quantization nor synthetic samples. As in knowledge distillation, the loss function of zero-shot quantization is *habitually* built as a combination of the cross-entropy (CE) against the hard label and the Kullback--Leibler (KL) divergence against the full-precision network's output.[^2] It works well in practice, but there have been few studies that introspect the appropriateness of this loss in the context of zero-shot quantization. Moreover, the distribution of synthetic samples can differ from that of the original data. In such a case, they can be considered a type of adversarial samples (also see [20](#fig:samples){reference-type="ref+label" reference="fig:samples"} for examples), and thus the quantized network suffers from a huge generalization gap.
|
| 12 |
+
|
| 13 |
+
To our knowledge, we for the first time perform in-depth analyses on the loss surface of the zero-shot quantization problem. Through the analyses, we find several key observations for better quantization. First, quantized models often have difficulty optimizing multiple loss terms, and the loss terms fail to cooperate --- in other words, the angle between the gradients of CE and KL is quite large in many cases. Second, KL usually has a much flatter loss surface than that of CE, having a better potential for generalizability.
|
| 14 |
+
|
| 15 |
+
To this end, we propose a method to address such problems of zero-shot quantization, called *AIT (All In the Teacher)*. While pursuing a flatter loss surface, AIT lets the quantized student model get closer to the full-precision teacher model. To be more specific, we exclude CE from the loss and apply our proposed gradient manipulation with KL only. In addition, AIT is designed to grow the gradients of KL in such a manner that a certain portion of weights is guaranteed to be updated in each layer. As a result, the quantized model approaches closer to the full-precision teacher, and our method takes over the state-of-the-art position for various datasets. Our contributions can be summarized as follows:
|
| 16 |
+
|
| 17 |
+
- We analyze the first- and second-order loss surfaces, i.e., the gradient and the Hessian, of the zero-shot quantization problem. To the best of our knowledge, we are the first to closely investigate the loss function in the zero-shot quantization problem.
|
| 18 |
+
|
| 19 |
+
- We identify that the gradients from CE and KL form a large angle from the beginning to the end of the fine-tuning. This implies that the quantized network is optimized under two conflicting objectives.
|
| 20 |
+
|
| 21 |
+
- We analyze the local curvature of the loss surface and observe that the two losses of our interest exhibit a great amount of curvature difference.
|
| 22 |
+
|
| 23 |
+
- We observe that the quantized student suffers from infrequent updates, where only a few layers are changing their integer weights and the remaining layers are stuck below rounding thresholds.
|
| 24 |
+
|
| 25 |
+
- Based on these findings, we propose AIT, which excludes the cross-entropy loss and manipulates the gradients using our proposed method such that the quantized student model can faithfully resemble the full-precision teacher model.
|
| 26 |
+
|
| 27 |
+
- We perform a thorough evaluation of AIT. The results show that AIT outperforms the existing algorithms by a great margin, achieving state-of-the-art performance on the zero-shot quantization problem.
|
| 28 |
+
|
| 29 |
+
# Method
|
| 30 |
+
|
| 31 |
+
In this section, we describe our *AIT (All In the Teacher)* method in detail. From the observations in [3](#sec:diagnosis){reference-type="ref+label" reference="sec:diagnosis"}, we first drop the CE term from the loss and then apply a novel gradient manipulation to bring the quantized model as close as possible to the full-precision teacher.
|
| 32 |
+
|
| 33 |
+
Motivated by the experiments from [3.2](#sec:generalization){reference-type="ref+label" reference="sec:generalization"}, we ran GDFQ [@gdfq] with a KL-only loss (i.e., $\delta=1$). However, as will be shown later in [\[tab:ablation\]](#tab:ablation){reference-type="ref+label" reference="tab:ablation"}, the performance severely degrades in all settings. We find an explanation in the experiments of [7](#fig:cossim){reference-type="ref+label" reference="fig:cossim"}. Even toward the end of the training, the direction of the gradient remains consistent, and training for more epochs did not solve the problem. This indicates that the model did not converge at the minima of the KL surface.
|
| 34 |
+
|
| 35 |
+
Another set of experiments shown in [\[fig:count_before,fig:count_after\]](#fig:count_before,fig:count_after){reference-type="ref+label" reference="fig:count_before,fig:count_after"} gives a closer look at the phenomenon. We count the average number of weight parameters that cross the rounding threshold (parameters whose quantized values have changed from the previous step). We make two observations. First, the portion of quantized values crossing the rounding threshold is extremely small. Even when training has not stabilized (epoch 60), only 0.0011% of weights are being updated each step. At a later epoch (350), the phenomenon becomes worse: only four values are updated in a whole epoch, which amounts to only 1.8e-7% of weight updates per step during that epoch. Second, the changes are extremely unbalanced: all the updates occur in just three layers.
|
| 36 |
+
|
| 37 |
+
We posit that this stems from the quantized training process, which constrains integer-value updates. During training, the quantized network internally stores its full-precision values. The parameters are quantized for the forward pass of backpropagation, and the gradients are applied to the internal full-precision values. As the gradient values become smaller after a few epochs of training, the changes in the parameters are usually not large enough to cross the threshold, and only a few layers continuously make changes, stopping the model from moving towards a lower point on the loss surface.
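The sketch below illustrates how such infrequent updates can be measured: after an optimizer step on the latent full-precision weights, we count how many quantized integer values changed in each layer. The uniform rounding quantizer shown here is a generic placeholder, not the paper's exact quantization scheme.

```python
import torch

def quantize(w, scale):
    """Generic uniform quantizer used only for illustration."""
    return torch.round(w / scale)

def count_integer_changes(weights_before, weights_after, scales):
    """Count, per layer, how many weights crossed a rounding threshold.

    weights_before / weights_after: lists of latent full-precision tensors
    (one per layer) taken before and after an optimizer step.
    scales: per-layer quantization step sizes.
    """
    changes = []
    for w0, w1, s in zip(weights_before, weights_after, scales):
        changed = (quantize(w0, s) != quantize(w1, s)).sum().item()
        changes.append(changed)
    return changes   # mostly zeros => updates concentrated in a few layers
```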
|
| 38 |
+
|
| 39 |
+
##
|
| 40 |
+
|
| 41 |
+
To address the problem of the KL-only method, we propose the following gradient scaling scheme. Overall, we attempt to dynamically manipulate the gradients of each layer such that a certain number of parameters are guaranteed to update their integer values. The vanilla update rule is $$\begin{align}
|
| 42 |
+
\theta_{l,k+1} &= \theta_{l,k} - \eta \cdot g_{l,k}.
|
| 43 |
+
\end{align}$$ With our scaling applied, the modified rule is as follows: for the parameters $\theta_{l,k}$, quantized parameters $\theta_{l,k}^q$ and the corresponding gradients $g_{l,k}$ from layer $l$, $$\begin{align}
|
| 44 |
+
\theta_{l,k+1} &= \theta_{l,k} - \eta \cdot g'_{l,k}, \\
|
| 45 |
+
g'_{l,k} &= \kappa_{l} \cdot g_{l,k}, \\
|
| 46 |
+
\kappa_{l} &= \text{arg} \min\limits_{\kappa_l} \lVert \Delta \theta^q_{l,k} - T \rVert, \label{eq:GI_objective} \\
|
| 47 |
+
\Delta \theta^q_{l,k} &= \sum \mathbb{I}(\theta^q_{l,k} \neq \theta^q_{l,k+1}), \\
|
| 48 |
+
T &= \rho \cdot dim(\theta_l),
|
| 49 |
+
\end{align}$$ where $\rho \in [0,1]$ is a predetermined target proportion of parameters that should cross the rounding threshold, $\mathbb{I}(\cdot)$ is the indicator function, and $dim(\theta_l)$ is the number of elements in $\theta_l$.
|
| 50 |
+
|
| 51 |
+
To quickly find an approximate solution, we applied a simple two-step heuristic. Firstly, starting from 1.0, $\kappa_l$ is doubled until $\Delta \theta^q_{l,k} > T$. Then, to satisfy Eq. [\[eq:GI_objective\]](#eq:GI_objective){reference-type="ref" reference="eq:GI_objective"}, $\kappa_l$ is adjusted between $\nicefrac{\kappa}{2}$ and $\kappa$ by binary search. For the sake of computation efficiency, the total number of search steps is limited to five. In addition, to assure early phase stability of the training, we added a warm-up phase for the method. In the warm-up phase, the maximum of $\kappa$ is limited to 128 for more accurate solutions. When the generator requires a separate warm-up, the warm-up phase starts after the generator warm-up ends. Similar to learning rate exponential decay scheduling, we apply the exponential decay to $\rho$. We discuss the sensitivity to this in [5.4](#sec:sensitivity){reference-type="ref+label" reference="sec:sensitivity"}.
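A minimal sketch of the two-step search, under a generic uniform rounding quantizer assumed only for illustration: $\kappa_l$ is doubled until the number of threshold crossings exceeds the target $T$, then refined by a short binary search between $\kappa/2$ and $\kappa$. The step limits follow the text; the quantizer, the `max_kappa` cap, and the function interface are assumptions.

```python
import torch

def crossings(w, g, lr, kappa, scale):
    """Number of weights whose integer value changes under a scaled update."""
    q_before = torch.round(w / scale)
    q_after = torch.round((w - lr * kappa * g) / scale)
    return (q_before != q_after).sum().item()

def find_kappa(w, g, lr, scale, rho, max_binary_steps=5, max_kappa=2 ** 16):
    """Doubling then binary search so that roughly rho * numel(w) weights cross."""
    target = rho * w.numel()
    kappa = 1.0
    while kappa < max_kappa and crossings(w, g, lr, kappa, scale) <= target:
        kappa *= 2.0
    lo, hi = kappa / 2.0, kappa
    for _ in range(max_binary_steps):
        mid = 0.5 * (lo + hi)
        if crossings(w, g, lr, mid, scale) < target:
            lo = mid
        else:
            hi = mid
    return hi
```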
|
2204.04303/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2022-01-16T03:06:13.257Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36" etag="cipsPzLEqzaTjLyC_tNw" version="16.2.7" type="dropbox"><diagram id="yF6ybQoRFlEVae_9733H" name="Page-1">7Zrdc5s4EMD/Gs/cPeARIDA8Onab3EzTS5ubXvsog4w1wcgn5K/+9SeBAPFhx04c2/0gD4ZdsVpWv12sdXr2aL65ZWgxu6chjnsWCDc9e9yzLNP0bPEhJdtc4g6cXBAxEqpBleCRfMdKCJR0SUKc1gZySmNOFnVhQJMEB7wmQ4zRdX3YlMb1WRcoUjOCSvAYoBi3hv1LQj5TUgi04XeYRDM1tecoxRyVg3NBOkMhXWtz2e969ohRyvOz+WaEYxm8Ii65ofc7tKVjDCf8kBvw7f0ntHE+3n8cpsby77/w6tYxfGVmheKleuKe5cbC4E1IVtJrvlWhcP9bSldvON5wA8UkSnr2UIyI8ZRXWnEWqc/MyqQQPDAaLsUCWcAsdMLXSXO8kGUT16Un9uUfwqUheUeXL3cCnG0qlJ/Rd8r2e9Z6gFO7ul3s8bSQpQuUdM46pQk31opPaSWhbI7i9sQPmKU0kRowEkhpk+W2D1mlumdWzSGL0WUSYsmiKdTrGeH4cYECqV2L0iFkMz6PlRpNUhovOR6yQJWETFpdQfnQnNGnMist9bT6CBXucSBSBDMhWGHGiUjuoVLMSRhKB2+ocIVwWaNsIA2ROB7RWCy+9N0OEfamQTmlpnEDD0+m5fPq2VhklpgSbzSRys5bTOeYs60YorSWq0qHKpWwKCVrrfAUsplWc4r7kKp1UWm6KgfiRFWEY6qDdf7qYO3jfMKuul48UC45O49rh2T7RL7uWkb215Txl/GuJ/id0kemNPSvLqXh+VPa/mFe+D9FAt9Q+vQ7g0+UwQPr6jLYaWXwF4LXYhWbq4uTcCi3QDLYMUpTEtQXtLky8loLI8gOIccbwr8KGehD21fX3+RNfQA9dT2W8SoGj7fahfhmScSTy6XOZYmIgjLneoXgm66tjGVXW/2qaW7nGqd0yQIViSj8sNi+vzN9anwYMddY2OiroXamHLEI8wP2SDgstoo7kNGQ6CKikDEcI05W9Q1mFyZqhgdKEl4RadumCByoDlgHFFid6mKCPCzKpr5dbEzj2HXw7SbQedxahjKmy6C8AnP3cpjb/qCBufk6zKGOuQH6wPJPynoDzUOgti7JsOP4fV8/2gg7Le3RBLuNzHDOTPCgRfCD8HyG0rNA7OgI78e3QtU5ph6fEkd4YNF2LontYND4MgBAHeMX1trBwO47DVihV4rOxavX4vUzXjPxnfDNaTU1Vityn6O1xmqF7rG06nB2Mgev6v0PTb9v6dA59ff0ALwMQuiB/YbeGj//18Tv2ZIHD8TUuSpMXdfpe3DXK/7FlA4g7Iv9VnmY52J23xaio5kiLCOhEW97lqpVK3bxSz41vGM7A0GOr9zzs2jyhyOeW7gMqs8/MxMg6yBM0ZzE23z0DMcrLPfXmj7NckNqTbjY6Ip8Xu2HgkwXY9kWMVK5H0+itr7Z+ii39oWOJGGGvVSCYsZMwxlK0qmwVVhNcK5dUxbWZyxvnKDgKcpaF0YjLKKI5fGwoK9OHJBHphnuT2bZQVGarAtUG3jMrxrg+QZKo4ExdeRfuzzlR2djIzvanZGdteXlPz7YsKPP4b5Rn2Nf2evIrYMzBFRJAqo8scDuVGhjYu3ApOoVvhkv+/tfr8DnBLw0O9uX56XdFrsIL/ZzvADVn/0ly0yzndqJzYnaqeKy+u+K/I1f/Y+K/e5/</diagram></mxfile>
|
2204.04303/main_diagram/main_diagram.pdf
ADDED
|
Binary file (24 kB). View file
|
|
|
2204.04303/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,95 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
User sessions are ubiquitous in online e-commerce stores. An e-commerce session contains customer interactions with the platform in a continuous period. Within one session, the customer can issue multiple queries and take various actions on the retrieved products for these queries, such as clicking, adding to cart, and purchasing. Sessions are important in many e-commerce applications, *e.g.*, product recommendation [\(Wu et al.,](#page-9-0) [2019a\)](#page-9-0), query recommendation [\(Cucerzan and White,](#page-8-0) [2007\)](#page-8-0), and query understanding [\(Zhang et al.,](#page-9-1) [2020\)](#page-9-1).
|
| 4 |
+
|
| 5 |
+
This paper considers sessions as *semi-structured* data, as illustrated in Figure [1.](#page-0-0) At the higher level, sessions are heterogeneous graphs that contain interactions between items. At the lower level, each
|
| 6 |
+
|
| 7 |
+
<span id="page-0-0"></span>
|
| 8 |
+
|
| 9 |
+
Figure 1: Illustration of a customer session. A session consists of two types of items: queries and products. The customer searched for 3 keywords sequentially and interacted with the products returned by the search engine.
|
| 10 |
+
|
| 11 |
+
graph node has unstructured text descriptions: we can describe queries by search keywords and products by titles, attributes, customer reviews, and other descriptors. Our goal is to simultaneously encode both the graph and text aspects of the session data to understand customer preferences and intents in a session context.
|
| 12 |
+
|
| 13 |
+
Pretraining on semi-structured session data remains an open problem. First, existing works on learning from session data usually treat a session as a sequence or a graph [\(Xu et al.,](#page-9-2) [2019;](#page-9-2) [You](#page-9-3) [et al.,](#page-9-3) [2019;](#page-9-3) [Qiu et al.,](#page-8-1) [2020b\)](#page-8-1). While they can model inter-item relations, they do not capture the rich intra-item semantics when text descriptions are available. Furthermore, these models are usually large neural networks that require massive labeled data to train from scratch. Another line of research utilizes large-scale pretrained language models [\(Lan et al.,](#page-8-2) [2019;](#page-8-2) [Liu et al.,](#page-8-3) [2019;](#page-8-3) [Clark et al.,](#page-8-4) [2020\)](#page-8-4) as text encoders for session items. However, they fail to model the relational graph structure. Several works attempt to improve language models with a graph-structured knowledge base, such as in [\(Liu et al.,](#page-8-5) [2020;](#page-8-5) [Yao et al.,](#page-9-4) [2019;](#page-9-4) [Shen et al.,](#page-8-6) [2020\)](#page-8-6). While adjusting the semantics of entities according to the knowledge graph, they fail to encode general graph structures in sessions.
|
| 14 |
+
|
| 15 |
+
We propose CERES (Graph Conditioned Encoder Representations for Session Data), a pretraining model for semi-structured e-commerce session data, which can serve as a generic session encoder that simultaneously captures both intraitem semantics and inter-item relations. Beyond training a potent language model for intra-item semantics, our model also conditions the language modeling task on graph-level session information, thus encouraging the pretrained model to learn how to utilize inter-item signals. Our model architecture tightly integrates two key components: (1) an *item Transformer encoder*, which captures text semantics of session items; and (2) a *graph conditioned Transformer*, which aggregates and propagates inter-item relations for cross-item prediction. As a result, CERES models the higher-level interactions between items.
|
| 16 |
+
|
| 17 |
+
We have pretrained CERES using 468,199,822 sessions and performed experiments on three session-based tasks: product search, query search, and entity linking. By comparing with publicly available state-of-the-art language models and domain-specific language models trained on alternative representations of session data, we show that CERES outperforms strong baselines on various session-based tasks by large margins. Experiments show that CERES can effectively utilize session-level information for downstream tasks, better capture text semantics for session items, and perform well even with very scarce training examples.
|
| 18 |
+
|
| 19 |
+
We summarize our contributions as follows: 1) We propose CERES, a pretrained model for semi-structured e-commerce session data. CERES can effectively encode both e-commerce items and sessions and generically support various session-based downstream tasks. 2) We propose a new graph-conditioned transformer model for pretraining on general relational structures on text data. 3) We conducted extensive experiments on a large-scale e-commerce benchmark for three session-related tasks. The results show the superiority of CERES over strong baselines, including mainstream pretrained language models and state-of-the-art deep session recommendation models.
|
| 20 |
+
|
| 21 |
+
A *customer session* is the search log before a final purchase action. It consists of customer-query-product interactions: a customer submits search queries and obtains a list of products. The customer may take specific actions, including *view* and *purchase*, on the retrieved products. Hence, a session contains two types of items: *queries* and *products*, and various relations between them established by customer actions.
|
| 22 |
+
|
| 23 |
+
We define each session as a *relational graph* $\mathcal{G} = (\mathcal{V}, \mathcal{E})$ that contains all queries and products in a session and their relations. The vertex set $\mathcal{V} = (Q, P)$ is partitioned into an ordered query set $Q$ and an unordered product set $P$. The queries $Q = (q_1, \ldots, q_n)$ are indexed by the order of the customer's searches. The edge set $\mathcal{E}$ contains two types of edges: $\{(q_i, q_j),\, i < j\}$ are one-directional edges that connect each query to its previous queries; and $\{(q_i, p_j, a_{ij})\}$ are bidirectional edges that connect the $i$th query and the $j$th product if the customer took action $a_{ij}$ on product $p_j$ retrieved by query $q_i$.
|
| 24 |
+
|
| 25 |
+
The queries and products are represented by textual descriptions. Specifically, each query is represented by customer-generated search keywords. Each product is represented with a table of textual attributes. Each product is guaranteed to have a product title and description. In this paper, we refer to the concatenation of the title and description as the "product sequence". A product may have additional attributes, such as product type, color, brand, and manufacturer, depending on its specific category.
|
| 26 |
+
|
| 27 |
+
# Method
|
| 28 |
+
|
| 29 |
+
In this section we present the details of CERES. We first describe our designed session pretraining task in Section [3.1,](#page-1-0) and then describe the model architecture of CERES in Section [3.2.](#page-2-0)
|
| 30 |
+
|
| 31 |
+
Suppose $\mathcal{G} = (\mathcal{V}, \mathcal{E})$ is a graph on $T$ text items as vertices, $v_1, \ldots, v_T$, each of which is a sequence of text tokens: $v_i = [v_{i1}, \ldots, v_{iT_i}]$, $i = 1, \ldots, T$. We propose *graph-conditioned masked language modeling* (GMLM), where masked tokens are predicted with both intra-item context and inter-item context:
|
| 32 |
+
|
| 33 |
+
<span id="page-1-1"></span>
|
| 34 |
+
$$p_{\text{GMLM}}(v_{\text{masked}}) = \prod_{j \text{th masked}} \mathbb{P}(v_{ij}|\mathcal{G}, \{v_{ik}\}_{k \text{th unmasked}}),$$
|
| 35 |
+
(1)
|
| 36 |
+
|
| 37 |
+
which encourages the model to efficiently leverage graph-level inter-item semantics in order to predict masked tokens. To optimize [\(1\)](#page-1-1), we need to learn token-level embeddings that are infused with session-level information, which we introduce in Section [3.2.2.](#page-2-1) With certain tokens in the input sequences of items masked (detailed below), we optimize the predictions of the masked tokens with a cross-entropy loss. The pretraining framework is illustrated in Figure [3.](#page-2-2)
|
| 38 |
+
|
| 39 |
+
<span id="page-2-3"></span>
|
| 40 |
+
|
| 41 |
+
Figure 2: Model illustration. CERES first produces intra-item embeddings in the Item Transformer Encoder. Then, the Graph-Conditioned Transformer aggregates and propagates session-level information to obtain interitem embeddings.
|
| 42 |
+
|
| 43 |
+
<span id="page-2-2"></span>
|
| 44 |
+
|
| 45 |
+
Figure 3: Pretraining framework illustration. CERES learns both inter-item and intra-item embeddings for item tokens for Masked LM and Graph-Conditioned Masked LM. In practice, we find it beneficial to optimize both.
|
| 46 |
+
|
| 47 |
+
**Token Masking Strategy.** To mask tokens in long sequences, including product titles and descriptions, we follow (Devlin et al., 2018) and choose 15% of the tokens for masking. For short sequences, including queries and product attributes, there is a 50% probability that a short sequence will be masked, and for those sequences 50% of their tokens are randomly selected for masking.
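A small sketch of the masking rule described above: long sequences (titles and descriptions) mask 15% of tokens uniformly, while short sequences (queries and attributes) are selected for masking with probability 0.5 and, when selected, have 50% of their tokens masked. The [MASK] id and the `-100` ignore label are conventional assumptions, not values specified by the paper.

```python
import random

def mask_tokens(token_ids, is_short, mask_id=103):
    """Return (masked_ids, labels) following the described masking rule (sketch).

    is_short: True for queries / product attributes, False for titles+descriptions.
    labels hold the original id at masked positions and -100 elsewhere
    (the usual ignore label for cross-entropy).
    """
    if is_short:
        rate = 0.5 if random.random() < 0.5 else 0.0   # mask this sequence or skip it
    else:
        rate = 0.15
    masked, labels = [], []
    for tok in token_ids:
        if rate > 0 and random.random() < rate:
            masked.append(mask_id)
            labels.append(tok)
        else:
            masked.append(tok)
            labels.append(-100)
    return masked, labels
```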
|
| 48 |
+
|
| 49 |
+
To model the probability in (1), we design two key components in the CERES model: 1) a *Transformer-based item encoder*, which produces token-level intra-item embeddings that contain context information within a single item; and 2) a *graph-conditioned Transformer for session encoding*, which produces session-level embeddings that encodes inter-item relations, and propagates the session information back to the token-level. We illustrate our model architecture in Figure 2.
|
| 50 |
+
|
| 51 |
+
The session item encoder aims to encode intra-item textual information for each item in a session. We design the item encoder based on Transformers, which allows CERES to leverage the expressive power of the self-attention mechanism for modeling domain-specific language in e-commerce sessions. Given an item i, the transformer-based item
|
| 52 |
+
|
| 53 |
+
encoder computes its token embeddings as follows:
|
| 54 |
+
|
| 55 |
+
$$[\mathbf{v}_{i1}, \dots, \mathbf{v}_{iT_i}] = \text{Transformer}_{\text{item}}([v_{i1}, \dots, v_{iT_i}])$$
|
| 56 |
+
|
| 57 |
+
$$\mathbf{v}_i = \text{Pool}([\mathbf{v}_{i1}, \dots, \mathbf{v}_{iT_i}]),$$
|
| 58 |
+
(2)
|
| 59 |
+
|
| 60 |
+
where $\mathbf{v}_{ij}$ is the embedding of the *j*th token in the *i*th item, and $\mathbf{v}_i$ is the pooled embedding of the *i*th item. At this stage, $\{\mathbf{v}_{ij}\}, \{\mathbf{v}_i\}$ are embeddings that only encode the intra-item information.
|
| 61 |
+
|
| 62 |
+
**Details of Item Encoding.** We detail the encoding method for the two types of items, queries and products, in the following paragraphs.
|
| 63 |
+
|
| 64 |
+
Each query $q_i = [q_{i1}, \ldots, q_{iT_i}]$ is a sequence of tokens generated by customers as search keywords. We add a special token, [SEARCH], at the beginning of each query to indicate that the sequence represents a customer's search keywords. The item encoder then produces the token-level embeddings of the query, and the pooled query embedding is taken to be the embedding of the special token [SEARCH].
|
| 65 |
+
|
| 66 |
+
Each product $p_i$ is a table of $K$ attributes: $p_i^1,\ldots,p_i^K$, where $p_i^1$ is always the product sequence, which is the concatenation of the product title and bullet description. Each attribute $p_i^k = [p_{i1}^k, p_{i2}^k, \ldots]$ starts with a special token [ATTRTYPE], where ATTRTYPE is replaced with the language descriptor of the attribute. The Transformer is then used to compute token and sentence embeddings for all attributes. The product embedding is obtained by average pooling of all attributes' sentence embeddings.
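A simplified sketch of the item-encoding interface: queries are prefixed with [SEARCH] and pooled at that token, while products encode each attribute behind its own attribute-type token and average the per-attribute sentence embeddings. The `encoder` callable (returning one embedding per token) and pooling each attribute at its leading special token are assumptions made only to illustrate the pooling logic.

```python
import torch

def encode_query(encoder, query_tokens):
    """Pooled query embedding = embedding of the leading [SEARCH] token."""
    tokens = ["[SEARCH]"] + query_tokens
    token_emb = encoder(tokens)                 # (len(tokens), d), assumed interface
    return token_emb, token_emb[0]

def encode_product(encoder, attributes):
    """attributes: dict {attr_type: token list}; first entry is the product sequence."""
    token_embs, sent_embs = [], []
    for attr_type, attr_tokens in attributes.items():
        tokens = [f"[{attr_type.upper()}]"] + attr_tokens
        emb = encoder(tokens)                   # (len(tokens), d)
        token_embs.append(emb)
        sent_embs.append(emb[0])                # pool each attribute at its special token
    product_emb = torch.stack(sent_embs).mean(dim=0)   # average over all attributes
    return token_embs, product_emb
```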
|
| 67 |
+
|
| 68 |
+
The Graph-Conditioned Session Transformer aims to infuse intra-item and inter-item information to produce item and token embeddings. For this purpose, we first design a *position-aware* graph neural network (PGNN) to capture the
|
| 69 |
+
|
| 70 |
+
<span id="page-3-0"></span>
|
| 71 |
+
|
| 72 |
+
Figure 4: Illustration of cross-attention over latent conditioning tokens. The item token embeddings perform self-attention as well as cross-attention over latent conditioning tokens, thus incorporating session-level information. Latent conditioning tokens perform self-attention to update their embeddings, but do not attend to item tokens to preserve session-level information.
|
| 73 |
+
|
| 74 |
+
inter-item dependencies in a session graph to produce item embeddings. Then conditioned on the PGNN-learned item embedding, we propose a cross-attention Transformer, which produces infused item and token embeddings for the Graph-Conditioned Masked Language Modeling task.
|
| 75 |
+
|
| 76 |
+
**Position-Aware Graph Neural Network.** We use a GNN to capture inter-item relations. This will allow CERES to obtain item embeddings that encode the information from other locally correlated items in the session. Let $[\mathbf{v}_1, \dots, \mathbf{v}_N]$ denote the item embeddings produced by the intra-item transformer encoder. We treat them as hidden states of nodes in the session graph $\mathcal{G}$ and feed them to the GNN model, obtaining session-level item embeddings $[\mathbf{v}_1^h, \dots, \mathbf{v}_N^h]$ .
|
| 77 |
+
|
| 78 |
+
The items in a session graph are sequential according to the order in which the customer generated them. To let the GNN model learn the positional information of items, we train an *item positional embedding*, analogous to the positional embeddings of tokens. Before the item embeddings are fed to the GNN, the item positional embeddings are added to the pooled item embeddings according to their positions in the session's item sequence. In this way, the item embeddings $\{\mathbf{v}^i\}_{i\in\mathcal{V}}$ encode their positional information as well.
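A short sketch of the position-aware step: learned item positional embeddings are added to the pooled item embeddings before message passing. The GNN itself is abstracted behind a `(x, edge_index)` interface, and `max_items` is an illustrative assumption.

```python
import torch
import torch.nn as nn

class PositionAwareItemEncoder(nn.Module):
    """Add learned item positional embeddings before the GNN (sketch)."""

    def __init__(self, gnn, dim, max_items=64):
        super().__init__()
        self.gnn = gnn                                  # any GNN taking (x, edge_index)
        self.item_pos_emb = nn.Embedding(max_items, dim)

    def forward(self, item_embs, positions, edge_index):
        # item_embs: (N, d) pooled item embeddings; positions: (N,) item order indices.
        x = item_embs + self.item_pos_emb(positions)
        return self.gnn(x, edge_index)                  # (N, d) session-level embeddings
```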
|
| 79 |
+
|
| 80 |
+
**Cross-Attention Transformer.** Conditioned on PGNN, we design a *cross-attention transformer* which propagates session-level information in PGNN-produced item embeddings to all tokens to produce token embeddings that are infused with both intra-item and inter-item information.
|
| 81 |
+
|
| 82 |
+
In order to propagate item embeddings to tokens, we treat item embeddings as latent tokens that act as a "part" of the item texts. For each item $i$, we first expand $\mathbf{v}_i^h$ into K latent conditioning tokens by using a multilayer perceptron module to map $\mathbf{v}_i^h$ to K embedding vectors $[\mathbf{v}_{i1}^h,\ldots,\mathbf{v}_{iK}^h]$ of the same size. For each item $i$, we then compute its latent
|
| 83 |
+
|
| 84 |
+
conditioning tokens by averaging the latent tokens in its neighborhood. Suppose $N(i)$ is the set of all items neighboring item $i$ in the session graph, including $i$ itself. For each position $k$, we take the average of the $k$th latent token embeddings over $N(i)$ as the $k$th latent conditioning token, $\overline{\mathbf{v}}_{ik}^h$, for the $i$th item. Then, we concatenate the latent conditioning token embeddings and the item token embeddings obtained by the session item encoder:
|
| 85 |
+
|
| 86 |
+
$$[\overline{\mathbf{v}}_{i1}^h, \dots, \overline{\mathbf{v}}_{iK}^h, \mathbf{v}_{i1}, \dots, \mathbf{v}_{iN_i}].$$
|
| 87 |
+
(3)
|
| 88 |
+
|
| 89 |
+
Finally, we compute the token-level embeddings with session information by feeding the concatenated sequence to a shallow *cross-attention* Transformer. The cross-attention Transformer has the same structure as normal Transformers. The difference is that we prohibit the latent conditioning tokens from attending to the original item tokens, to prevent an influx of intra-item information from potentially diluting the session-level information stored in the latent conditioning tokens. An illustration of the cross-attention Transformer is provided in Figure 4.
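To make the conditioning concrete, the sketch below builds the K latent conditioning tokens by averaging MLP-expanded neighbor embeddings and constructs the attention mask that lets item tokens attend to everything while latent tokens attend only to each other. The shapes, the MLP output size, and the boolean mask convention (True = attention blocked, as used by PyTorch attention masks) are assumptions.

```python
import torch

def latent_conditioning_tokens(v_h, neighbors, expand_mlp, K):
    """v_h: (N, d) PGNN item embeddings; neighbors[i]: indices of N(i), incl. i.

    expand_mlp is assumed to map d -> K * d so each item yields K latent tokens.
    """
    expanded = expand_mlp(v_h).view(v_h.size(0), K, -1)                  # (N, K, d)
    return torch.stack([expanded[idx].mean(dim=0) for idx in neighbors])  # (N, K, d)

def cross_attention_mask(K, L):
    """Boolean mask (True = no attention) for K latent tokens followed by L item tokens."""
    S = K + L
    mask = torch.zeros(S, S, dtype=torch.bool)
    mask[:K, K:] = True          # latent tokens must not attend to item tokens
    return mask                   # item tokens may attend to all positions
```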
|
| 90 |
+
|
| 91 |
+
We use the embeddings produced by this cross-attention Transformer as the final embeddings for modeling the token probabilities in Equation (1) and learning the masked language modeling tasks. During training, the model is encouraged to learn good token embeddings with the Item Transformer Encoder, as better embeddings $\{\mathbf{v}_{ij}\}_{j=1}^{N_i}$ are necessary to improve the quality of $\{\mathbf{v}_{ij}^c\}_{j=1}^{N_i}$. The Graph-Conditioned Transformer is encouraged to produce high-quality session-level embeddings for the GMLM task. Hence, CERES is encouraged to produce high-quality embeddings that unify both intra-item and inter-item information.
|
| 92 |
+
|
| 93 |
+
When finetuning CERES for downstream tasks, we first obtain session-level item embeddings. The session embedding is computed as the average of all item embeddings. To obtain embedding for a single item without session context, such as for retrieved items in recommendation tasks, only the Item Transformer Encoder is used.
|
| 94 |
+
|
| 95 |
+
To measure the relevance of an item to a given session, we first transform the obtained embeddings by separate linear maps. Denote the transformed session embeddings as $\mathbf{s}$ and item embeddings as $\mathbf{y}$ . The similarity between them is computed by cosine similarity $d_{\mathbf{cos}}(\mathbf{s},\mathbf{y})$ . To finetune the model, we optimize a hinge loss on the cosine similarity between sessions and items.
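A minimal sketch of this fine-tuning objective, assuming a margin-based hinge over cosine similarities between a session and a positive versus a negative item; the margin value and the choice of a single explicit negative are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def session_item_hinge_loss(session_emb, pos_item_emb, neg_item_emb, margin=0.2):
    """Push cos(session, positive) above cos(session, negative) by at least `margin`."""
    pos = F.cosine_similarity(session_emb, pos_item_emb, dim=-1)
    neg = F.cosine_similarity(session_emb, neg_item_emb, dim=-1)
    return torch.clamp(margin - pos + neg, min=0).mean()
```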
|
2204.10670/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="Electron" modified="2021-11-16T10:47:52.877Z" agent="5.0 (Macintosh; Intel Mac OS X 11_4_0) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/14.9.6 Chrome/89.0.4389.128 Electron/12.0.16 Safari/537.36" etag="zU9ZHiTTgZ9BkdX_Mu2A" version="14.9.6" type="device"><diagram id="rTMrK8GJnSMlXD2jMbDW" name="Page-1">7Vxbc5s4FP41nmkf4gFduDzm4nZ3ZnemM+m2SV86xMi2WgwukNjpr1/JCJCEiG/Yoa3tzBhdEef7JJ1zdMgAXs9X79NgMfs3CUk0AFa4GsCbAQC2bTnsh+c8Fzke9ouMaUpDUanOuKU/ici0RO4jDUmmVMyTJMrpQs0cJ3FMxrmSF6RpslSrTZJIvesimIo7WnXG7TiISKPaZxrms/K5HL8u+IvQ6Uzc2gNuUTAPqspFRjYLwmQp3QuOBvA6TZK8uJqvrknEhVfKpRjQu5bSamApifNtGny5m4z+m3379Dd6mn+LR6Ppwnq+gLjo5imIHsUTi9Hmz6UI0uQxDgnvxRrAq+WM5uR2EYx56ZKBzvJm+TxiKZtdhkE2W9fliQmNouskSlKWjpOYtbhqjlo8yBNJc7KSssRTvCfJnOTpM6tSlgIohC84ZbsCu2WNkI9EvzMJHFiCHAhWTKvOa8GxCyG7XeTobJYjicNLzkiWGkdBltHxxxmNVfGxOu9oVKfY2O644Ie4TN7LZTcrgUqRehYpFbFWkedBOiX5C0+FhbhIOCUvIiMJvmwjy73MS0kU5PRJnVomLMQdPiSUDbkCnjHcUoBv4Jklj+mYiGbyXNB6sl2sUgjZ/tDFameFeBqdMQyDZ6naglfIdhg1tq2XB+cBf7cGEL7YgF0Ug675XWF3AOXdrii/H8nVqbI/yZHXK5LbLjDTZXeSezYcelDlue9ty/POeOL9Jjzx+8UT3LJK7M4TB3pDT6OJc3Ka+L8JTdx+0cS3O1pOgKXtMshyT00SZHVMElumiMSYrUhCVjS/K0vYtdQHS9Vd8ETZQ8YklJfj26QTF9C8IA8HbctKu1esZLucyiWI9128fG0ZhNtrcp2xsjRhZFYyZND0zQBf/yR58LZIN7jKLJ5cZWYQ0WnMqcu4QJjddMXtIsrs0UtRMKdhyJsbDTFV8Z8kcS4sahutuZcm3ysztmCqbKINAAwD4k3GVV2pxFp/urHggOVjFTRpKdnAQICGuJ1uBxlxCLTgyFDM6ZxkW6OoSk9Mc4M1fEy0O8DJxhpOwPeHTZgcA0zlXtE9SPAMkgqS6zVBMkym06KEziipKPluEyXw2iht4f3bSp3aQxFq09Nrpeu4eroDeqUR2QBv1mO2VooY3V5dKTJ5RJ0oF9NLIZnz4zEpCy6y9cS7ZBUAWKzqQnY1LX7LdYStFQ+TgXv1IUiDOV2xWQ+shygZfx+4N+X6UtyRPUFx07KLP2bZcbxqxy695V7TW37aVcfkOORoTQZ4xOB8Y79lAHLVOZ8x3fmr/etoz5h4IWqwp2Pt2fZc1aRHEJg2fM8E6rEOQJDJyyeDClRQwRlUbaK6+kTtBawmr5wMK1RhhWdYVViR5/cOVGzyosmgIhVUdAZVO4D2ewiqyQl1iGv0IL18H4ugFZmNblBkUPoTeBsHP5a3txfLJL/7aD/l9/cXPfOCWn53Oj+wdJcq6wxjq/oAtd8jq//Y5EvrkI67HedUdKwoeC8zsGs6uk06moXUsyNFH6kUwtYBNqivHRdB/+Q2KDb5Cv8UEnrbkrBn8Q++5gjBhzhC3MaZpbb/HpuBJj/on8JAf1sG9uzIHPtw6NoqbzDan4S4BythV57eX5CHPYtitLG2yyJ0ALf00L/X4JbJn1f4XbNFEKt+28+y36DhpFXra/zcEHkcZIsiAHxCV9yG1KzJKaf0wGxZalbkBPOvqCflFx+jdXnJv125DDR+MExN1qVtctpWmd3blyYH3yaUwRnlVn8f1P19PcHZ5PHbhLPdK5xb8NTwf4f59+g4e9DqIcqOyQW4CWXUK5R7NZv109ueoGxywrShfPeVYSyOVdlVTJYc7V0B3+lloR3wrQLTQpoyytCEO5OZlsQl2wmCekRiC4LotAC2BjsBq46ksf7pdTRNC8ocjU6CCW0L6fr0VrAd7WUwx2T0FwBdtSDFeqWLrE2ex1kx15+OMAC2PyxfCKpiOpuzB57yRMTp2ubtJlhcsXnLkh1t3s3RTYJiPbF5AfBBgx/7vrzHAyAanXEnIfKrj6d2fWQL2DHFOklRSjxGaSRtpb/d5NdUXAsPy51LXoPrV89kiqHmG2ndLQGbXBOGILRxISUegJZOH95YfKsF7PaWdPWWX3LhWeuQtUkwp9Fz0YZ1FMwX60IIueRnJHoifO9slKid1HFvNuJxb0pZMUpeGCfpPIjU4qUQJy9HxTjXhRHJ2e59wR51TOOpsT2n5wVlJIpFe0u697owT4M4m7BWZXumIJQVlkkaqt3LzR+C8ffpmqIXmlABP50QwgTc3VFeY0m0Ic0WUSDESuOISjeeREmQywN6MUqQz78ko1xrZI/O5iG+5n+FDlWUj+YPJAz5Y+yq8Qola8PU1edjQ8HqRBPSAsPsUu2RTRDA5pvTnIbH02GP/P7nnoEJXWzDm9/Pcn7V/dpr6nO6orb1fs2M442ddbdDvxT4sZUh/OmVLF6TZdvms+pitbC0F82BYbU4mrlrRMlk7ba6K/qL0g0eeTcdBXDXUWS9Qen8ao9+YGOd8gUslqz/aVCxPtb/egmO/gc=</diagram></mxfile>
|
2204.10670/main_diagram/main_diagram.pdf
ADDED
|
Binary file (26.8 kB). View file
|
|
|
2207.09666/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2022-03-06T08:15:42.318Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36" etag="BcHnTiYQM-e-uc-V5oLI" version="16.6.6" type="google"><diagram id="sqwHExWl5JOkvlZBc3hN" name="Page-1">7L1Xm6Pm0jX8a3y45yKHQ4HICEQWnDwXOQeR4dd/0NNjT2jbY0/YfvfnbrtHIEBw16qqteou0C8wXa9c73fZrY3i6hcIiNZf4OsvEETA6PH3XLG9X4FCxPsVaZ9H71eBv60w8j1+XQm8rp3yKB4+2XBs22rMu09Xhm3TxOH4yTq/79vl082Stvr0Uzs/ff1E4LcVRuhX8RebOXk0Zq+XhX60NR/nafbhk0Hg9Z3a/7Dx6yGGzI/a5aPPgplfYLpv2/H9q3ql4+ocuw/j8v5A7O+8++uJ9XEzfs0Ot+k/SK6T6ny/LQ+Nz5VrW/2HfH+U2a+m1wt+Pdlx+zACx3l358tw6quN6v2wjI/Po5YsH2Oj88PzveWw/bEuG+vqWAKPl307NVEcvS79eu3AuXAcIMxeF5K2GVm/zqsTH3Rb5+Hx6YbfDMc/N+N1g1dYgPC599i35a+2gM4t8qqi26rtj+UoTvypGl9OYPTHvG2OlSTw634ftvsFgqmX3+OdLwfydWznuB/j9aNVrwPLxW0dj/12bPL6Lg4A7wD8/V6vQCfId+jrmuU34HxAWfYRZmCCeEdgr5B9hWv660f8ZtHjxatR3zbwXXK7cUlZxRhRPMO5Ffi/x39A4CtM/Kmx/sSyv9ryXEgrfxi+1a7E6+JHxmFZ8Ph9y2z0y88XQAB/Fwhp70d5/NvRm7Y5PpTy+/ADrMDvBAKI/AQBCAG/Q79AAIhDX0IABIkPm/4ABMD/IuC/gwCI/DICvG1/HP5B1ifwv2J84C8YH/hD4x+GMF4/YWir8xT/WqgHvhoRr9b8FAzAO+IzOLxu9jtY+A7GRxHsHY5+Yn8Uwr60/wdG9In9f5T5yX+j/0/yfRD+1PdhHHwHg1+aH4XegdhbCAB/Rc93RwEG/ouCn4MCFCTffRoD3swB5Fs0EPhR5keJf83/c8wPg+CfG58g33B/7DtkAAvLdprCMa9QxvSqq6qhkP/BgO9s/I+H+DAGAzAXln7LTARw/r5ppt9GHvsaoHyX7Ix/5pgwibyRnH+l7J9Y50PR4rtb53sH5s+Mw7Is9OIpn/nW+5+/bbQXdvY7TvVJCPgkNnxMHH+GyUEU+9Tgh87G3xBkEPylxQnke+ixt00OfGHiOErjD2y57cesTdvGr5jf1lKfUvTftpHbtns1TxGP4/ZqIH8a20+BEa/5+Dh3P67r/ZL7erDz9XX9eGH7sNAc1/vRTuei++F458Jvu70sbR8Z9w94/K8Qi5vocpbojtVB1YalmeXN+9VsXv164s2HquCbeMVefr4JUUM79WH8BxZ7xcfo92k8/sF2r4g5rfmH+Ozjyh/z+dMK43fP+m+Ekr+CsxdITXX3YftX9/4W6H0MvF9h+GfQ+xh4H+HwD6D3PTjHR9B8vfDfUAn8BVR+GUV/PAw/iM7vh8OXXY/x8LePNujavBmHj458P1d8FH7xT7kQTKIfY/qL7SEI+aPtjxfvz+A3j/j1Uv5+MP5LpbGvqY58JRLeI/nzGvXXBs6fkj0R8B3yqQUR6Mvsib2hZEDoR6VO8Avr/Js6/9GpE/3KmPXdQ9a3pc5vo2j/ps5/Wur8Whj+Q1InBPxhKvxyewT4+akT/Td1/l7qxD+bCHwjb6Lgz8yb3yYFfmLe/Pr89/fy6/8reRP7yoCF/+R49QkM/1oQeXtymvw30/5PZdqvBe4/JdPCn3bt/JTMif2bOX8nc0LgpyXbtzInAv3EzPnduyf+d0yFkJ+3O7xlLRh5B705o/KDDAZ9b+f62jH/qqaU06hvNKX8FHthn8+GIO9A4gt7gW94FwS9g3/UBBhO/O9R079EMZEXmPj9r8d+K0rAxPn7gynnh/bqP63VQN87dX/bBPeXc6hGXCX/uYzjMQZn9Pyu/v+lbY4YDrwxY/ru897kY9sIjYkI+Ukej6JfNKQhBP5WSyr4RoT+UAP7/h7/bZT/f9Pj/66e/e9ECuhrI8V3nxF9m8Vj4Kf1MgT+DLzvz/R1r29Ssb9TX/myYPz3EBtXwQuOTi/PQ796G8TgV0P4dyTqVwjUD+CEvgQ+eDl/v0KAvtWTG/lD9qt2/yeA9EMv3o8GKY4gn3cgQcQ7FPvJSP3jKbRXM363cPsxUsG/gdS/Ml/2LUh9w+/+Be8fghfBiHfAQXw+/IA/FccfBugj0kf37TD8y/reYn3IG52Ob3K+H3cP0rfN3P+AOu9Pmsz/Lje8/C2C+XvE8Mtq7+dk9Bvi1tdOtBLgP0tDfjmZybLKvyHktxACYL9Slf+acsT+Um3va7ql/1Zt74+nTrq4z4/LjfvzY/Mm/Way8G13oJ1t7195E9oPjP7/uH6af6P/b9H/Jajfv4Ttf/5SbvgJGv8dhJC//XwqpTDgi3voXy/scwr6V4+MkO8+TDn9JHJLfFnR/DfO/fG9PB+Z/89v5/kege5tw701XYj59WmOlxvRhl8X0/FlLAD/+D/0TzQO+Tie4wgBp3gBxiw+/r579+4L0x+jNH5qX7/K03OSMDzGND5MR32QsZfXN+o8it4H1XjIdz94OdRp8df58eO4KPULej2PdQTN4femIj7MV709hfU1Qvzn4AP4ddLxQx7EkUOtfgkQ9I325R/GX4gv78P8E/OH7RRm/wLgLwMAxj4PEAD0pfHf6l3/YcYn/9J9mH9bX/zezbAfa4gvb/qDAvgbO9r+wkMyPrQ+fjDNh2L9J4H7rdsKwB9lG/DDs3t+UlX0Z99J8JWV0d9nlx+z7yHz++7Duo/byz6f0fo6Qvr3IfdxkQEYi9jDn4XnreVUlfV4I5z//CRmihPYu9/qoAAJfwrvI/N8uNP4T/jjW0f+5FgI8MVjgn6H5H4vKkp+SUVv/OVHBq7XJpmfEYiwzwIRSbwxOf5TGQL5VlM99kIdzxH5ZNCx59R+eOM/7xP2YRgAhLr1tzePV++ZJgbY//cLTsn/F/yCX385M9/7wx6n+f7Ir9v9P8c1kD+IbT+lr4r8FEUo+mVTFf5TMfSmCPlLGCIOCL1g50sU/YLS46H7Bvn/wv8hFL3RZf1TQYRhn9FVAv2JoejN5In9ISP6B3XpfMOtIB8RlK+iQX9UVvuU/fxAVvOTemoI5L/NPcAPmu1/kXwQ0D+OfIAfuqj+gSrov6lm4E+CBfxpyEFw/OOgA7wjUeRPA8+x9Gu9/aML/EHNJCDwtbOyv07n/fDg8tl9HvDnoP7RbVDAl0T7EkW/nKfxWqB9//dYo7R9/b8Rc74QPDD+7q2HYIE/N+q8oXkw4P
[... base64-encoded binary file content omitted (compressed diagram/PDF data, not human-readable) ...]
4/EhaEsNniFufhCsUU75xVk7yYZhAP3pScS6cDg8M7NxtizTliI4vm0pfn2A5xLBJh8/TM033pZvrVee6ZWzuNR6LqVoJPwkwZGIohLKT1Bi9kUb4k5YcI/Ddnc4mXIhPOJtJ6k9jO+IYcx5Pt4EQSAHE+YpY7Y4y++F3sVuioyShFvvmdnAJ6SYplnUl9yr1I4dJgBzZMNcmhqIr6aIi9tWzPjmU6gOlXTAuHFDlJEGZQoAIPjlqgbMYlRG601j6vrErKgqGmeZqcgP0K2LcE7d21gm+NDLdVMVl46/K7lZqB1wywLPIxYQI1PcA0GDupVBACNyrpog2Ew4wttdEmRJcknRiEDSHmBrWqI7l9mWZWnuc3rYBa5yj6KNOJ2B1QqTXWgvfVWz3biKgEt8xMPECg2RzjUDmEWUPP3iblhQsqEpKZR2uOSX8UhEqIVJ1GWaHnL7RDFwbcypVH/0ncd0r6xqsKPqJLYZy55lGVUUYUJT8+djDBxq1N27SgsEoUkqMlIsi20QOAT7vm5UAJ8xaLJzIJbpA2loQxDjJ+9hB9ONIWVZstA5MXnfCoJxz5O+Xq9T3fhqPrCne24hkG1we1z3BTvxWYKi58M4DDeg5ozwuerdn/Zjuy3e4/DqtkXqxkFDSVjecNnXeyqcvelR9ACWlNUlu3F8cdl7Z4+p8bhlazeR/eNBVnP8hgokGgVB4Hu96d5Ku44KVNHaHcNxVjbqR8uEjLoi3bjs93me3eRMQzusxoJmvFm1VKYqgF9HmPeJuwR6IGte8HN0IK+mBILae11DuWItwQI3i3XwYqS9XubdCjVhX2z1LL+189tpGC2XxQyT41OS7ACnmVGmB8dnNvAKJHp/UZsSVHdUsW+8y3ISEMgMZCmhJ2bJcNhBM93tOoJx7+3LRVUQUeDi9gl0LYIPy7IvpmEQb/eySPwyUepHTnfETeXrPhmbyDsj5bGWna17xtqdIUfBvUUWi+m6aZKBcnmmO35t5IsGn7TIHfYzyCXhVW+t8k5cLZr86H+qoAnoNT0RBwb9zpWkE5PwI/Ea95p9DusSaqebXQ3yrK+chQX+zcmSDL1R5dOi5XseYqpQ8ZPV5dDMuJNhjLyGR9EJPZMXss4uZtdrlO4DqBYd4AFlGOoCkdyXg2Qd71HgC35blFUeHGaK5FGdR6gMRFDeIZXLhehe8WURjIVKqXgh28q+Y9fb7Jdl8Cgbk9c2URY6SmDpPH+Yk0PUxSXQhdqx9ebOWIvnU9OFSEFKF0fNyy8QrKpqOkVVBpo1koWaGjI+JVpLlt2UPTLbx7O+B3FMx2ClaFqdRW3L50XTJ+6W3uIo2sNpGlnlyke1yNFhlbpKuu3XNA2jLf8IzieSb5CXECQGHidfD/wPMD7zJjmA0Qgl46iErJCn1iW9RnuD1QhkAKpe+LuoBXzesdDTAibr4/0eZ7SfK9Bfu42yc44LbpdzfRP6NLFs1Z5oy4X5kOaCgDHbPTtdJ1fGidMLdZ9G4yypPik1SbaUC+lCrrmLNXsXRcmWt5O9oXYBDJumFeZ59tSc1DHuXKdp2qBo6lEQYsbIwKlSoLaO5bjtWMbtXXA8fOS6nPcCe2lBPEpyQEXTOUgfxrF61Gi/jgrDjIZauXOgt0lDDUi40ozYIb0oR7Xbhu5VYEKfpvPDB+gWBgA3GyMBoO4ygErnqcAw/GQZ82RlkpRer0WODxJEbTnNPTDuLDhclmM9mfwits9NP4IetPR442i1KReeiXl1qYG1+zDboBbeKlhGHcSlYdzr+lnQQ+huUVMizyeOM7QljRh3vZKbLYoix93UNCUI76NJP1pweQZ3Z9MNqmRXrJJpIzwgJBiAXbwQ1Ibpcek4z26myowWBAjmlLNAtGk8BRdes8UyNjpFdVWSKLooGHXyH/lcoXe82XufMZBS0EsfUAatDnfAYnrBs53Addg2p8KzJDYluSIulBj4R0SOYHijdROlHFAYk1DQ8kfzFGQVfAAKk3ALRl5xHDs4b3fWEtTqJJNN63sHV0yjoJP8PaBJphZ253YiPIb63PPgEvOMTPblggC6hwBUYVHWIyz36ObtzNMu1vxC31M1O73jMlhv9S8a5MM+Lc9M6OE5M6QVCUrDCo16HbEyDLXdbxsaszrY9NsIb7nnGJuQ+5Xg+SNgNCqTqhZRS6DPPTwaO6sumxyyzTDPmcWgOKEogAgW9FnBGGH0ctWoRIveqv0wjd7wY0VUkJ2mbUCkhOt5SHV4zUHAwzVJhoIO+SDT047MjcRsT57lP8LxqYlFLDxBbFIlrT8f8UvPQ1XX9eyGFxm/x0dwKwuHG8aHwN4wWydJEoRG0Oomvhp6fi/dEENE4Gb1FiG59+FM9CBpdodR2g1YEM1rYdsb1KDyFIvlgfWgxeywG9A5EcIMoBkKCRQHhqJzWVVbeBCpXYqoOgEOApjhRE4RlaQxGLGzPlV3JzUI6DsVlia8M4bJM2+xkJfPSoRF3Ey+tBDkSszUEVFPIjovz6VpeMMNLNZzNRC/H8nX5ebgfps8QOUu9ODpeOtEABIxx0/3gqJwp4FKpAuQ1xSyx2CFIBeChGlGI+3etBxQ6vA1VS8wrZ7Abeg6OhhQ22Oz5Vlck86RpN+DiLaF5AlB99kxlYmn8jQvkJvenCof2nA6uT95SnvNV0736jm6nP2Q1eNUVHQrgt3NNUBcMcjSNBVt1rARpXxIpzSQv4qzNNhGWuYGC3NBtBHmguzlvCjCTQIUKjUCpZ6Nsiw3p3lq3NCTk0gHeurBAnRujuatJkQyE8ScbFC45y48iINJ8YbY8CBJdE5xzRq8GXOfIUkDhtuHSgYNdPBgj3eQB6fua1fNDHyBXNgdF1PSHEG+5tUaXcXHSh/J4844DDIhbmmGN6dqT+4R2cxhW0C55CZjNq5TkmmX3ZlGu18kjuMM+zHNKc3yNS6e7/fr8SOzc10UiC2KzwMQGfu5lJHa8OYqYHcYPjzk5lbVcy21EFiU2duFkTiTYpAJuMNSD/15UPvJAB6MT6c5gmh0ixwhQ2SMng8XfV7z4+Vb3sI45twUolRLZTPHM89EpvXStgzj3FagG1R2zx6WDrkq5xxp5gQJgQTTD4nXz1zZph7Dxq5J6b8lGvxy/KA9YTY7RHkRanY6+2rmOvtdHA+Pr9MZuqdGPj3cifPyBppriKK3lrJ3keBIj98LiMLMVfYqeJrOBxTS660m3ZDvU3oBleWtfEUZVGtHEDSPMWr1lG+0jTKJF4uPogYtnA5ZWCF27wwDW4DCsxfH8YbnJex2V6RmZeV6acYISTa3nTkbod2RdUQSMZis09w8E4pyWFzRxXCLi4Lew4A7iVPoeCWmec7BNcZs225n3jmP7opoTKTD3T26OjYz5BS3EpQlKwPu+7drc9DdibsPi5MAJz/s0/P2Jhv/pDDKNJeSoo8EmwEhh71zNmeMmNIzD7gXn9TaXNCkp2a74cxz3bsuaLdEaD09Q0Ad5LZS7jwA2P2Sn1mAsG0orooBEXCBtoM3ZrRcWV/icnO5ynYl1RoMADL38qZIz+C2cD6TB+q
VPjIaoHnrrJPXayxC8RSoYIzMB6cpNScSofZKpmLPJ5lXHYCDZjeUolniApvjArI9JCOrHTvAAHmOKNz2RBrMi65twIQml/kShgjZXJ0g7C9XfRSx5gRe76zDejoP5r043ee7H0gP83ZNJPaMUGctEESALhHdOZ8D+Mh44hPeiGaaZ8cDYX7jrtRg8AfnR7q6jVvYaxIUGCiVTTVCQNJiHnIwNnwyZEtWRC5JmJayjoJjjtcsgEiaOT0ig7KyB2EIl2KmTh6NHNlunV90m9xaid5o2rT4x+WSefnEia7HxffwFm4+Q4c3T5llFbVouCoVh8NgS/mRGzy6vH1iW7ArxJM8pSbaqkwchV8cQDv7aLxjMkdwyTY3CNq8wKBFvLQdWDIbHUmjr3o5wXln345DEGLfrnt4FSO+K/w8D4x8Zizspj1Px1O5rlRoY36ARNUYo3RkusDhtsq4nGxCbPqxOQ+zXPqP3PHOOAFyVuMAk176x4yhVNBGclpHaSeOZnvQ/h4gcjyxt9bfD/QyXEuvseF2y0ei7p9jpSpUp1FD7U5nq20zCu6umvKyaXoSZqap+x6BUilwV7bg0j9M3d3GnJyP1A1S4umsqzCrLPk240deOh1o2vEC4a3AA1Uvc8emyQ4BZNpi/tN+KolIyIgi9vJtFLfrIpZbMkAbgmpZErQEkV5VQ6UeKBwcMawdr4rEGHmdJ3MA4OemEf4gQP5bjL8a+NUvIjtw9cuZ3Qwg7p/VbxMDsMMCfd9rqWUMTQxUsa2fZ3n4xVnaqLcjLRudhyielcSOAajWunSbYPknPHvUOiSq8dHmRIcaoFaiKCfzgDXEzrQdK+7IYREVBdiaqXNMG298HkHyovjt/doUTbhIIdGQeCGRnaD+wC0DLemrNhvAEQr3B/ikGCmYlU3Jw5sxUE2i+jXsW9y9Mj2Nq3ZtU13C5YBtTtqBl6g85x7scpUWHic2zvCx5MJHVhQ4XQc6/MyqO8KeF2ik2rOUNU6jUhT3az/R79sRJfqVAuIjffPuQRgv+MVjLCKT6QttJ1njUpz/TC9Xw/e8vUdzaSr8Ttvr65tLtZRi6fnyKC81cb+u0Gw59a17lD+UqNLu8pZOLov5gMtVkIIhUMJgqa/LQcdwKGZbirPmDDN5juktlYmYq6s063EHb8fJ4Sai7m80hZ6SMxtwIBX0bbmG4t54Qtot3mIfn1iJxcgZVh8dXaxI3HDDZXJMmhNXqXzakNpVevu8O08mMapTA3ieWmDdb2aWpS6GhmOoXKP+QV1ySjrLBX3N0UVCeafcxU96II763T8BivV6MaHdlK7EBM93zUaaPjRmtyXHXW8rEBeKOX8gl1MCtcquK0+ZhZItn8qPE2iu93ukLLDgiU/tzvI0COhzUbqjsaEIQvEyc4d258ifnKYp+Go6vVikK+y5bUQbMpbIt05oM/rAw9yeWnB23Qqi5KOE3xLyox8tjsVedKCUpUdmhOx6DQSuvJjeeShmpWi8uz+jaS+7S0/uqWZUMRauG+NAQtyYd0od8jyfyiIEeqVpWo8foeM4bJnfW6SGOCg4aLV/xDZEiai8ugSkG/kPgcLx5/M5Od7d87iJys+19Ezmg9ejhMaJ2nnHvVN6W1D1peFUARRDQvQdxyZkQSVt/SJpYl8H1daBT7RibxeK59rGqmh3F/IRNaDkTm0L4l5F2X0QVppz4iI8dSsT2QHETi4ZNednpHJNM9S5Np1RZZEic2VTWkMj8TkZuMoztzx4Ge1QnJbpI/98UHwwXGfxchse3W0otr3n5Np6imJObvcWsrd1pURac0sMHt1dsfKnu1fuCpaUxz9YCbPXhnZSHWRgmX/ASQGmyJmmIrNzrCY/USArUHmOjfjr0iZcxzz7Abwzj/xC3Gaa3Slce8CE0uAHZT/bjDPgA5QpDe2qtngnuoYwQAJCmyE77kO2PoKR4hNMwe3Eoohyzqg5m/OrhcRLmOTckd08lZPRMP3BD7ylGyZ+othdQQnmrMRgNMgWD/KIeRfo//7Jq7ipmFVeU/LCeG+9Ba7hL7UzW/e35+H+CVJiHPtsAwhCfiDf2xDw4dOxfzE7BvtGeuKveAjQP15PLEinnpj6qCdmVyXZxZea+KUmfqmJX2ril5r4pSZ+qYlfauKXmvilJn6piV9q4pea+KUmfqmJX2ril5r4v11NHD5+qSb++DYa4vZ+PL4w7j/TEtPY6tl2ervnBB+Hqpuzlz6fZGaVZFeKBdHiJKagKcnqlHCRSkpiXcAOe4nZXUkoKbe50p5YMLcr5xY8czf5h32sgPiWfPSH063nibzCmYlr+AwHS1RD2wwl2hNFtyVqE+V+0VBZ5iTlxvm73KyMK9EsNXMW5dSSZFMFR3EbJdGcY4ycxdFSe6VQ84ji3M0UuyaZjqgOkj+o53ZwPpx52mUKv0KYrZ7jpWk3KdbQE2qHFuymT+4ltPasdW7N59M0y5C8shGcOFmD6fTuUp6dqSlPjNoxRIWHsEmsQNC76Q9YrLaHfKoihQwPTeWp1o7ibRv748UyR1Sc1l4/icUt4VeRLVbrgiKEe3FkM5g95YjvGBlP9rlgbosbTNAQTGfGSK+UVNTyvfdRSeKZ3YrsAV/XfAxbs8em022zoJH0FAIZ5PLgaSZYCYJ4u2iMvNXN4wkDYMbuuYNanXQW5W6m0SpcXfPbcVpnqPOB+KrXvfvwuDtO+Bzz8OLxLj5RAqNLrT9LjeITgvzUkZ2TblOciC3BpQcmi8vbDkrnwX0kMkLcFJouaUfgH9Bxn8JSo6HgeZ2g0vqo+jOAxrz6UjA9ylATmK3xO5fh7Y6j6Dzj6PmxfbxfmR8Y4kd1xaIWHxtBSR93+XPm9quhqVU5rs4JH0j38bv5YTGG+UhYbvFJz2n1Lfadb8rE9vgMjnXjqyFmIoZhRNKCDhsO5ZmR4huER4P01FZ3hrNFf6BZtjQ8rBab9JyrtSLOhIg4LmFVKxby9kBOiZvIQrh+d5xBnjTQ2cLaRa9GYb3F8fMS7qADiAUrVj3gKHHbA/PDTsK9VoQncbvINZCWopO6pZyCum3sFpIakO2onh1NKRcJRaGRTYTL+/hwzr5N9QCIMVOKnC2bI2/dTruRNyeF1P4qTf7gWKBkIeBVtFPS1vZEbvYjXPPWxJ2dXNHw+eNqlE3ZonG39UdfP8o9KoFY5qsj3H+8cQwMw+igNOPImCWZmc2+gfWdpsozyY7sj70rPgY9VY5G6GDg1c3gr+YmyY/uRsXm+aCiw5p10LSXCRaFbDA2KhTgywjpvLNm52OFaXyM+8sDsvtVHPx+1Bm62hBMJ55nB1I9uQY9i+2zb584nqSTFeVAbMvZrSNQIsIkfUii6/ZoHdFCYJleLIER6uVIa7teb6tnVDl43IKnFAhqjkCZm8vQSeRJ/ulV4pwi0SFwfOj+4Cp6pmAcTg15zXMfVVqyQqSzr2DYSh62Xgo1u93G0aH5/ggpkITndjfXcpnTYU/6ceZTi21FCJNfEZuvca3RziVP3gavxp29r7Vg8BsAn56Rez/Snoctc0RaYV5RmJZfXs
55v3QzDrl8u0KW4/mSfUbm07LPMHRq+IC1M9DeoMMudh1hl0KwmbddBxg6XCRgPJYt5wFyGGNdmBrCNCg4HuPCPo8yJ5fGKEyZ682Mw1fp6PdKeqLLQS9P8IvFCKzbMB+aA3eqScxBrru45gByuurPRdlt15q2LvcWthCGsJSBdiXPOCD6OsVKLjUoASX8PomSiCdX55k/OGo2elx6FK70bOciO8I9sRzIiCIdsie2EZh3nw5zf97tBxhPaW/waGLGOVvg5ON5MOaP3tVflI4NUDMCp8cn/KST5ccvZ8wzPQVNZ+uZKS4aX9o7QKV8BGkc4ZKUpGGJ9wAPbu2y53aU80dOMDSwYxGtgb+OFszWkYrNkAtMIE7iV15yggBGI3x6gDvUOs7ZMUDpTiEeAog7fnmY0h16HFZ7zO5FI6neIrNIh0JKGZd8DbUGHStCmrJZ5437hRaBrKQTnzRxHCeWhwKB0aKiageCrESo2pBI94vRdkyk6MLg7iZVSilBBpi3oS7KYutSNt71NgYem5/Zmz0HZluxgLjcB3E3nuUWzkxUE7rpoQThBgJHYuusdDaAXc1pdelwyTrAMSyJY6C69FSheGNXqJSCazWODx46FRLuiSF8dcbAyu5ADitaZKHhpYvGLVxDlRYlhuH4nKd00ZJZfM6qstyJcLemylshlbCbNs0vp6Z5CYbtFqgNeL+fLpZdr2BDPJ+Pi7RTvto8pbGYsVzMo+JuL5Tv5i6tf4QcgzuDHOrIErbqKqtFlyPeAJoirzTasHduSLicYPYCDkgSQQ4SlRwLuGABBqJthtlBaDB4cuxop3KZ59VkcytKitEkEFriG6XTZSYsJx+/lnXPeLfljCWh3PZKpDmcXtlPzLUYBWvO2vit1fUbEB0r5Tp3YEwed3gPD4PJKq7P8IVksUJj7vatFAXfEz+eflAq6X5NniRGClU6B5NyJrdvu6NUTzs+KsNwulh3m3+M97uMA8sUt9JQcB3rPZnDxYlbpu+QZcr2hvR5sSqbeXCD1YM67UyrxU6qQotRA1YOO2sVXF0+FwkU+ua5Oh63LKsUj4VneIyYugSkPCz3kwpzeI0K1MiPwZlejaS5Cbgz31RpaBbT6iAk+9enbSmxbtwrB5As7gIPFKcy3NYDnIpnwIMDQ7JOooev8nX57Fc0xIIYgOc35p6OEodl5kGb6JFr0tosxjxWEvK+R86GcgZhIk4EcPkogRZVGu14MXofdLopjZB1vfIhUQdTd4SKYackxrJ6z6FdpEkcEind1sjiMV3M+zPx1dWSmCo/oEVXvERz9CI3SOVAtevihZZiY1XKIPllIimC8Ji7H4gcVfsIlBPcZYzGuwrltOtffCacu76xGG94oM5aIGv4lBd82lFctpMjhPJo4xURxdPDXD9iqweuOIc16871Z4FrHoy8X90zSDGQROduTXjXxzM6seAhHegivaUYd5avILN1kvJI9LhrdMQnxkFdSzqZRkfqXhDgvAhMFB/GJSkV4YGxPQdIPHUVn7GCgfUZRI+4c9bo5AAJIm3/QdJu8mJzJs5jOVDYzTmySe30Z32nM4NT8jhTNIObtJLzOKosMlSpNOeKwuJjHyEflioyCC7JuRes6pRR6Lh8mkqrqzt3dgOQO7M7VAvaI7Jfzd5tNMYITrUGwVv3TvNvbInbZUsnqi80TYn6WcWQRN22kHPRIjwFMrKocDcArLjk/cRjGPZOapTC5JC4rTqN8YopPsmbb0ecf4s4D4wMw+hovaANlCT6SLlSrUFy3BpH5362ECAbW1sG0M8x4+xAQWSDpYGk6qc82FmJxi+s3Je7NXVsq3LdwXt4/kGS8a7MejM4iAeTHzQEmvxN66qmtUom29yJZGdfQJcTpsL82h34dZDoQXQmjng6TlPK5GZJOcT5aRnZ2AgQHLPEbCNRubpefV0cn71Np5ERKE2Ad/LtfpWMxR9uM4JwNtCZt+SUekoPyhJk+qAuKNehrHLxB29pH4F0oBcGsCHlacnFi3C+3IJd3ZVVozkgMp5oJsUSEI+xKMJ4p/QRZDIXu+mIRAewUgifNZ9NsliUdxKSUfd2RJ27m0t0dVYUwtqDxmYR+Vt+suJHtbu2q7OP7glKc0vdaWhUPN5Nz7oe5xs5x6ReD4oH+zkTaAe9LXxeXodcCS1Mo498qIeFwojWNQzVM1EGBB2xaLq2ZdcWRR1bcwmVHk7hxEK3OUHxfErRwbBb58LM05HnzEZ3d0skQm1MFIyTQ3P3rrZKa0fuoTc0e+TWFfa4CdZIZ1W1AudFLIBw+SE0HBZcp5Zfb1r3NIQtXky9HTOCUx+38jh1KXQifTbOIH8qmbb7cvr2TEgex1PWYayqsSrmCku1fu8vtMRK50al8X6ZgIenorR4EFyApdwWJ2+L6E70ULMMAw8oOWPtNggqbKezkIIsKZcgJenIUJV4NemtzcwLFjwmX73OGxOiyrkZzxGRzXUs70GV1NmHQc+dCrxRdVAlGPmNXQFpf8pNhIA8YCUs2FBP+W3jJKYYx0fGecjWDxp4RgQGmFdpbO8c+hGiuVqtOszFutEQRKO8GNaKJLytAU68puVB1YrHs1kRjpEFdzfqFEQ0ySUjLbecbCbk5LE9RkteJIywjwxtiB6A3uKNT6/1yMrFtgmEMZz1BtY6JVozc3ArgSaAQqqZzqTmqkqvpTqc05QPhtw/pEA501XVOBPljrYVnLvHtl/woAGxpE6cudcEqTver7CTMzG32lt0VnETjBjHhcM6+oa17oypeZL2gDRsE11dML66ICfxCp4NIxRbylbPk1lcjdLdnDJk/QY5Vvhz9WkVRw86Z08Meh0Sg6py9Q4kTltE0GU+G+WLsqL5XN2tqBh1tj2dQg3LKyN3t2k/8p8rrnhjZS4w4IKjtQv1zDEUhUXOyCU/BsXHvLHbg76ke3d6dF6jAAvs7hQ5A5fZMc86En8AiCQlwtsjgXjRzi2XLxw2L2LM1iLBbu58KiIZ4Uicm+QcMN7bU2F393SDm7kHkNyBCKQcO8KjCcJCMu3oBOaVh/1w2h6cGcvIMsEFHL10hWreRWEJ08m4UmtFAqhW2vVBLAUJvU5nP17XL+DUPh8mhaBDxD8vGN62bWX6UHA/O2UnujfxEno0jpcjY2Gl48yUU4Sn4ooW5VQYWNKsopqiaJY0nq3EtruWj8yZ0RBpmu24A+i82XQIyMwnXjRFDiSwVpxHP2ihqey+ZcKrylreZRYEraDDIp7C0ZHUXMhXgj4v3qoq0JvSrSYr/twm5d6ojT0i770KJZwbnY62bmItUxAnSpzFAI9Iu7tav5rIepMts/EYNtZBptEjilIIEDV4SAcDohy9E194pnzmCSiFCeeYMpOyUrdtfM4IeUVxgeFYobu5CGIttFdQeZsjPi/fo3toT5kmM9xDsbt4jnXNmrRL4dWR9dZX9jO5KhR5mzeDtIFq0C02xMsIutd9XeWppr7tGydvobpbV8V20keOpqobPA8Yt/0l4Qap0fY7OShcF/BmTkvbBRqAXWWvV1JHO2LEgzzWg5AZWc4Wu
HWcZKOvN+nicovjIfyklBfASXuTBFjOAdDCZn1CuTwfHb47cBO1a2iKyQjxfJnZd1CbfAkOgIYia0dw09ykKZSw+wtwKYbVeTqLAJYbR+MQdfEbqqasbCxYmWifrSnviuaVfM5ScjkOzLbher6dsoqVpneMS1xjlCVr3ayg1Z6lLyXdqlYFJh2uW0U5f7sPJtPWyqbqpU2D2DimtlFJKvBk4IMCcrMPEoutMUg8o8+LpuYXBq5iErZKmtCTKxqVnIUOyUa6iFSc9/6uj8RIa2qqD/iNCcUVvD7vR+yg7vpxFc/lrXcf2MGtIAM9N7+NzpFAf/l0PAQFPyDv6ByhDxD2F+oc33lY8n+czvE1N/WldHwpHV9Kx5fS8aV0fCkd//VSOr6Uji+l40vp+FI6vpSOL6XjS+n4Ujq+lI4vpeNrbuprbuprbuprbuprbuprbuprbuprbuprbuprbuprbuprbuprbuprbuprbuprbuq/XnNTX3NTX3NTX3NTX3NTX3NTX3NTX3NT/+a5qQiOfSB/oSg+vO4DAv3tk1Px/wJF8Wty6ktP/NITv/TELz3xS0/80hO/9MQvPfFLT/zSE7/0xC898UtP/NITv/TELz3xS0/8mpz6mpz6mpz6mpz6mpz6mpz6mpz6mpz6mpz6mpz6mpz6mpz6r9fk1Nfk1Nfk1Nfk1Nfk1Nfk1Nfk1Nfk1Nfk1H+jdCSADzD+C6kjCgIf0L9ydip+9RTHnUCWwCJf7cVd57r/Af8LlI4yciodlY9KRzpZFxN6KR1fSseX0vGldHwpHV9Kx5fS8aV0fCkdX0rHl9LxpXR8KR1fSseX0vGldHwpHf/blY7tZ0rH8u19NJyDxLlP9nRw4CetI7NR3sfJqQ+bRl2U52wb7aXqIkmnb9tuJNjEkTDYtCEsTFhbwGofnEoqPS7Woron+koIofxOdHivBnWN73l/rpuAxH+YATrhMx69TQ5N4QTGJ/iKozP6Qw3pRr7lhDiMtxB8poYECtzZns/DHHfvlp6Aan16b4egrsFR1NlsomuqFgWxjEYZGKIbtdGVWlCUtnB49Oytldhq6i4KbB6lMjT0Dr1tlVRQ2vGGZ/js3Z2gc/YeMNTbG/DGoQu0kmjN0LW59h4YsHb8J8UNDwaZ+dw2SKXzPCg/9rX4AcRIWyJvi9XRQRre6FojsI9Xe/xfdvcLioxi+1ythN/RtH+cxYEkjs+gv8WT/8Baw86OaD1iFcaeDQWutM1zxE2mwjiOJ1dqc8B4oqlEv69SQadbzni31g29+v5cEY0vz859dzkT+Y96xy7g+RJi8yOj4jxsC6VggHCmfesuDAp46jPP6gIjn5Nqpv1hoOesyyMSyjJwY46MLkcUlxAE9+RG4w+TPzenbYM4RlePAYNUaLsOF5VMzy/LsqjGHihK0nn3J3HPP148Hw/lOrBXvgziT9MezlGsfDaA0LM5rrxrjx+dDAOGbhyuGA2pdpV9fhqcF0BMReoiW0K3P/mhmeGhVKdsaRse0+YJWoJL+0TR6CrcMiW+H/eDKTU4TCUutDpMV3SNOxu/dPNkV/2Op4oW68IRMJTTHT1/EHQN4VhI0qZ9nZiW5tvSvok3HCM+TpZsthy/PtbFO3uq29u8zdP+p/ulBGppC2WJqXFqPfkbm7PpWDALsrsA0NP+4WJrzp8TRTK1AcsYTCYyif3BmDuJ28fRVmOWud3mR+8rNKuBVSlfrDByagF4oE+UwMuPViRhK+LGZ237xuEF8QWaG5QpYYkCZuJwhn13bhkrnQJIFAAAUOijrEdDGTQZvaNpvbpN21s/BubOWWFIeOeEbGEKa+HZajlS6iYwfYqenedFdUeCtyUAOZtMtkSJiUxDd+i4v2Es+5uxFvKinXk80iASO6pZciVJkgLGrhYTtSmBi8H5QrpZ94JVEh0i57r0g9hry83OIZxXJX3xpl6Gk0qVJfOabDDNneIFLepHDrsaXfPxTI9sSjucYQ7UB6iNZAnrkYcY5uNExSxJbkQ3IrmumtpkFwl0FWGri0dH4rzg+rwn/6+y62iOmFeO/8jFHI4My7DLnMkbc86Zv96E3vvK5bJ9sEoXSbsrggBmunuaA+e4z0O4n0t5h9ASqS45s7z5R5APX5TUKksRCAoRjZw1WBam1a5UOq1/YuIO9KSp/fzKePajMoZeWQ9BIGjectL6QxCknApkNW0H3zzcHeXadLIH5BKwnAmO56vfX5deRJQVEqVx0H+yHWWoEQ+gT2XVMKAoN5arQ+6eHSazF7QZsJp5DAvPJlkWGVMNtZnurNyzAN9a35hpfHzhPthc7GiyAyvFgf51f1KXB96OI3Ehg7tPRVA35qbUd7ucIkJgZjJ3WZpmGVQBhzKIMvI3IlLtjYW6R3FjYfSPpT4RhakVS5OkItWwoBl8aJNMnDA/auXU0jAst/nQZBAE9aalRfF+3sGjLu/MVqw8vVuyEyn+WbxC1dXfBYuRZWTybHma1nVdFsdR2sq62C68nLyXSyu97q2GKoQcvsjTNhdGIxWV5mJSsl5lcz5f3kVn84VP1qjbVCWW/RbW+bh+7xKo6iKVKwW9CsA9M1k7IkkoZv6r51p+M4J2CFJTnRH3TYX+M1LsRKj17SoMWbU+oof58I7AS7dkKrkGdSklCX3tEncmaNY1TTnpxVDbDIVa/0ZL/lxP1sIUQ+J0obdTmVWizufmrPhcK+5eiP71rh9zJwuOSoZRO0Wec6oJJ97ioZJlWalwdi8EMaQW1V/IfdHFKnNKflcty9Gm2qqM743o8/c/hPSZglVECTDDbuqdoeI4D50NUlPXaZKqX3j20/SJBqAlqhTjkzNBkuUEsfIh0DROJjebfqn3XjfSmSlVvPLWX7/hVke8YJAEIfJ45EkyzXRzh3SRQMWvocry/COV2J2vMA3f9w3gb6jDw9Av3aL/gj/ApEzs3Q7hVcZk0TTN8RsXh/N15YukDiOThCo6mkgM9uCT2Y5NX0WZhMdpDDdSIcNK5THM9N6J5g5/sogIRTRSW5swV6E6bJa9sDNoyMs8xf2C7omXppwIBRTx/cC2A1XP81R+EbX0zdaD+blYiiDGNxtsi/kML5g8iiiWxeT9Sj2avh7n5tPvyoDOiiuVcFoNCvbiP7FbWZIoqnYWw1fEE5bytDam5wfycYpCMZI0haikKJO7NxopfbRBLv7dAtcQpNVvLPmQ3phHB8e34dob2MF2YNH/nNCpFpKBroqoMIgRYm/Y2SEUcDEq8SjRb4Gz/q+lOffZWYdss/WscAe7uo1n2Hw2Ye/lIutK1Jl/uDTvecD2tn3kWkWwFvnn+oEkPXms9nPdNSMpCkKvXDcMCKe465v/Pmf2KyMFsa56vcxmPaU0EH/3gTKZEZNMOvthnA9n1U/8ZKp5V4/7UVi2Ca6TZcDa03VdVaFkQ0mIxQaGXTe83eesSUskSaz5qq4/y92Zo432kk7JpJTzYAv8Tsd7YpHY4lq0+vdTJj8hy3iok0CXYPEZgYd3zqvT/2RAty94Z4L0FzG902xF6gev3jFcjDO4aduydhDGLq5Z
hFPi9CQFdvxp+IUc+n5z4jdRnHPnLwOz7vuHj42lyACjQJAe0pxNJKxhsLven6ubZpQqMwU2J0ZOOni+wL3H1dTueuZfIXE50B2Fp4FJuryVZbSaIdG+7Z38NFG0JlP6s+++a+MKV5/juxNSDSNyRu6vZVI7wS+rbYtJuvcJI4Z4rV+oRix1lbyRCQAxtcM831zKxasveXO4SOeZ0gvFof4jwrumfjP8z5eNtsr9cyJ8wAsE5l8sC6MGT39Xv1436cgIW36R0LqOO1zG+SHufvMruNP/rnAbYoMWXScCBcQQMo/drfOh2okuiXVuoDBMBFSZqpnKsxuzstrMikxFerqQBhYJs/sYipJ+Nxy/42VfgX7q3G/DBmHzF++e33Sl+CYsdHa/qaCi2nkjJoqVryQVvxuqddH0JpJNXE8eZcY5ba4iQ1G2tJX2SfzGRuKfZof9hripmWedOizule8LwQWChk92d+Q3cFq4pCBccl0UmkZTsj/anloSaKweadO2zveyxt8hmEryKBEEKDm7QW+GuZojIAd67k0ficOw9xKja9bru2T/dExTvH+fJeD7f73cGdfNh8fOgmlbO7Ll2zs4oQnvobTgwBkSrJ7tidrnug484WnCHQeOg48oZse5tE9zuZJTCv48X6G0ri+elnJ1ZWlrKykMhWV35QTM9Ywxa4YNITK5yzb1xzxP2FJi9RwvUUDUZ7I+v1aAJX+kESvv0DQrDpTHXHYoMfmUssWu3XeHzGNGe9daNaDpIIuXJJTNKTTakj7tSy7M3O2bEL2B6O8Xp762P6MIqv2nrW5uSL1SnaMnfILkZfi82VAIjoojW4FiQYkS7U7JQMoAZkFP8GF9RPakLea5K+9ZYFlIjIvhxhOv6Yrl+BnQXr+ztF06HrQEi1PEuQBssToqg41C/1x7U5MvkrryFwDA4PbHKMpboFpnmidp2dP2GZdlymgnvKrS4wgctIlM7M8prJ8lCjGKVj6T/qX2NyvgY2CwL2KC+4HL9GoYOQXPHCUIo6dB0UAf3heVhvUckhRk6cALsgG0Ao0phL/9BEFUEH+EsIaB5HJH51n3gfliGIkNN0nAlFGUnIL8e+YkU3Rsf5juEyBQ+0zVjf1ryRBkTUSg2aBHbYIkXX3X4CpmL7QSH6ie+MZeGsWLukEVV4gmtW+sL2KUS2r1Ntxk0ONEG/KOlL87V3kMLG+JJ/z8wLv+Ole/3wIxBCP/i0LkNMT7SyGIe6P/RPtiW7R5sgtQ6eIri5/269Pz2MfpH94weB4K0eSvBTQ5pwPWOKC7raKE05kVZAxxkexpQ1V1f2bVCh3tNzSfKdOI57j2Bwd1ttNk6AKiNeA7EgzDtFJjQSAv5Z8pNqtn0qw+4QepEGNmkhM08rTJL52sc40rANXO/PKwcJwQ075FnI5n3JxfX8GDiNnC4VF6hNrivvDkofLLO5+5uWsm49+NjWc7aSd4xF5VIm8ZF+KUk5Tup1vliWrKMDak529ulnYM1dqUQ/GOz2IDZxBkSIV/I5GAauLzHH+XCivJQi9FpM5evGmy44lITRP7hwAcSjv40ZVHnvMK5Wfc4vB0yXFNu0K1Tk7wKTIsMhZePNA9t536K3xNL4Hnm1Zyj3jaAMrfSbx9IiLxNwKUxUGlrEPXNLPgown1DXue53JSF/45hmKufaBsP1ynEXvOk+a6qrU5HKDECu+8CQlyfzvgUxQwc8/3uqqu+Y0BBJWnEZ7xWjsCma1KHRnqRzPLMsu6B+pMPCQS9Iz5dde7BYKfugmr3mWVJxLNl0sG8yvQoBBBz8afUPNTgj5Am6qKCCLeD+GFOZCOmr1w8HLeKlHMm6nSYoODE+vllMSoGxBBp2+aeV62sZK4wJozdYlgmzvrStEi/W8dgJXqvsg2d29l0kgXZPpJxWjrV/nJSU6dRPiRTeWU2tBB3DwwRXjsamYdG+M29cXZhW4zxs+L1+jIRR5kwa0rhb40BuuzXifMhu/xoppyFzhMIK4tHh1SeVFubGCcslaspn8+aZ7HZlnbXrm4kr9ML1vowLNGHyIv79VZ2LxCL/ybX3mmP6Gbvsnb7FmLZOiKKQxVlT1GKOVri9Om36FZg36LRrMK9E7JzEVKLQBVrNfHMxxEUHDsYfXECsfQxTy6DawqINWhWSMB6sWaziDVQBfxvOwrefR9H91izO4WUcIWgTOpe7cca6sRvpLvFjMzVBne2NA3QQE4DrFxyfzRJegLWyMUfTtSkWVVrPS7BQLC5XfI8GBYmFtRVvY141Oz6mVCxKpYi7383EkVVll41Y41f5+htf/w5DTd5uZHvF7DQE2y/8KguQuls8uwpziSUSsTQtpTiW3Y+N0vsTHpL6pMMAGZupXuvwkUD6Zcty0LsaxhYVXa2WbbfsrhJRZ+3k4DbWNbjyZ1lb0QSapA/KFomVjwVPt+v0jfw8CzLHbAt3OC+nDO9VxTngFdzpU5tbdqD9IbC7OE4AZ31YAohvCkoPuCQ4C+x+ax79i4rvwyxjzLqpz6Jt1FSVfegXXSfLqRkJi14uBafJLeHVCVHIuhmGG1ryLz4wZIWc9WriQ3nGyDI+PRxCfNDhgra3coIm4Cv8kxcIT8hOur4Y2aMH77M1F9B2BAUUHBVIOeyBd46Wlx4KVBQdXThGdNcg6/DEWR9z+D7XlANqeTcITvSZUpgW/+hXhvdZ61XRKRMr4hXjT887SMZmpjwu9sJaMb+vRxK+DVc++/rA8qUFiaazYOT/X50ZQw4qdybSCRiuXN+OX5mCar+QuzXqD5LjNO8/zCMF7poZETLHIfUiL3vl3jHf2vwA89pDm5OVetzpueJX6UFRdhVGFESvy348C8JxwU/hGDcIyN4kwxO8aFi+ddm71IApiEkLi1NUmfgRQojCemgAxFUev4M356XVR3ieH0b7S4Dn2T+c6ekXqNG3KXmgHbdN27rfn1FC7M9pjfJFR66Vr+2zYas/VvrT7+liohwT8FM9bJVgFkCJ/NYj06mAR1R0mUmPeD21KMlfMzVfgc3xfTk/mhKZHVDhhOAmPtr51lAp3XBucIoEtK4mu7ScDQjPMEs+ZKMoMjKZnppdmXEbqsH/5YR5dreX41QxPqPz70SCzVpXFMHWpwgVivPaGEvnxv+frYAIeCvpXmPrO8GKZZNpUe8eWC0kz3TjZLyrMgWhOkmZB5+GkgItOk3sO8O2WdNM7K1LcQUCA5LUXVPFo1HQet6xdU6F1VYdv0KKGDERCxrRsopAgngmabwqYPMFWSP9YFjwX3M4OZ1MsiJJ6aToqYz1+SIUIGytdcxfT5zrPH8sU/dQk0Whm3wv1H7fWbdOE4joqpLj9XUSxQXJ363zEsd/80n1M++CN6uspOj87CpL+25+BaW/gb4S/OHxlFn3KRt9Y+1bNDEb5bjLIp5TRghxuG4ZExyfE2m9CTlQ7s/NhB3AfglJumtM2MBx55bam+i0LjL9Xz0Scu+rSN2po9gk0RzXupEwto2P3a7c59xduvjpfWV4RSGUbk73QtwVXSgA5NPHzwKeQpMRuJVRWmboJ6nri
MOauH6Bu+6O2FXsMJteuhB+ZWIun8DT9uriYW2yiIrZibofnECG7zB0Xt06K4xOgx+1zCEMnGL6kdddDolP/lpVo1acIxTru8MUU59PvuovTvnRn4bfNgfvBOb5R2NLSl2qkfl6wXUZZGudDdB2XBgGmyP8OoouwAZV6QqTViyVbRMqG/so3terr1w7lQloFd+R+39v+wZv8vBu7/061NQf/dqo1C/9OojUD/i0ub/A+M/H8btd8fl/EFL//1N3GJp0odsxy84j8B</diagram></mxfile>
|
2207.09666/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,96 @@
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Image captioning is the task of generating a semantic description of a scene in natural language, given its image. It requires a comprehensive understanding of the scene and a description that reflects that understanding. Therefore, most existing methods solve the task in two corresponding steps: they first extract visual features from the input image and then use them to generate the scene's description. The key to success lies in how to extract good features.
|
| 4 |
+
|
| 5 |
+
Researchers have considered several approaches to the problem. There are two primary methods, referred to as grid features [@xu2015show; @rennie2017self; @lu2017knowing] and region features [@anderson2018bottom]. Grid features are local image features extracted at the regular grid points, often obtained directly from a higher layer feature map(s) of CNNs/ViTs. Region features are a set of local image features of the regions (i.e., bounding boxes) detected by an object detector.
|
| 6 |
+
|
| 7 |
+
<figure id="fig:tradeoff" data-latex-placement="t">
|
| 8 |
+
<div class="center">
|
| 9 |
+
<embed src="figures/tradeoff.pdf" style="width:100.0%" />
|
| 10 |
+
</div>
|
| 11 |
+
<figcaption>Comparison of GRIT and other region-based methods for image captioning. Left: Running time per image of performing inference with beam size of five and the maximum length of 20 on a V100 GPU. Right: Their architectures </figcaption>
|
| 12 |
+
</figure>
|
| 13 |
+
|
| 14 |
+
The current state-of-the-art methods employ the region features since they encode detected object regions directly. Identifying objects and their relations in an image is useful for correctly describing the image. However, the region features have several issues. First, they do not convey contextual information, such as objects' relations, since the regions do not cover the areas between objects. Second, there is a risk of erroneous object detection; important objects could be overlooked, for instance. Third, computing the region features is computationally costly, especially when using a high-performance CNN-based detector such as Faster R-CNN [@ren2015faster].
|
| 15 |
+
|
| 16 |
+
The grid features are extracted from the entire image, typically a high-layer feature map of a backbone network. While they do not convey object-level information, they are free from the first two issues with the region features. They may represent contextual information such as objects' relations in images, and they are free from the risk of erroneous object detection.
|
| 17 |
+
|
| 18 |
+
In this study, we consider using such region and grid features in an integrated manner, aiming to build a better model for image captioning. The underlying idea is that properly integrating the two types of features will provide a better representation of input images since they are complementary, as explained above. While a few recent studies consider their integration [@luo2021dual; @xian2022dual], it is still unclear what the best way is. In this study, we reconsider how to extract each from input images and then consider how to integrate them.
|
| 19 |
+
|
| 20 |
+
There is yet another issue with the region features, which are usually obtained by a CNN-based detector. At the last stage of their computation, CNN-based detectors employ non-maximum suppression (NMS) to eliminate redundant bounding boxes. This makes end-to-end training of the entire model hard, i.e., jointly training the decoder part of the image captioning model and the detector by minimizing a single loss. Recent studies detach the two parts in training; they first train a detector on the object detection task and then train only the decoder part on image captioning. This could hinder the achievement of optimal image captioning performance.
|
| 21 |
+
|
| 22 |
+
To overcome this limitation of CNN-based detectors and also cope with their high-computational cost, we employ the framework of DETR [@carion2020end], which does not need NMS. We choose Deformable DETR [@zhu2021deformable], an improved variant, for its high performance, and also replace a CNN backbone used in the original design with Swin Transformer [@liu2021swin] to extract initial features from the input image. We also obtain the grid features from the same Swin Transformer. We input its last layer features into a simple self-attention Transformer and update them to obtain our grid features. This aims to model spatial interaction between the grid features, retrieving contextual information absent in our region features.
|
| 23 |
+
|
| 24 |
+
The extracted two types of features are fed into the second half of the model, the caption generator. We design it as a lightweight Transformer generating a caption sentence in an autoregressive manner. It is equipped with a unique cross-attention mechanism that computes and applies attention from the two types of visual features to caption sentence words.
|
| 25 |
+
|
| 26 |
+
These components form a Transformer-only neural architecture, dubbed GRIT (Grid- and Region-based Image captioning Transformer). Our experimental results show that GRIT has established a new state-of-the-art on the standard image captioning benchmark of COCO [@lin2014microsoft]. Specifically, in the offline evaluation using the Karpathy test split, GRIT outperforms all the existing methods without vision and language (V&L) pretraining. It also performs at least on a par with SimVLM$_\mathrm{huge}$ [@wang2021simvlm] leveraging V&L pretraining on 1.8B image-text pairs.
|
| 27 |
+
|
| 28 |
+
# Method
|
| 29 |
+
|
| 30 |
+
This section describes the architecture of GRIT (Grid- and Region-based Image captioning Transformer). It consists of two parts, one for extracting the dual visual features from an input image (Sec. [3.1](#sec:extraction){reference-type="ref" reference="sec:extraction"}) and the other for generating a caption sentence from the extracted features (Sec. [3.2](#sec:generation){reference-type="ref" reference="sec:generation"}).
|
| 31 |
+
|
| 32 |
+
<figure id="fig:overview" data-latex-placement="t">
|
| 33 |
+
<div class="center">
|
| 34 |
+
<embed src="figures/model_v3.pdf" style="width:100.0%" />
|
| 35 |
+
</div>
|
| 36 |
+
<figcaption>Overview of the architecture of GRIT </figcaption>
|
| 37 |
+
</figure>
|
| 38 |
+
|
| 39 |
+
A lot of efforts have been made to apply the Transformer architecture to various computer vision tasks since ViT [@dosovitskiy2020image] applied it to image classification. ViT divides an input image into small patches and computes global attention over them. This is not suitable for tasks requiring spatially dense prediction, e.g., object detection since the computational complexity increases quadratically with the image resolution.
|
| 40 |
+
|
| 41 |
+
Swin Transformer [@liu2021swin] mitigates this issue to a great extent by incorporating operations such as patch reduction and shifted windows that support local attention. It is currently a de facto standard as a backbone network for various computer vision tasks. We employ it to extract initial visual features from the input image in our model.
|
| 42 |
+
|
| 43 |
+
We briefly summarize its structure, explaining how we extract features from the input image and send them to the components following the backbone. Given an input image of resolution $H\times W$, Swin Transformer computes and updates feature maps through multiple stages; it uses a patch merging layer after every stage (but the last) to downsample the feature maps in their spatial dimensions by a factor of 2. We apply another patch merging layer to downsample the last layer's feature map. We then collect the feature maps from all the stages, obtaining four multi-scale feature maps, i.e., $\{ V_{l}\}_{l=1}^{L_b}$ where ${L_b}=4$, whose resolutions range from $H/8 \times W/8$ to $H/64 \times W/64$. These are input to the subsequent modules, i.e., the object detector and the network for generating grid features.
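As a rough sketch of this feature-collection step (the backbone interface, channel width, and the extra downsampling layer below are illustrative assumptions, not the authors' implementation):

```python
import torch.nn as nn

class MultiScaleFeatures(nn.Module):
    """Collect {V_l} from a hierarchical backbone and add one extra downsampling."""

    def __init__(self, backbone, last_dim=768):
        super().__init__()
        self.backbone = backbone  # assumed to return one feature map per stage
        # stands in for the additional patch-merging layer applied to the last map
        self.extra_merge = nn.Conv2d(last_dim, last_dim, kernel_size=2, stride=2)

    def forward(self, image):                        # image: (B, 3, H, W)
        feats = list(self.backbone(image))           # strides 8, 16, 32
        feats.append(self.extra_merge(feats[-1]))    # adds the stride-64 map
        return feats                                 # L_b = 4 multi-scale maps {V_l}
```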
|
| 44 |
+
|
| 45 |
+
As in previous image captioning methods, ours also relies on an object detector to create region features. However, we employ a Transformer-based decoder framework, i.e., DETR [@carion2020end], instead of CNN-based detectors such as Faster R-CNN, which are widely employed by the SOTA image captioning models [@anderson2018bottom]. DETR formulates object detection as a direct set prediction problem, which frees the model from computations that are undesirable for our purpose, i.e., NMS and RoI alignment. This enables end-to-end training of the entire model from the input image to the final output, i.e., a generated caption, and also leads to a significant reduction in computational time while maintaining performance on image captioning compared with the SOTA models.
|
| 46 |
+
|
| 47 |
+
Specifically, we employ Deformable DETR [@zhu2021deformable], a variant of DETR. Deformable DETR extracts multi-scale features from an input image with its encoder part, which are fed to the decoder part. We use only the decoder part, to which we input the multi-scale features from the Swin Transformer backbone. This leads to a further reduction in computational time. We will refer to this decoder part as the "object detector'' in what follows; see Fig. [2](#fig:overview){reference-type="ref" reference="fig:overview"}.
|
| 48 |
+
|
| 49 |
+
The object detector receives two inputs: the multi-scale feature maps generated by the backbone, and $N$ learnable object queries $R_0 = \{r_i\}_{i=1}^{N}$, in which $r_i \in \mathbb{R}^d$. Before forwarding them into the object detector, we apply a linear transformation to the multi-scale feature maps, mapping them into $d$-dimensional vectors as $V_l\leftarrow W_l^r V_l$, where $\{W_l^r\}_{l=1}^{L_b}$ are learnable projection matrices.
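A minimal sketch of this input preparation, with dimension names following the text (the channel widths and the number of queries below are illustrative assumptions):

```python
import torch
import torch.nn as nn

class DetectorInputs(nn.Module):
    """Prepare the two inputs of the object detector: projected maps and queries."""

    def __init__(self, in_dims=(192, 384, 768, 768), d=512, num_queries=150):
        super().__init__()
        # one learnable 1x1 projection W_l^r per feature level
        self.proj = nn.ModuleList([nn.Conv2d(c, d, kernel_size=1) for c in in_dims])
        # R_0 = {r_i}, i = 1..N, each r_i in R^d
        self.object_queries = nn.Parameter(torch.randn(num_queries, d))

    def forward(self, feats):                      # feats: list of (B, C_l, H_l, W_l)
        projected = [p(f) for p, f in zip(self.proj, feats)]
        batch = feats[0].shape[0]
        queries = self.object_queries.unsqueeze(0).expand(batch, -1, -1)  # (B, N, d)
        return projected, queries                  # both are fed to the deformable decoder
```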
|
| 50 |
+
|
| 51 |
+
Receiving these two inputs, the object detector updates the object queries through a stack of $L_r$ deformable layers, yielding $R_{L_r}\in \mathbb{R}^{N\times d}$ from the last layer; see [@zhu2021deformable] for details. We use $R_{L_r}\in \mathbb{R}^{N\times d}$ as our region features ${\cal R}$. We forward this to the caption generator.
|
| 52 |
+
|
| 53 |
+
Although we train it as a part of our entire model, we pretrain our "object detector", including the vision backbone, on object detection before the training of image captioning. For the pretraining, we follow the procedure of Deformable DETR: we place a three-layer MLP and a linear layer on top of it to predict box coordinates and class categories, respectively. We then minimize a set-based global loss that forces unique predictions via bipartite matching.
|
| 54 |
+
|
| 55 |
+
Following [@anderson2018bottom; @zhang2021vinvl], we pretrain the model (i.e., our object detector including the vision backbone) in two steps. We first train it on object detection following the training method of Deformable DETR. We then fine-tune it on a joint task of object detection and object attribute prediction, aiming to make it learn fine-grained visual semantics with the following loss: $$\begin{equation}
|
| 56 |
+
\mathcal{L}_{v}(y,\hat{y}) = \sum_{i=1}^{N}[\underbrace{-{\rm log} \hat{p}_{\hat{\sigma}(i)}(c_i) + \mathbf{1}_{c_i\neq\varnothing}
|
| 57 |
+
\mathcal{L}_{box} (b_{i}, \hat{b}_{\hat{\sigma}(i)})}_{\rm object \ detection} \underbrace{-{\rm log} \hat{p}_{\hat{\sigma}(i)}(a_i)}_{\rm attribute\ prediction}],
|
| 58 |
+
\end{equation}$$ where $\hat{p}_{\hat{\sigma}(i)}(a_i)$ and $\hat{p}_{\hat{\sigma}(i)}(c_i)$ are the attribute and class probabilities, and $\mathcal{L}_{box}(b_{i},\hat{b}_{\hat{\sigma}(i)})$ is the loss for normalized bounding box regression for object $i$ [@zhu2021deformable].
|
| 59 |
+
|
| 60 |
+
This network receives the last of the multi-scale feature maps from the Swin Transformer backbone, i.e., $V_{L_b}\in \mathbb{R}^{M \times d_{L_b}}$, where $M = H/64 \times W/64$. As with the input to the object detector, we apply a linear transformation with a learnable matrix $W^g\in \mathbb{R}^{d\times d_{L_b}}$ to $V_{L_b}$, obtaining $G_0= W^g V_{L_b}$. We employ the standard self-attention Transformer having $L_g$ layers. This network updates $G_0$ through these layers, yielding our grid features $\mathcal{G}$, represented as an $M\times d$ matrix. We intend to extract contextual information hidden in the input image by modeling the spatial interaction between the grid features.
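A sketch of this grid-feature branch, using a plain PyTorch Transformer encoder as the self-attention stack (layer and head counts are illustrative assumptions):

```python
import torch.nn as nn

class GridFeatureNet(nn.Module):
    """Map the last backbone feature map to d channels and refine it with self-attention."""

    def __init__(self, d_backbone=768, d=512, num_layers=3, num_heads=8):
        super().__init__()
        self.proj = nn.Linear(d_backbone, d)     # plays the role of W^g
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)  # L_g layers

    def forward(self, v_last):          # v_last: (B, M, d_backbone), M = H/64 * W/64
        g0 = self.proj(v_last)          # G_0: (B, M, d)
        return self.encoder(g0)         # grid features G: (B, M, d)
```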
|
| 61 |
+
|
| 62 |
+
The caption generator receives the two types of visual features, the region features ${\cal R} \in \mathbb{R}^{N \times d}$ and the grid features ${\cal G} \in \mathbb{R}^{M \times d}$, as inputs. Aside from this, we follow the basic design of previous studies [@vaswani2017attention; @herdade2019image] that is based on the Transformer architecture. It generates a caption sentence in an autoregressive manner; receiving the sequence of predicted words (more precisely, their embeddings) up to time $t-1$, it predicts the next word at time $t$. We employ the sinusoidal positional embedding of time step $t$ [@vaswani2017attention]; we add it to the word embedding to obtain the input $x^t_0 \in \mathbb{R}^d$ at $t$.
|
| 63 |
+
|
| 64 |
+
The caption generator consists of a stack of $L_c$ identical layers. The initial layer receives the sequence of predicted words and the output from the last layer is input to a linear layer whose output dimension equals the vocabulary size to predict the next word.
|
| 65 |
+
|
| 66 |
+
Each Transformer layer has, in this order, a sub-layer of masked self-attention over the sentence words and one or more sub-layers of cross-attention between them and the visual features, followed by a feedforward network (FFN) sub-layer. The masked self-attention sub-layer at the $l$-th layer receives an input sequence $\{{x^{l-1}_i}\}_{i=0}^{t}$ at time step $t$, and computes and applies self-attention over the sequence to update the tokens, using an attention mask that prevents interaction with future words during training.
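For illustration, such a mask can be built as an upper-triangular boolean matrix that blocks attention to future positions (a standard construction, shown here only for clarity):

```python
import torch

def causal_mask(t: int) -> torch.Tensor:
    """Boolean mask of shape (t, t); True marks positions a word may not attend to."""
    return torch.triu(torch.ones(t, t, dtype=torch.bool), diagonal=1)

# usage with nn.MultiheadAttention (batch_first=True):
# attn_out, _ = self_attn(x, x, x, attn_mask=causal_mask(x.size(1)))
```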
|
| 67 |
+
|
| 68 |
+
The cross-attention sub-layer in the layer $l$, located after the self-attention sub-layer, fuses its output with the dual visual features by cross-attention between them, yielding ${\cal A}_l$. We consider the three design choices shown in Fig. [3](#fig:cross_attn){reference-type="ref" reference="fig:cross_attn"} and described below. We examine their performance through experiments.
|
| 69 |
+
|
| 70 |
+
<figure id="fig:cross_attn" data-latex-placement="t">
|
| 71 |
+
<div class="center">
|
| 72 |
+
<embed src="figures/cross_attn.pdf" style="width:100.0%" />
|
| 73 |
+
</div>
|
| 74 |
+
<figcaption>Three designs of cross-attention mechanism to use dual visual features</figcaption>
|
| 75 |
+
</figure>
|
| 76 |
+
|
| 77 |
+
We describe the three designs of cross-attention between the word features and the dual visual features (i.e., the region features $\mathcal{R}$ and the grid features $\mathcal{G}$) below.
|
| 78 |
+
|
| 79 |
+
The simplest approach is to concatenate the two visual features and use the resultant features as keys and values in the standard multi-head attention sub-layer, where the words serve as queries; see Fig. [3](#fig:cross_attn){reference-type="ref" reference="fig:cross_attn"}(a).
|
| 80 |
+
|
| 81 |
+
Another approach is to perform the cross-attention computation separately for the two visual features. The corresponding design is to place two independent multi-head attention sub-layers in a sequential fashion and use one for the grid features and the other for the region features (or in the opposite order); see Fig. [3](#fig:cross_attn){reference-type="ref" reference="fig:cross_attn"}(b). Note that their order could affect the performance.
|
| 82 |
+
|
| 83 |
+
The third approach is to perform the multi-head attention computation on the two visual features in parallel. To do so, we use two multi-head attention mechanisms with independent learnable parameters. The detailed design is as follows. Let $X_{l-1}=\{x^{l-1}_i\}$ be the word features input to the meta-layer $l$ containing this cross-attention sub-layer. As shown in Fig. [2](#fig:overview){reference-type="ref" reference="fig:overview"}, they are first input to the self-attention sub-layer, converted into $X_l'=\{x_i'\}$ (layer index $l$ omitted for brevity), and then input to this cross-attention sub-layer. In this sub-layer, multi-head attention (MHA) is computed with $\{x_i'\}$ as queries and the region features $\mathcal{R}$ as keys and values, yielding attended features $\{a^r_i\}$. The same computation is performed in parallel with the grid features $\mathcal{G}$ as keys and values, yielding $\{a^g_i\}$. Next, we concatenate them with $x_i'$ as $[a^r_i;x_i']$ and $[a^g_i;x_i']$, projecting them back to $d$-dimensional vectors using learnable affine projections. Normalizing them with the sigmoid into gating weights $\{c_i^r\}$ and $\{c_i^g\}$, respectively, we have $$\begin{align}
|
| 84 |
+
c_i^g &= \mathrm{sigmoid}(W^g[{a^{g}_{i}}; x^{\prime}_{i}] + b^g), \\
|
| 85 |
+
c_i^r &= \mathrm{sigmoid}(W^r[{a^{r}_{i}}; x^{\prime}_{i}] + b^r).
|
| 86 |
+
\end{align}$$ We then multiply them with $\{a^r_i\}$ and $\{a^g_i\}$, add the resultant vectors to $\{x_i'\}$, and finally feed to layer normalization, obtaining ${\cal A}_l=\{a^{(l)}_i\}$ as follows: $$\begin{align}
|
| 87 |
+
a^{(l)}_{i} &= \mathrm{LN}(c^{g}_i \otimes a^{g}_i + c^{r}_i \otimes a^{r}_i + x^{\prime}_{i}).
|
| 88 |
+
|
| 89 |
+
\label{eq:agg}
|
| 90 |
+
\end{align}$$
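For clarity, this parallel design can be sketched in PyTorch as follows (an illustrative re-implementation of the equations above; head counts and dimensions are assumptions):

```python
import torch
import torch.nn as nn

class ParallelCrossAttention(nn.Module):
    """Attend to region and grid features in parallel and fuse them with sigmoid gates."""

    def __init__(self, d=512, num_heads=8):
        super().__init__()
        self.attn_region = nn.MultiheadAttention(d, num_heads, batch_first=True)
        self.attn_grid = nn.MultiheadAttention(d, num_heads, batch_first=True)
        self.gate_region = nn.Linear(2 * d, d)   # affine map (W^r, b^r)
        self.gate_grid = nn.Linear(2 * d, d)     # affine map (W^g, b^g)
        self.norm = nn.LayerNorm(d)

    def forward(self, x, regions, grids):
        # x: (B, T, d) word features after masked self-attention (the x'_i)
        a_r, _ = self.attn_region(x, regions, regions)   # words as queries, R as keys/values
        a_g, _ = self.attn_grid(x, grids, grids)         # words as queries, G as keys/values
        c_r = torch.sigmoid(self.gate_region(torch.cat([a_r, x], dim=-1)))
        c_g = torch.sigmoid(self.gate_grid(torch.cat([a_g, x], dim=-1)))
        return self.norm(c_g * a_g + c_r * a_r + x)      # Eq. (agg)
```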
|
| 91 |
+
|
| 92 |
+
Following a standard practice of image captioning studies, we pre-train our model with a cross-entropy (XE) loss and finetune it using CIDEr-D optimization with the self-critical sequence training strategy [@rennie2017self]. Specifically, the model is first trained to predict the next word $x^*_{t}$ at $t=1,\ldots,T$, given the ground-truth sentence $x^*_{1:T}$. This is equivalent to minimizing the following XE loss with respect to the model's parameters $\theta$: $$\begin{equation}
|
| 93 |
+
\mathrm {\cal L}_{XE}(\theta)=-\sum_{t=1}^{T} \log \left(p_{\theta}\left(x_{t}^{*} \mid x^{*}_{0:t-1}\right)\right).
|
| 94 |
+
\end{equation}$$ We then finetune the model with the CIDEr-D optimization, where we use the CIDEr score as the reward and the mean of the rewards as the reward baseline, following [@cornia2020meshed]. The loss for self-critical sequence training is given by $$\begin{equation}
|
| 95 |
+
{\cal L}_{RL}(\theta) = -\frac{1}{k}\sum_{i=1}^k (r(\mathbf{w}^i)-b) \log p(\mathbf{w}^i),
|
| 96 |
+
\end{equation}$$ where $\mathbf{w}^i$ is the $i$-th sentence in the beam; $r(\cdot)$ is the reward function; $b$ is the reward baseline; and $k$ is the number of samples in the batch.
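A minimal sketch of this self-critical update with the mean-of-rewards baseline (the CIDEr-D scorer and the sampling of the $k$ sentences are assumed to be computed elsewhere):

```python
import torch

def scst_loss(log_probs: torch.Tensor, rewards: torch.Tensor) -> torch.Tensor:
    """Self-critical sequence training loss with the mean reward as baseline.

    log_probs: (k,) summed log-probabilities log p(w^i) of the k sampled/beam sentences
    rewards:   (k,) CIDEr-D scores r(w^i) of those sentences
    """
    baseline = rewards.mean()                        # b = mean of the rewards
    return -((rewards - baseline) * log_probs).mean()
```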
|
2208.07639/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2022-03-07T11:03:16.611Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/99.0.4844.51 Safari/537.36" etag="7C8bEhCbrfLdbF9angux" version="16.6.6" type="device"><diagram id="ui6B7TLb_S_M39sjcyPl" name="Page-1">7V1bk5u4Ev41fkwKcedxPJOZTe1uTs7MnpPsU4oYYrPLGC9mbvn1KwzYoJYxBoEElmuqxggsm+5Pre5W62OmXT++3sXuZvV75PnhTFW815l2M1NVVTFN/C9tectaEDJR1rKMAy9vOzQ8BD/9vFHJW58Cz99WLkyiKEyCTbVxEa3X/iKptLlxHL1UL/sRhdVv3bhLHzQ8LNwQtn4JvGSVtdqqdWj/xQ+Wq2R/f0525tEtLs672K5cL3rJmnY3p32YaddxFCXZu8fXaz9MpVfIJZPA7ZGz+x8W++ukyQfU+bcfHyPvy+tfmnf7Gnx7+Hz7/E5V7KyfZzd8ym85/7nJWyGDOHpae37ajTLT5i+rIPEfNu4iPfuC1Y7bVsljiI8QfvsjWie5GhG+x/k2iaO//esojOJdb5qye6VXBmFYtK+jNf7EfBm7XoDvh2j23O1q9/3pF8D7zkXx7MeJ/1pqyuVw50ePfhK/4Uvys3qBrhyVmpUfvxxUjBQtb1yV9WvnjW6Oq+W+84Po8Ztc+mdpwpGaUE0NasI2B1aEdYGKUO2qIpBKGRKIqgmrL02gBorw195Vaubx0SJ0t9tgUZV9VVH+a5B8zc+k7/9M298b+dHNa+mym7f8INNWYfyzq8sa1esU4HuV6QWKvyRegyLcoi32QzcJnquTEk3g+Td8jgL8S/batYyqdnXDwfdR6WQbPcULP/9ceRohunI0oiuTUH/ixks/AR3tELC/8fag0DQKKMwwybVSQYf5z1NUnHi33enrCl+A7M3r4SR+t0z/f3z4nDoXvrtY+XHRJf6JWa/ZNQB+eHQlVcBVB3c+Winj2g2D5TpFLYYM/j5tno7VADseV/mJx8Dz0q+hWpcqrEP3ux/O3cXfy107+eVNwXqGtXCqIFBN2gRKwbPWl63Q9J5gcR09bmIfG5ZoLeHRdFZXq/CwbAo87CHhYdLg0W0qWePf9bV8UJpM0sPDbLI7qk4nhQ48/4f7tIPSFKYZTSfMgkLos+kks3fIC2/EAfNVz9MMQkZ3g6Jjg7KzJIRN2d7fzaUJOWuGQbYBTAgNyb1ZEIRo8RkjQNxffZF4OG9KUSAe9EHxUPwgOaX0OaUgzWA0pyCNDHGHn1T2uJeTCg8jAiHAfVahhLPX0fp5puL+FC37h4D+zko/raI4+ImF6Yb59b1IlkwLKNDlVymS3ZvRHkTbILXdf/qom1Eft/F2CDc+TTvpLa23gk721bf11hpk6CWgevUGFEoiszWiHO6I0tn7kBJR5yEKkRNXB0Sh0/aud0TRPEyJqCERpZ6eqRojSj1t73pHlAoQdeMv9j4qctTsDcSZcF4qWLvi76Tq0P+/v766YyzLI5HWTNVujA/2jU7EerjdM3zb00FgWFms7kFBJNwFUBBcOajAX7VHA3/oQwsgXphmv2D8H4qOBNKQOaEBQLr8AogXFrlc8gBAApoomEcqD4B0aXks+AcBigDShUmVi8a/eBaq8MkmgH8QTgkgXRiwjxX/R0pVmShOPMNlwLgYKE1mWvrNtJzO4DZfHT6dB+w702LQSo46IYrUKE3romvZsffL9MdKjhvXLhN42dcWDKZhGEtJDeOwjLTubRWMe0KcNcx8z8IkNIzdfFaDGCGD9yhmXlgwCR1j+8pMx6rGW8fM1/onoWNNYadjjdyFObSOgYavQVA7gtIrUCtfFBVwC41gYDTWiJaFeshCdu7qqas4HFEyh9w6zF+wcKXwgnGvC2eWYHR9PcpVLF04iwKj2gsGvmGIph4Yko4T+IZwJgUGghcMfFMTTT0whrseZd2aKZxJKXwvifz0rCWcK4rqgtsxQd8SzqggGN4CMU503S9L7dTIJg8ts8RNzXV5pCRIVssidgUabbeY2uQaok501PsOpYuFpiBQMsiZwGm9Ig3iCNhV76X/Ek6c4USG+u3hBLzz4eHEvLxBwuk8OJkUWp2WcAIu7/BwYl5LIdkBO8OC3GEF2Dr6ZgeUJVQN6p5ac6iAAio0sHphwm0SbBn8o1r2O9GnMHBAZVp79iGyp6GHDmWjqVQwLCZrr2BQlja0gmVtaZNKsvYKBjVpQytYlqQ1KSPrQBBHFqQNrGCb+RwsA+QzHWV2ATLYbD98hGxLLhjexonkq+oEqNPJwL4BpUpA8fZXT6dxW9NVcQCUXLDiDSiSraoLoMjt1RwAdYKtZ0SVD2QqlX+OyJZcPTVcVQLoZ0pMPcKVVNmSqKeOqUoABdXz9IwL/sLV+NuSp6eOp4q/ggrETICmBwQm/IU7HZYeJgoSzjw5MDUxVvSDKIq/cCVJbR0ZlQAKkpWa3PMo7Eo1YXnI4HkU9uRmTdYaDwA6HP5ZgQwdQEfBcXIjArLE2opgnHq6buPqcaIjY+DH9PLhMuMBIE0oAJkkf25bAJEdDQ4gLiRLJQDp2KUsQeid8l45tBzB0e7osx8H+O7Tp6N1BpdYk5xJ8i22BpfDF1xIqSODUFOavpFECyQZhADEqApMQ9/dfGIsyppgQf+Abg0YLBza++d8EEELdawPY0I4yfoggmzlkwV5+8m0Zyw5pVe7eQmM4/pue5+lYM596paU3KYowGincF2M05KSNCIiyFaWKPK2pDQO9e6WFIxjvpaUQtoxdUtK7tAVYbTXUU+OqTaLZKURQbZyCYB3doT29NfulhSMY86WlP22+06YKayp5/9wn8JkIljaZ9a6ZtpAfmj4x5og2iMRzJ2qUrVUoGP+8xQVJ95tdwq7SucGffO601pxHr9bpv/vr74UfeEfl3WXnQGgxKY9qcKwCqD8aUXlGTpvcsNguU6xjMGSZnXn6UQRLNzwKj/xGHhe+jXUqakK9iM+QvHlPSQIVQIAxfN8S0DWKUDWzL5mKpP5TCXo0pBpCGVTLFZPWSE7Gjp7r0I3slzrM6aQnCzyN03ejqQKR+fHqYdKZCm/AFqQXFmcrSW5FdhC2HFSdKV4qe1sJ9jYcKLfvk1p/ZaRMZlSIFkBRjH0fCdvS8HGEAHUIJfreFdHkvvgmVlTRyhrCtfrxmtNHdGGcWFXLsqakvuMBFCDXLLjbU1JEghW1pTcd8XXmmpqrTUd0Y4eUq78BzGihPldR7FM41NGFOmfkBptzq2hUsbmoOMRUZMSMpE/UCIfIqCwffwy+UIEVlMwFCQFdzq6jXaWgkwpA5PTt5mw2Ocu5dRC2SRIVvO1fxAz8fybdI3YGBgztKoCRlPL9v5uLueW+rkFQEA3wNxCA7PWm4NqHa8a2G7cdSNE2BgR0N
OYfyqhIetLoqGuZMBQYbhCdTT6A0MD6uOzwr6y1Gyg1zKBwDJ2vQDLkhA7AzHbRX1AMeYcKGZEiwp7E3MTAuJhxUyk2vaXsxC/JZz4G2TWxodyWzgxw7zShFHuCCf+BpyvXMV/u3uxEb+jCCd+6GnPg2QrqO9T1q3Kyu5b76tKMWjV+wZUSn/ZWAq5HlBI6v1tcpHcuo9BmP76X/zw2U+Fe5ZsVGURrdf+InG/F70r9TAmypH3T0YoSWwfQpRFRj4Ald0q1PHw4PvBn88agkODjv9++zaz5ld/zKybmfHByNpKEUFAfhy3fZ94iHAClafHVgkH5pAwKIqZu8LAlDBgAYPCxELrMCwqji9UnIUKS6KCISoQZdVzWFjQlitawMKWsGAIC5XCnjgoLAxa2N0CFo6EBUNYaJSk46CwQE6D1U2eTvk+bCnCGIvilCuDisxkNPHqciidhuV4LOyex64rLjSJC4a44O6Q7ckzu+JClbhgiAve4duelLcrLJCEBQtYcHPMuTyy+thGcTSrLzY66umNnoS6ddEq2ZE5cCmaweWR2DwAJDYJdWsAkR0NDqAG6zLTAJBYxY0kP3R7AJG72AYGkKleCIAE5zppDSCyo6EB5EyVqJz/HihH8pQLoISp0pQLIFohdudc8kZdkqV8t2ePPUv5iW77nqEkSTn/sY6KdZ7R21GSpFwE2UrGA96GlEbzwp6knK8hRcXwvCBLSpIbizDa60jKfx2NHSUpykWQrKQo551XoxG8sKco52xHEfvAR25AP0lR3jrFRmaGODCbUGKYjog5L017HqqOIuNkmlYtwgRBMGTBPP97pfRC7RBlk9A0lEZ4wvp130qXbdILtmf8/Pyptwd4Zj0yBatKCfW6uSNCbBMlNkPvVcZro5zaJOqbzjZRSzjxw2BoAii3hROz6Ltx+9wMLYD4G4REk90MLYD44erqpW2GJlRCyRJoFJX0x9VNeXr02UxQKo33579P7joJfmJ3NVrvsjh5QWlcXPABO9nRJhXOIvKC9XJcnFE9gMMifTIKOBBtgwkLIqBPt/+7if/z+Xbxc/FHEP1/43791XzXwCcYQZgmaPCFA/pK8KVWO2wefJGUF0Zdtz2FYrrDNhSjwvH47gVpqviaKhqlBytThQ/jKNXXAUr4Lle/R56fXvEv</diagram></mxfile>
|
2208.07639/main_diagram/main_diagram.pdf
ADDED
|
Binary file (52.1 kB).
|
|
|
2208.07639/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,52 @@
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
The image signal processing (ISP) pipeline is receiving increasing attention from the research community, as mobile devices are equipped with powerful hardware which can be utilized to process more sophisticated operations to boost performance [@ignatov2020replacing]. A typical ISP pipeline includes several local and global operations, such as white balance, demosaicing, color correction, gamma correction, denoising, and tone mapping [@ramanath2005color]. Since each of these operations is a research topic on its own, they are often separately optimized for a given ISP pipeline, which can be sub-optimal.
|
| 4 |
+
|
| 5 |
+
The deep learning-based approach has proven to be effective in various computer vision and image processing tasks, and consequently, many attempts have been made to replace conventional ISPs with convolutional neural networks (CNNs)[@ignatov2020replacing; @liang2021cameranet; @schwartz2018deepisp; @xing2021invertible]. While earlier learning-based works only dealt with the ISP components separately, such as demosaicing [@ye2015color] and denoising[@kim2019grdn; @zhang2018ffdnet; @zamir2021multi], recent studies have paid attention to the design of a unified CNN that performs all ISP functionalities, which is referred to as an ISP-Net. For example, Schwartz et al.[@schwartz2018deepisp] proposed a two-stage ISP-Net for low-level and high-level operations and showed that sharing features between two stages leads to a better result. In [@liang2021cameranet], correlated ISP components are categorized into two groups and independently trained, followed by joint fine-tuning.
|
| 6 |
+
|
| 7 |
+
However, most previous ISP-Nets did not consider that sRGB images rendered from RAW are, in practice, followed by lossy compression, which may substantially alter the image quality. Although some studies [@uhm2021image; @xing2021invertible] have proposed to integrate a JPEG simulation as a part of the model to take the compression artifacts into account, they are limited to the simulation, and the standard JPEG is still used to produce a bitstream.
|
| 8 |
+
|
| 9 |
+
The objective of image compression is to reduce the bits required for storing and transmitting an image without largely affecting the perceived quality. Image compression is typically achieved by transforming the image, quantizing the transformed coefficients, and compressing the resultant representation using entropy coding [@goyal2001theoretical]. In particular, the quantization introduces an inevitable error, where coarse quantization leads to bitrate reduction at the expense of distortion increase, giving rise to the rate-distortion trade-off. Under the principle of transform coding [@goyal2001theoretical], many codecs have been developed to improve rate-distortion performance, including JPEG2000 [@taubman2012jpeg2000] and versatile video coding (VVC) [@bross2021overview]. Most of the components in these existing codecs, however, are designed by human experts much like conventional ISP components, which has prompted researchers to design CNNs that perform image compression, referred to as Comp-Nets [@DBLP:conf/iclr/BalleLS17; @DBLP:conf/iclr/BalleMSHJ18; @NEURIPS2018_53edebc5; @cheng2020learned]. Unlike conventional image compression techniques, a Comp-Net is inherently differentiable and performs significantly better than the commonly used JPEG.
|
| 10 |
+
|
| 11 |
+
The advances in deep learning-based image processing and image compression motivate us to propose a fully end-to-end camera ISP network called RAWtoBit network (RBN). Our RBN takes RAW as an input, as other ISP-Nets [@schwartz2018deepisp; @liang2021cameranet; @uhm2021image; @xing2021invertible] do, but outputs a bitstream, which can be decoded to reconstruct a high-quality sRGB image. To this end, we investigate two structures: cascaded and unified. The cascaded structure refers to a simple concatenation of ISP-Net and Comp-Net. However, the performance of Comp-Net can be upper-bounded by ISP-Net, resulting in sub-optimal rate-distortion performance. The unified structure refers to a single network that simultaneously performs the ISP operations and image compression. Although the unified structure can be easily implemented by training a Comp-Net with RAW-sRGB pairs with a slight modification of the network architecture, such a structure can also lead to sub-optimal rate-distortion performance since Comp-Net is not originally designed to perform complicated ISP operations. Observing that these two naïve approaches suffer from poor rate-distortion performance, we propose RBN to handle both tasks effectively. Furthermore, we present two teacher networks, namely the ISP teacher and the compression teacher, to guide RBN to reach a better rate-distortion trade-off. Experimental results demonstrate that our proposed RBN performs noticeably better than the alternative approaches. Our contributions can be summarized as follows:
|
| 12 |
+
|
| 13 |
+
- To the best of our knowledge, this work is the first attempt to integrate camera ISP and image compression in a single learning framework. Unlike previous studies, our RBN takes RAW data as an input and produces a bitstream as an output.
|
| 14 |
+
|
| 15 |
+
- We propose a method that distills the knowledge from two teacher models, namely the ISP teacher and the compression teacher, to make RBN effectively perform both ISP and compression tasks.
|
| 16 |
+
|
| 17 |
+
- Extensive experimental results demonstrate that our RBN with knowledge distillation significantly improves rate-distortion performance over the cascaded or unified structure.
|
| 18 |
+
|
| 19 |
+
# Method
|
| 20 |
+
|
| 21 |
+
The proposed work is the first attempt to integrate ISP-Net and Comp-Net to the best of our knowledge. We thus first present two straightforward configurations of the ISP-Net and Comp-Net integration, namely cascaded structure (Section [3.1](#sec:prop-1){reference-type="ref" reference="sec:prop-1"}) and unified structure (Section [3.2](#sec:prop-2){reference-type="ref" reference="sec:prop-2"}). We then introduce our RBN, which is also based on the unified structure but specially designed and trained with our KD scheme (Section [3.3](#sec:prop-3){reference-type="ref" reference="sec:prop-3"}).
|
| 22 |
+
|
| 23 |
+
A naïve approach to combining ISP and lossy compression is to cascade ISP-Net and Comp-Net, as shown in Fig. [1](#fig:framework){reference-type="ref" reference="fig:framework"}(a). An ISP-Net takes a RAW image ${x_r}\in\mathbb{R}^{{4}\times{H/2}\times{W/2}}$ as an input and produces an sRGB image ${\hat{x}_s}\in\mathbb{R}^{{3}\times{H}\times{W}}$, while Comp-Net takes an sRGB image as an input and generates a bitstream which can reconstruct an sRGB image. Both ISP-Net and Comp-Net are separately trained and cascaded. This configuration is not limited to specific ISP-Net and Comp-Net architectures, and in our study, we use LiteISPNet [@zhang2021learning] and the context+hyperprior model [@NEURIPS2018_53edebc5] for ISP-Net and Comp-Net, respectively. In addition, as in [@uhm2021image], one can try fine-tuning ISP-Net in conjunction with Comp-Net to take lossy compression into consideration. We also investigate the effectiveness of this fine-tuning in Section [4](#sec:exp){reference-type="ref" reference="sec:exp"}.
|
| 24 |
+
|
| 25 |
+
Another way to achieve the same objective is to directly train Comp-Net with RAW-sRGB image pairs, as shown in Fig. [1](#fig:framework){reference-type="ref" reference="fig:framework"}(b). Note that conventional Comp-Nets input and output sRGB images, as shown in Fig. [1](#fig:framework){reference-type="ref" reference="fig:framework"}(a). However, our target network configuration requires the network to take a RAW image ${x_r}\in\mathbb{R}^{{4}\times{H/2}\times{W/2}}$ as an input and produce a bitstream which can reconstruct an sRGB image ${\hat{x}_s}\in\mathbb{R}^{{3}\times{H}\times{W}}$. Consequently, we modify the context+hyperprior model [@NEURIPS2018_53edebc5] to handle a four-channel input and add an additional inverse generalized divisive normalization (IGDN) and a transposed convolutional layer in the decoder to produce the sRGB image with the target size. The network is trained using the rate-distortion loss [@NEURIPS2018_53edebc5] while measuring the difference between ground-truth and decoded sRGB images.
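A simplified sketch of these modifications (the GDN/IGDN layers, entropy model, and exact layer counts of the context+hyperprior codec are omitted or replaced by placeholders; only the input/output shape changes are illustrated):

```python
import torch
import torch.nn as nn

class UnifiedRawToSRGB(nn.Module):
    """Simplified analysis/synthesis transforms for the unified structure.

    Entropy coding, GDN/IGDN, and the autoregressive context model are omitted;
    this only shows the shape changes: 4-channel packed RAW in, full-resolution sRGB out.
    """

    def __init__(self, n=192):
        super().__init__()
        self.encoder = nn.Sequential(                    # input: (B, 4, H/2, W/2)
            nn.Conv2d(4, n, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(n, n, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(n, n, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(n, n, 5, stride=2, padding=2),     # latent: (B, n, H/32, W/32)
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(n, n, 5, stride=2, padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(n, n, 5, stride=2, padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(n, n, 5, stride=2, padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(n, n, 5, stride=2, padding=2, output_padding=1), nn.ReLU(),
            # extra upsampling stage so the output reaches H x W instead of H/2 x W/2
            nn.ConvTranspose2d(n, 3, 5, stride=2, padding=2, output_padding=1),
        )

    def forward(self, raw):                              # raw: (B, 4, H/2, W/2)
        y = self.encoder(raw)
        y_hat = y + torch.rand_like(y) - 0.5             # additive-noise proxy for quantization
        return self.decoder(y_hat)                       # sRGB: (B, 3, H, W)
```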
|
| 26 |
+
|
| 27 |
+
Since conventional Comp-Nets, including the context+hyperprior model [@NEURIPS2018_53edebc5], are not designed to handle complicated ISP functions, it is expected that this unified model cannot properly perform both ISP and compression functionalities. A more dedicated architecture design and training methodology are required to realize an effective end-to-end camera ISP, which motivates the proposed RBN.
|
| 28 |
+
|
| 29 |
+
<figure id="fig:architecture" data-latex-placement="htb">
|
| 30 |
+
<img src="Figures/Net3.png" />
|
| 31 |
+
<figcaption>Network architecture of RBN, consisting of (de)convolutional layers with the specified number of features and stride, RCAG <span class="citation" data-cites="zhang2021learning"></span>, GDN, and IGDN. The applied context model for rate estimation <span class="citation" data-cites="NEURIPS2018_53edebc5"></span> is omitted for simplicity.</figcaption>
|
| 32 |
+
</figure>
|
| 33 |
+
|
| 34 |
+
We now introduce our proposed RBN, which has a more appropriate architecture for the integration of ISP-Net and Comp-Net with a novel KD scheme, as depicted in Fig. [1](#fig:framework){reference-type="ref" reference="fig:framework"}(c). While the unified structure described in Section [3.2](#sec:prop-2){reference-type="ref" reference="sec:prop-2"} does achieve the main objective of combining ISP-Net and Comp-Net into a single network, the performance is expected to be unsatisfactory in terms of the rate-distortion trade-off. This is because Comp-Net cannot fully handle the transformation of the RAW image into latent representation for compression and necessary ISP operations at the same time. Hence, we design RBN to be capable of performing both tasks with guidance from two teacher networks, namely the ISP teacher and the compression teacher.
|
| 35 |
+
|
| 36 |
+
Fig. [2](#fig:architecture){reference-type="ref" reference="fig:architecture"} illustrates the detailed network architecture of RBN. We design RBN to have a heterogeneous encoder and decoder. The encoder of RBN follows the general structure in image compression with a series of strided convolution and generalized divisive normalization (GDN) [@NEURIPS2018_53edebc5]. Specifically, we modify the encoder architecture of one of the representative Comp-Nets [@NEURIPS2018_53edebc5] that sets the number of channels and kernel size as 192 and 5, respectively, to compress a three-channel sRGB image. Since our RBN takes a RAW image that is packed into four channels, the receptive field can grow uncontrollably quickly if a large kernel size is used. Hence, we use the kernel size of 3 while increasing the number of channels to 256. The last convolutional layer produces latent representation with 192 channels. The decoder architecture is modified from LiteISPNet [@zhang2021learning], which is one of the state-of-the-art ISP-Nets. In particular, we replace inverse wavelet transform with transposed convolution and use two residual channel attention blocks in the residual channel attention group (RCAG). Note that no skip connection exists between the encoder and decoder since the decoded latent vector alone should be capable of reconstructing an sRGB image. To perform entropy coding and entropy model estimation, we leverage the context+hyperprior model [@NEURIPS2018_53edebc5], which estimates the mean and scale of the Gaussian entropy model using the spatially adjacent decoded latent elements and hyperprior.
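A rough sketch of the encoder side described above (ReLU stands in for GDN here, e.g., a GDN implementation such as CompressAI's could be substituted; the number of downsampling stages is an assumption):

```python
import torch.nn as nn

def rbn_encoder(mid=256, latent=192):
    """Compression-style encoder for 4-channel packed RAW using 3x3 strided convs.

    nn.ReLU is a placeholder for GDN; the last layer produces the 192-channel latent.
    """
    return nn.Sequential(
        nn.Conv2d(4, mid, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(mid, mid, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(mid, mid, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(mid, latent, 3, stride=2, padding=1),   # latent representation
    )
```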
|
| 37 |
+
|
| 38 |
+
Although RBN can be trained in an end-to-end manner, it may still suffer from sub-optimal rate-distortion performance since joint learning of compression and ISP is challenging. To overcome this issue, on the one hand, we guide the encoder of RBN to focus more on image compression using the compression teacher. As shown in Fig. [2](#fig:architecture){reference-type="ref" reference="fig:architecture"}, the compression teacher network takes a RAW image ${x_r}\in\mathbb{R}^{{4}\times{H/2}\times{W/2}}$ and produces a bitstream that can reconstruct a RAW image ${\hat{x}_r}\in\mathbb{R}^{{4}\times{H/2}\times{W/2}}$. Because the encoder of the compression teacher is trained to find compact representation for efficient compression, we consider that the knowledge from the encoder of the compression teacher can be distilled to the encoder of RBN. On the other hand, we guide the decoder of RBN to focus more on reconstructing the sRGB image from the latent representation by using the ISP teacher. As shown in Fig. [2](#fig:architecture){reference-type="ref" reference="fig:architecture"}, the ISP teacher is designed as an sRGB autoencoder such that its decoder can best perform the sRGB image reconstruction from low-dimensional latent representation. Consequently, we consider that the knowledge from the decoder of the ISP teacher can be distilled to the decoder of RBN. Note that except for the last convolutional layer of the compression teacher network, the encoder and decoder pairs between the two teacher networks and RBN have identical structures to facilitate KD.
|
| 39 |
+
|
| 40 |
+
To perform KD, we adopt the attention transfer [@DBLP:conf/iclr/ZagoruykoK17], where the spatial attention maps evaluated from the intermediate layers of the teacher and student networks are compared. In the original work [@DBLP:conf/iclr/ZagoruykoK17], the attention map is defined as the sum of absolute values along the channel dimension of the output of the intermediate layer. We empirically found taking absolute values to be ineffective, and thus we define the attention map as the direct sum along the channel dimension: $$\begin{align}
{M_j} = \sum\limits_{i = 1}^{C_j} {{A_j}\left( {i,:,:} \right)},
\end{align}$$ where $A_j \in \mathbb{R}^{C_j\times {H_j}\times{W_j}}$ is the output of the $j$-th intermediate layer, $M_j \in \mathbb{R}^{{H_j}\times{W_j}}$ is the attention map of the $j$-th intermediate layer, and $C_j$, ${H_j}$, and $W_j$ are the corresponding channel dimension, height, and width, respectively. This modification is necessary because we apply the attention transfer to the outputs of the convolutional layers rather than the ReLU activations; taking absolute values there can discard the directional information embedded in the output tensor. The attention loss for KD, ${L_{AT}}$, is defined as the mean squared error between the normalized attention maps of the teacher and student networks: $$\begin{align}
\begin{array}{l}
{L_{AT}} = \sum\limits_{j = 1}^{{n_p}} {{\alpha _j}L_{AT}^j}, \\
L_{AT}^j = \frac{1}{{{N_j}}}\left\| {\frac{{M_j^S}}{{{{\left\| {M_j^S} \right\|}_2}}} - \frac{{M_j^T}}{{{{\left\| {M_j^T} \right\|}_2}}}} \right\|_2^2,
\end{array}
\end{align}$$ where ${\left\| \cdot \right\|_2}$ denotes the L2-norm, ${M_j^S}$ (${M_j^T}$) is the $j$-th attention map of the student (teacher) network, $n_p$ is the number of attention-map pairs, $N_j$ is the number of elements in the $j$-th attention map, and $\alpha_j$ is the weight for the $j$-th loss term. Inspired by [@passalis2020heterogeneous], we set $\alpha_j$ so that the attention loss is relatively higher than the rate-distortion loss during the early training phase, and decay it as training progresses. In this way, our RBN can initially focus on KD and progressively switch to the main objective of rate-distortion optimization. To this end, $\alpha_j$ is chosen as: $$\begin{align}
\label{eq:decay}
\alpha_j = \alpha_{0}\cdot\gamma^{k^{2}},
\end{align}$$ where $\alpha_{0}$ is the initial value and $\gamma$ is the decay factor. $\alpha_{0}$ is set to $10^6$ for KD between the two encoders and $10^5$ for KD between the two decoders, while $\gamma$ is set to $0.99999$ in both cases. In other words, $\alpha_j$ slowly decreases as the training epoch $k$ increases. The final loss function is defined as follows: $$\begin{align}
L_{total} = L_{R} + \lambda{L}_{D} + L_{AT},
\end{align}$$ where $L_R$ and $L_D$ are the rate and distortion loss terms defined in [@NEURIPS2018_53edebc5], respectively, and $\lambda$ controls the rate-distortion trade-off.
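The attention-transfer machinery above can be summarized in a few lines; the following is a hedged PyTorch sketch written from the equations, not the authors' code, and the feature shapes are arbitrary.

```python
# Channel-summed attention maps (no absolute value), L2-normalized and compared with an
# MSE whose weight decays with the squared epoch index, as in alpha_j = alpha0 * gamma^(k^2).
import torch

def attention_map(feat):                       # feat: (B, C, H, W) conv-layer output
    return feat.sum(dim=1)                     # direct sum over channels

def at_loss(feat_student, feat_teacher, eps=1e-8):
    ms = attention_map(feat_student).flatten(1)            # (B, H*W)
    mt = attention_map(feat_teacher).flatten(1)
    ms = ms / (ms.norm(dim=1, keepdim=True) + eps)          # normalize each map
    mt = mt / (mt.norm(dim=1, keepdim=True) + eps)
    return ((ms - mt) ** 2).mean()                          # MSE between normalized maps

def alpha(epoch, alpha0=1e6, gamma=0.99999):
    return alpha0 * gamma ** (epoch ** 2)                   # decaying KD weight

fs, ft = torch.randn(2, 256, 32, 32), torch.randn(2, 256, 32, 32)
kd_term = alpha(epoch=10) * at_loss(fs, ft)    # added to the rate-distortion objective
```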
|
2208.08519/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2022-07-07T15:25:44.982Z" agent="5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.92 Safari/537.36" etag="9dFjiZrn2V27r0pcc5XB" version="20.0.4"><diagram id="fJCibNc-zxWZLWeBiRDP" name="Page-1">7V1dd+I4Ev01PO4cS7Jk+TGd9PScs9NneycPM/NosADvAOYYJyHz60fGFmDkhGKDKccy/dBY/sDR1a1SlapKI3a/3H7LovX8exqrxYh68XbEHkaUEhoy/V/R8lq2SCHKhlmWxGWTd2h4TP5W1Z2m9SmJ1aZqK5vyNF3kybreOElXKzXJa21RlqUv9cum6SKuNayjmbIaHifRwm79PYnzefVXcO/Q/otKZnPzy8SrzoyjyV+zLH1aVb83omy6+5Snl5F5VnX9Zh7F6cvRj7KvI3afpWlefltu79Wi6Nt6t/38xtn9e2dqlUNu+P0//05+5P+dPmZ369R/3nzdyt/+tYdhk7+aHlGx7qDqMM3yeTpLV9Hi66H1y+6vVsVzPX10uObXNF3rRqIb/6fy/LVCO3rKU900z5eL6qzaJvkfxe0/8eroz6MzD9vqybuDV3OwyrPXo5uKwz+Pzx1u2x2Z+8q/r/ij3uy4qmmTPmUT9U5vmfEZZTOVv3Md28OraaPSpdLvo+/L1CLKk+f6e0TV+J3trztgqL9UMF4AafXc52jxpMzIFAv9ul/G+sus+PKtHLb6QcuCHdVp/Wv7K6xBUYf8ZZ7k6nEd7XrrRQuGOrzTZLG4TxdptruXxZGS04lu3+RZ+pc6OiMmUo2n+swsi+JE43F0LlCRUAcAn1WWq+37ENpdXt0gKyH1akhcHr4cCE/8igjzI7IzryWMaBgOtAPTjgJpR0mASTx6nniPUa4fpLlzO+5xJWO/iXuSjpnW1Y3cC4PYC4LrcI+ZCUFXyMfO43Sfrp5HVD/eY1vWOkaKaJSCJoxCEbBIXAcH4tWFYMBsHLwGHGhbOPgDDiUfsIGgRAzaCKyNAqA2IpxiaqPgPLu+R9uiD7S11Tq1xp5iSjRRy1PSk7IlanFsahlmuS7k/NDDRgJAiIcsXadPeftsiCYyZk0YUOb7PG4HA86wMdATvUHRwM0eqN3DOKaiMa/ptKYRATq3Bk1TmZs+NhIATeMCEr6HjQTzBjfbBfpGAvVNOalF0zdy0DfU97G5RcNByu30DcVGggFWfFxAgnPsORgDTIV7bmOi2/lscO+XFglBRwLg4O83GwS+ROIDG4rjgEhsJAD2eb/ZEHBsDCjhVj8O9uCbIxa60iVQ/Y9sWOmiVIbY3GIAs9wJTROgIwGwzPutaSS6tjcBTa6zQTJ0JADekZ6zAV0iCYAV8nj3813rAHge99S0UTl75OHLfXFHWouErEL6WwEmFA2hkLcFBmCU/FpcsEqzZevoTKM4iBrjxMe+NqX9llCQ2ChQYxgNZgnE2pDN8IKtjerWH2mif3o/LHzjvDXDwqwzmUeU9lJ11wHxuyyLXo8uWxcXbC74HT88fpx1PfPfvV5/Kd/gMPz2ffKREdk0h7z9EL3isCkDNJwZNtzEu91y2AiAKf64XiTtz7TU7tOkSphgIbvSTIvJE1BIiK3QAVa4AzMtTcOTmRbHBgYiUFfxXZHRqo9W6aoYvXG0me8AIPXOLtp/RHmustWuRZuQ+642WazUEsBQaWp361G38YZeM20flaUer6MmOUiW2g/iZx5UagvrQdeSgwHA+neAhb440UHoLGT+sABwQaYLeG710bzL/2tKdDozJl4VXv6mWDAB8G/c0M6kKAA4oPpmY1v6V+Izf8hxu4D5tNPMPzVuusp86h7zLZ2Pz3wTqYrDfHLE+4MUuIz55IbM50Dmk9KAwVr1D0CBTZtkVZRa2CTLZBFlSf7aPsvkRE0KljVUVVCRGIvG0ICx5D6/Uq0TFtSNH+IZL/8xBUkDBdsrdyI+Yd5fTfUe+HgDCgqo8kWNuwkAa0cawyieRJu+OPysBHNtRmOrN/IJ1Rset8DVG3DVG2ODo6IFVJElJmB55DdV1BRc90ReMnlqd1HsWJSK2GdDsyZR+0prOp3SSaNJFuuZIr9STJY4WTrsBAiALJ0HNblphNyUF/+a0BC7z8hylpeftlDy8FFytETXKRZhBwjjaCKPBYXAhwKSyeO27AqwnX4EWISjx1qeoWPgDzKrvNosVyBC4Wh1oFMoAnwkQEbg4ukGYkmJN8RSEI69K3mjLf2NDwAgRK7HABCOrhdMuJgjCFgJB+hFSTkgJKXHABCGjwB1Ux9bUKDnLe9rBLsOBUVP1aSEWv04LHq8PXAr79z5fTHC5oFwo/1oOMCL2PsCDRQ9CXc/8XNU61P0ogCEA6zwPiOAXgCIcLesb6tYJXrZUMLdMr8tBCQ+AoD6MD1GgKMXRiTCbQcIRy+WT4TbHhCBvhUIEQAPSI8RCNCtbSIAjo8+I4DvehIAy7jHCEh0e4BCdoXqo/NvPw01ZGhIEbkxEo7WjzxFghB8KFAzCj6ZG3aPw3k3rI+6L/j+RZ3yw1r04uj0ogDbo5+Sjtag4OjVkigFGCH9Kg5qgcDxQWCDurlA3YBX/cr9N/DUjYvLfqf0EiE+vRyNwj2FgqHHXVFXd4Y8hcJHD8Bipix495XOVZUHdIvHNxG9lfJwcZNHiyYBNk2oq7s8WsqDo0PBAOa7C1DwAH1KBdlxs99mo8C33SGbbbrAB8GwoWAE1YL/ZCVoqJn8np+ESdyNtiFbqfZczHVA1zhag+EUigBdzFHIZqr95kPQAT4MPpQSCnzPImTLx37zQXZAKA0WegkF/kIWZNPHnvMBXyhB9nvsUUzdaXxDiB7eEKAW2r/EGBxd0agze1EACot+tGT+B/kBcJv0mB+EoRPEdzTh/BQKip6EQH1njesTKNAzcuiwM9NF6gZaI57iJv9TSOGxHi7k1unF0FMOqWNlxywBx9ERMKXnuj8v7sIiiQ+OVPnwftAfJJZb+ewWsUJ0Yvlu5bNboSn4UUKOFfSz4rQIPgKueb3qCHB8Pzykol+fEcD3/EIK+fUYAYG/AAKp9NZjBIIOIOBWjTdr/Qk/LNGxGm8WAvgRIgLg6nB7fw6G73kXsHIP/d2fowOGs3A0dMSCwmzUjegcdDS5w4KC4UPhliltIcDxEXDLlLa0M/pqETPpl11YjN0vT1y0GEtGt1urCKAJHdXWcWhrFYFbHhJLtBF80QYJmXbaNvHxyzQEsHlxf20THz/bWQ4T4hIK9G0cqXS0Xpk1LwvxoXB7ZuwTfATcnkLhlyRhJstmsE0gtokEl4sTuHkJ0q21Q0u5oG+ZTSUkvdlp2yTAX16UsI2c+2ubdKAkg3Q0Bd2CwuyiggiFo1XjLSgoPhRuheFaCPjoCIQAn0mfEUAPQWQmuXawTSC2SVhded42CXBzPEKAB6zPxPLwRVtn0tei
OIgaERj7gnt+OwiEEnunJOYBlMsvqjA7vKXuw9ZxkBPVzISx5D4vHjjLojhRNeNQRdoyudYUjNQxIg0k8RsgYu1B1KEU6s5rH+ZBtU819LG0z/5F32PeYzrNl0UaddvWDddj2rO9LtPdp4mOX6jntaWYzG7OiFKxM3oJZQc/n2LvZsyMFB7SqiFCz4SWAabcqHUj9i/qCLGslFL0CAxGaVcQwJly++hTbh+1rPZnm89ReFltjiraaGdCCHCI5bVHLH2YpWl+dO6b7oT59zRWxRX/AA==</diagram></mxfile>
|
2208.08519/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,58 @@
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Ground-to-aerial/satellite image matching, also known as cross-view image matching, has shown notable performance in large-scale geolocalization [37,40,15,8,16,25,48,34]. Usually, this global localization task is formulated as image retrieval. For each ground-level query image the system retrieves the most similar geo-tagged aerial/satellite patch in the database and uses the location of the center pixel in that patch as the location of the query. In practice, global localization can also be obtained by other means in outdoor robotics, such as temporal filtering or coarse GPS/GNSS [31,42,41], but can still have errors of tens of meters [42,41,4]. In this work, we therefore follow [31,42,41] by exploiting a coarse location estimate, and zoom
|
| 4 |
+
|
| 5 |
+

|
| 6 |
+
|
| 7 |
+
Fig. 1. Example of visual cross-view metric localization. Given a ground-level image G (left), and a satellite patch S (middle) with its local area, we aim to identify the location X within S where G was taken. Our method estimates a dense probability distribution over the satellite image. The resulting (log) probability heat map is overlayed in red on top of the satellite patch (right). Compared to the regression-based baseline that tends to roughly regress to the midpoint among multiple modes, our method captures the underlying multi-modal distribution. Our final predicted location, argmax(p(X|G, S)), is closer to the ground truth.
|
| 8 |
+
|
| 9 |
+
into fine-grained metric localization within a known satellite image, i.e. to identify which image coordinates in the satellite patch correspond to the location of ground measurement. We adopt the common assumption [16,25,48,27,34] of known orientation, e.g. the center of a ground panorama points north, though we will seek to loosen this restriction in our experiments and roughly estimate the camera's heading too.
|
| 10 |
+
|
| 11 |
+
In vision, even though ground-to-ground metric localization is a well-studied task [1,13,6], so far in the cross-view setting, the only end-to-end approach that considers metric localization is the regression-based approach proposed in [48], which we will refer to here as Cross-View Regression (CVR) for simplicity. CVR tries to solve both global coarse localization and local metric localization. As a result, its metric localization regressor is built on top of global image descriptors and might miss fine-grained scene information from the satellite image.
|
| 12 |
+
|
| 13 |
+
Rather than formulating visual cross-view metric localization as a regression task, we propose to produce a dense multi-modal distribution to capture localization ambiguities, and avoid regressing to the midpoint between multiple visually similar places, see Figure 1. To capture more spatial information, we compute multiple local satellite image descriptors rather than a single global one, and train these in a locally discriminative manner. We note that dense uncertainty output for localization was shown to be successful with range-sensing modalities, like LiDAR and Radar, for localization within top-down maps [44,3,38]. However, these methods are not directly applicable to monocular vision, as they rely on highly accurate depth information which images lack.
|
| 14 |
+
|
| 15 |
+
Unlike existing literature [37,40,8,16,25,48,27,34], we address local metric localization as a standalone task in visual cross-view matching, and make the following contributions: (i) We propose to predict a dense multi-modal distribution for localization, which can represent localization ambiguity. For this, we propose a new Siamese-like network that exploits multiple local satellite descriptors and uses similarity matching in the fusion bottleneck. It combines the metric learning paradigm from image retrieval with dense probabilistic output via a UNet-style decoder, found previously only in range-based cross-view localization. (ii) We show that the produced distribution correlates with localization quality, a desirable property for outlier detection, temporal filtering, and multi-sensor fusion. Besides, we achieve a significantly lower median localization error than the state-of-the-art. (iii) We show our proposed method is robust against small perturbations of the assumed orientation, and that the model's probabilistic output can even be used to classify a ground image's orientation when it is unknown.
|
| 16 |
+
|
| 17 |
+
Our experiments use the recent large-scale VIGOR dataset for standalone cross-view metric localization to test generalization to new locations in both known and unknown areas. We also collect and stitch additional satellite data for data augmentation and metric localization on the Oxford RobotCar dataset, testing generalization to new measurements along the same route across time.1
|
| 18 |
+
|
| 19 |
+
# Method
|
| 20 |
+
|
| 21 |
+
In our work, we assume that a rough prior localization estimate is available, e.g. through GPS/GNSS, odometry, or some other robot-localization techniques [42,41,31].
|
| 22 |
+
|
| 23 |
+

|
| 24 |
+
|
| 25 |
+
Fig. 2. An overview of the proposed cross-view metric localization architecture (trainable parts in bold). Dashed skip connection is optional, see ablation study. We overlay an exemplar output heat map on top of the input satellite image for intuition.
|
| 26 |
+
|
| 27 |
+
Given a ground-level image G and a top-down L × L satellite image S that represents the local area where G was taken, our metric localization objective is to estimate the 2D image coordinates X ∈ [0, 1]<sup>2</sup> within S that correspond to the ground location of the camera of G. Moreover, we aim for a dense probabilistic output to benefit a downstream sensor fusion task, similar to [3]. Note that in practice, G and S are often provided with their heading pre-aligned [48,16], such that the center vertical line of G points in the up direction of S.
|
| 28 |
+
|
| 29 |
+
Both the baseline CVR [48] and our proposed method adapt a common cross-view image retrieval architecture [25]. This basic backbone is a Siamese-like architecture without weight-sharing. Both the ground and satellite input branches consist of a VGG [28] feature extractor. E.g. for the satellite branch, these features form an L′ × L′ × 512 volume. On this feature volume, 8 Spatial-Aware Feature Aggregation (SAFA) modules [25] are applied, each generating a 512-dimensional vector; these vectors are then concatenated. Each branch thus yields a single global 1 × 1 × 4096-dimensional descriptor. In an image retrieval task, this network would be trained through metric learning such that descriptors of matching (S, G) pairs are close together in the 4096-dimensional space.
|
| 30 |
+
|
| 31 |
+
Importantly, our proposed architecture and CVR make distinct choices on (1) the used descriptor representation for S, (2) how the descriptors are fused, (3) how the output head represents the localization result, and (4) consequently, the losses. We explain these choices for both methods in turn.
|
| 32 |
+
|
| 33 |
+
The CVR method in [48] uses a single architecture for a two-step approach. First, global localization is done through image retrieval by comparing the descriptor of G to the descriptors of all known satellite patches. After retrieving satellite patch S, metric localization is performed using the already computed descriptors of both G and S. We employ CVR here for the metric localization task only, and therefore keep its proposed architecture, but do not train it for image retrieval. Focusing on metric localization only, our CVR baseline makes the following design choices:
|
| 34 |
+
|
| 35 |
+
Feature descriptors: CVR follows the image-retrieval concept of encoding the satellite and ground image each into a single image-global 4096-dimensional descriptor. Both descriptors are fed as-is to the fusion step. Fusion: CVR simply concatenates the two feature descriptors into a single 8192-dimensional vector. Output head: A multi-layer perceptron is used on the fused descriptors which outputs the relative 2D offset ∆X between G's true location within S and the center X<sup>S</sup> = (0.5, 0.5) of the satellite patch, s.t. X = X<sup>S</sup> + ∆X. Loss: The standard L2 regression loss is used on the predicted offset and true offset.
|
| 36 |
+
|
| 37 |
+
We note that most of these choices follow from the need to use a single global descriptor for a whole satellite patch, as such descriptors are necessary for image retrieval. We argue, however, that when a localization prior is already available and global image retrieval is not necessary, this state-of-the-art architecture is sub-optimal for metric localization compared to our proposed approach.
|
| 38 |
+
|
| 39 |
+
Our proposed architecture starts with a mostly similar Siamese-like backbone. The method overview is shown in Figure 2. It differs from CVR as follows:
|
| 40 |
+
|
| 41 |
+
Feature descriptors: Instead of building one image-global descriptor to represent S, we increase the top-down spatial resolution by splitting the satellite L′ × L′ × 512 feature volume along the spatial directions into N × N sub-volumes, where N is a hyper-parameter. Now the 8 SAFA [25] modules are applied to each L′/N × L′/N × 512 sub-volume in parallel, resulting in an N × N × 4096 descriptor g(S) for the satellite branch, shown as the green vectors in Figure 2. Let g(S)<sup>ij</sup> denote the i-th row, j-th column entry of the satellite descriptor, 1 ≤ i, j ≤ N. The ground image is still encoded as a single global 4096-dimensional descriptor f(G), shown as the blue vector in Figure 2.
|
| 42 |
+
|
| 43 |
+
Fusion: To help distinguish different satellite image sub-regions, we compute the cosine similarity between f(G) and each g(S)<sup>ij</sup>, and use this similarity as a feature itself at this fusion bottleneck. This similarity computation results in an N × N × 1 matching score map M, with M<sup>ij</sup> = sim(f(G), g(S)<sup>ij</sup>). To complete our fusion step, M is concatenated to the satellite descriptors g(S) through a skip connection, shown as the upper yellow solid arrow in Figure 2. Optionally, one could also concatenate f(G) again into the fused descriptor (yellow dashed arrow), similar to CVR; we explore this in our experiments.
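The fusion bottleneck can be written in a few lines; the sketch below uses our own notation and placeholder shapes, not the released code.

```python
# Cosine similarity between the global ground descriptor f(G) and each of the N x N
# local satellite descriptors g(S)^ij, producing the matching score map M, which is then
# concatenated to the satellite descriptors as an extra feature channel.
import torch
import torch.nn.functional as F

def matching_map(f_g, g_s):
    # f_g: (B, D) ground descriptor; g_s: (B, N, N, D) local satellite descriptors
    f = F.normalize(f_g, dim=-1)[:, None, None, :]   # (B, 1, 1, D)
    g = F.normalize(g_s, dim=-1)                     # (B, N, N, D)
    return (f * g).sum(dim=-1)                       # (B, N, N) cosine similarities

B, N, D = 2, 8, 4096
f_g, g_s = torch.randn(B, D), torch.randn(B, N, N, D)
M = matching_map(f_g, g_s)                           # matching score map
fused = torch.cat([g_s, M.unsqueeze(-1)], dim=-1)    # skip-concatenation of M to g(S)
```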
|
| 44 |
+
|
| 45 |
+
Output head: Rather than treating metric localization as a regression task, we seek to generate a dense distribution over the image coordinates. Such output enables us to represent localization ambiguities and estimate the (un)certainty of our prediction. Towards this, we feed the fusion volume to a decoder which progressively up-samples the N × N matching map to higher resolutions. Akin to the UNet architecture [22], skip connections between the satellite encoder and the decoder are used to pass the fine-grained scene layout information to guide the decoding. Finally, a softmax activation function is applied on the last layer, which outputs an L × L × 1 heat map H, where each pixel $H^{u,v} = p(X \in c(u, v)|G, S)$ represents the probability of G being located within pixel area c(u, v). This heat map is useful by itself, e.g. in a sensor fusion framework. For a single frame estimate, we simply output the center image coordinates $\overline{c}[\cdot]$ of the most probable pixel, i.e. $X = \overline{c}[\operatorname{argmax}_{(u,v)}H^{u,v}]$ .
|
| 46 |
+
|
| 47 |
+
**Losses:** A benefit of our framework is that we can add losses on both the final output and the fusion bottleneck. The full loss $\mathcal{L} = \mathcal{L}_{out} + \beta \times \mathcal{L}_{sim}$ is thus a weighted sum of the output loss, $\mathcal{L}_{out}$ , and the bottleneck loss, $\mathcal{L}_{sim}$ , where $\beta$ is a hyper-parameter. We discuss each term next.
|
| 48 |
+
|
| 49 |
+
Since the output H is a discrete probability distribution that sums to one, we treat our task as a multi-class classification problem. $\mathcal{L}_{\text{out}}$ is simply a cross-entropy loss over the $L \times L$ output cells. The ground truth is one-hot encoded as a heat map with the same $L \times L$ resolution, with label 1 at the true location and 0 elsewhere. In practice, we apply Gaussian label smoothing to the one-hot encoding of the output head, and tune the smoothing $\sigma$ as part of the hyperparameter optimization.
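Under our reading of the text, the smoothed target and the output loss could be implemented as below; the smoothing value and shapes are placeholders, not the tuned hyper-parameters.

```python
# Gaussian-smoothed one-hot target over the L x L heat map, compared to the predicted
# (already softmax-normalized) heat map with a soft-label cross-entropy.
import torch

def smoothed_target(loc_uv, L, sigma=4.0):
    u0, v0 = loc_uv                                   # ground-truth pixel coordinates
    u = torch.arange(L, dtype=torch.float32)[:, None]
    v = torch.arange(L, dtype=torch.float32)[None, :]
    t = torch.exp(-((u - u0) ** 2 + (v - v0) ** 2) / (2 * sigma ** 2))
    return t / t.sum()                                # normalize to a distribution

def output_loss(heatmap, target):                     # heatmap: (L, L) softmax output
    return -(target * torch.log(heatmap + 1e-12)).sum()

L = 512
target = smoothed_target((100, 200), L)
heatmap = torch.softmax(torch.randn(L * L), dim=0).reshape(L, L)
print(output_loss(heatmap, target))
```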
|
| 50 |
+
|
| 51 |
+
To guide the model to already learn locally discriminative satellite descriptors at the fusion bottleneck, we apply the infoNCE loss [20] from contrastive representation learning [11], which can be seen as a generalization of the triplet loss [23] used in image retrieval to the case where multiple negative samples are presented at the same time,
|
| 52 |
+
|
| 53 |
+
$$\mathcal{L}'(ij^{+}) = -\log \frac{\exp(\operatorname{sim}(f(G), g(S)^{ij^{+}})/\tau)}{\sum_{i,j} \exp(\operatorname{sim}(f(G), g(S)^{ij})/\tau)}.$$
|
| 54 |
+
(1)
|
| 55 |
+
|
| 56 |
+
Here $\tau$ is a hyper-parameter introduced by [20]; its role is similar to the margin between positive and negative samples in the triplet loss. $(ij^+)$ is the cell index of the positive satellite descriptor w.r.t. the ground descriptor.
|
| 57 |
+
|
| 58 |
+
We reuse the smoothed one-hot encoding from the output loss to allow multiple soft positives if the true location is near a cell border. We max-pool the $L \times L$ target map to the $N \times N$ resolution and renormalize it to generate 'positiveness' weights $w_{ij}^+$ for each cell $1 \le i, j \le N$ . Our bottleneck loss is simply a weighted version of Equation (1), $\mathcal{L}_{\text{sim}} = \sum_{i,j} w_{ij}^+ \mathcal{L}'(ij)$ .
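A compact way to read the bottleneck loss is as a soft-label cross-entropy over the N × N cells; the following sketch follows that reading, with $\tau$ and the pooling factor as placeholders rather than the paper's settings.

```python
# Weighted infoNCE over the N x N matching scores: 'positiveness' weights come from
# max-pooling the (smoothed) L x L target map down to N x N and renormalizing.
import torch
import torch.nn.functional as F

def bottleneck_loss(M, target_LL, tau=0.1):
    # M: (N, N) cosine similarities; target_LL: (L, L) ground-truth target map
    N, L = M.shape[0], target_LL.shape[0]
    w = F.max_pool2d(target_LL[None, None], kernel_size=L // N).squeeze()   # (N, N)
    w = w / w.sum()                                        # renormalized positiveness weights
    log_p = torch.log_softmax(M.flatten() / tau, dim=0)    # -L'(ij) for every cell
    return -(w.flatten() * log_p).sum()                    # weighted sum of per-cell losses

M = torch.randn(8, 8)
target = torch.zeros(512, 512); target[100, 200] = 1.0     # unsmoothed target for brevity
print(bottleneck_loss(M, target))
```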
|
2210.14986/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2022-09-27T10:27:06.239Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36" etag="1_1Bqdg7xjS01BuR2p8k" version="20.3.3" type="device"><diagram id="wzi7uEOJ2j0V7PFNrb5z" name="Pagina-1">7VtZd6JKEP41OfeJHGj2R/cYl7gvefE0TbMo0Mii6K+/oJi44JiZccncO8kJkeqmaaq+r7qqaJ/Ygh1VPOgaDaJi6wnQavTEFp8AkCU2PiaC1VbA82Ar0D1T3YqYT0HXXONUSKfS0FSxf9AxIMQKTPdQiIjjYBQcyKDnkeVhN41Yh3d1oY5PBF0ErVPp0FQDYyuVgPgpf8GmbuzuzAjytkWBaKZ7JHTS+z0Btrz52TbbcDdWegffgCpZbkWbZ2dLT2zBIyTYfrKjArYS1e7UtlVQ+Uzrx7w97ARfuQAy/W59+oJRp1opDkYDa7JWKY7bDrOAVoh3zyFY8YB51Vwksw5WqaaEeZhMNX/wSU//b65QvGNJPJvNMDspOBgRbLSHk/kxcfPSMAPcdSFKWpcx2GKZEdhW2qxC3/joCy1Td+LPKH547MWCD+3SyUngkdmHOZP+GnGCMrRNK0Fpz7RjxAG6iZfxsUNs6HzMboG9AEdnVcx8GC7mAyY2DrxV3CW9gJdSUKdcYLj0fPmJLIZNZcYeqnj6mU8hncJZ/xj806Txh9SqP2FhMcPAVzPDt9A6c6h0wJ7qXMzQOdgJr65y5iynEoVkkippoPyNc8zFHRjOjTKp1g9iuEMnts0nw7aDXqQYfdm2mmlZBWIRb3Mtq0IsaejDsnstApKwop3YnN2HxBcBkHRJVwWGuw4gAH8ACC4DEFwGINhb4QHcDg9DuBkgIJtR4GqzAnrJQ8R/TvJsKDFy+YFwcYiDvwsUpEdDgb0dFDrYd4nj38UzaJoGUKZnUAVF4IU/xDMwWWvFXfFwPvz6bTxU/1ngJL4mQZA4Ax0nw4VufMDQs1bPD8TJd3IJjPBoDPA3xIDtWiaCQejdJ2DgsaRyWW5BAgor/CluAdCPhoRwO0g0yV2YfwYK34n5jw8M5Qwz/7dyM0AfJmcZARiQMlTO8OKNdL7LFm/BraOB0AcdPjuynJr87ot+44YlPzDwJtz3Z7HhkwrQXpdskl+eIk0LQrxqX2eKv52h3FmlB8NAJ1HqaxjTAdDeJsBWr6dnRZJVDl5p3hfCvW+p3tidISP+b2Po+Fut/uokRUZGvHy9SW4Lmr81ikP2nkfJKI0+1BDno4AbBAeHyw84FxZ8h1CQOUoPMmJBwPOnKxZ3swXrYjnxi2C575LxQO//DRz5/X3yZfcqQDuhpaP47uXHzXZ9vxbyII3HfIZ6VgmlLzmhE8f5K2q5TcrzJ3m1oxdTGUUPIGS9mLqZVztNdbCq4256SrzAIDpxoFX6lOYPrfPZp06Im+p3ioNglWoPhrHfObAYjsxglFz+zKdn472WYpSOvDlZpSdfTpSS+f/YPvHjktBD+HJKGEAv8RYX+p3a28MWDMzF4TyuX8W+WJ+4ffxSrzd+ntRfyKF/pqJ9zNSz1c+f9ABXYDwnHzGelnavmPc4n5V6s/KNUMP9Rc0fhxrh4aiRbleuKUVxRGTdpjKeXfY8tHS6XegcRu4fJojHPuPE9Pxd35tn7V25ku1bHrHdTWqCYwzAAH85Pn4QKsDjgkcZPB8mxWJG4ZzOquLeKnrkbvjOrJQMGgdRxPmPQMCCCrbyH7sVs+5zBZQIx4UTIJyAJCvDALfCiHT+3Xp8gyQfRwb0/DTK3qEgDLRkyflRRTC9666LSpBPmck2xDgToZLtrJSiyqqg0jIlxos5pciKStFI4ShehDSDVCB+MaBZpnrKOcSzofXDBNy09WQgDx0MYwRBso82l6gSlC1DeNYJ0S0c+thD8S1i4z4jYsdthWqkFuCcZVaVaFQ1PHYQLTiWbnjr8gTMuj0S9HoiMpHQrk6hJ8sUo8qzXlQdVvz1fN5qm7XXlyFLc66OGVVzBjXOIdPCRBpOeI0X3mEXvJQYuuTWe3RtytFrDrw2VSmq46jmB7kS5iMFmetILDOL2VgsFOF41puumuOWZJVCueevYUuttUKfjLUGbr+1q2Ds4GW5vTBt29UPtLkHr51UPCH7Hi53QvlCOfiwGnHkBGLmBJfZfxL7fY3Oe5zNS3KRy/3AZ1yBy4x8XATNCgPBPWMB6XwceGs2s7IEpJi6WzazEHMUEHiWYjUJ8jQvy3JS+30Em/mzbA57466nDJSRNJ9b4yajEnruKKJUGBbnszY0iqJQnGo0GMF5QQr1gVnvFdqMOMwN+wpYj9lqrYleg3Z+FuTfECu7HoRS3V0DQ129ax3HeG1EEnFEqiLPwJsqV8tRvUl13GFFoGpuqU3K8uv63aiWxyHKt4vt5XrezI/9xdhvQQ4qjeJInQylgiXLfa300igNrIUU0O3Fghm1B9RFNjPMXzr/Dp0zs7r70jlr68N96KwijCQZ81s6qwAxFOaQRGFeVTHHyYqifOk94vXpzJ6lc90YvDdQDc1GxfJMF6xClYdms9br6GFN6KGWMeaqQmFkj5d4vKCp9quOKyuImj19yr/z8yUSmiW2Q4x8T3pT1EnJy9nF/PTltR+8S2GtV1kuSkaZbncYYNC9tTR9Z7t5oRUHy06j3e6PLU96E93mtDs1xH7BDW1NKeX0patKpbDs1pnVG4p9Q3Fq2xOgzN3efJbLowbmmM6aMfGV6Cz8UXS+Am35o70zDJOVdmVE1DfbrnR+68xf0v4l7V/SZpE2s1ZyT9Ke/8rBrUnL0JqK4ugjTYM1RaAQw8UHHrAKBIrMIu67kdaR+AAXcho/UsSwQ0f9RemtaxTAJE87+uR1wgRsZ8F4Wp9nKnYIehVrjrhmVCOiMrOAEvqwyDfzRmdkKHyxKdYLwF1NuFa+1Of61MKfFEfNemMyYJrlVqnVCxf+dDhdqvLS0BYvISis7JVQjCqspU4EWKPcdzbHL2tvtRGHi++V0TyqNemW2GyCQUsYm5Vaz/a6NVCfSdPuUljmLpL2lI9ZgTP9f2OtcMxa/rT0zWQFyNzP0zY+/fzS76Zt74vVbOlf</diagram></mxfile>
|
2210.14986/main_diagram/main_diagram.pdf
ADDED
|
Binary file (79.9 kB). View file
|
|
|
2210.14986/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,52 @@
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
> User: "Have you seen my phone?"
|
| 4 |
+
>
|
| 5 |
+
> InstructGPT: "Yes, I have seen your phone."
|
| 6 |
+
|
| 7 |
+
InstructGPT's response[^1] is a perfectly fine answer to the question, but a human might answer differently. They might respond "it's in your bag," bypassing the obvious follow-up question ("where is it?"). Giving such a helpful and efficient answer is an example of pragmatic language usage that goes beyond the semantic meaning of utterances. Meaning is not only determined by a combination of words, but also by context, beliefs, and social institutions [@grice:1975; @huang:2017]. Consider another exchange where Esther asks her friend Juan "Can you come to my party on Friday?" and Juan responds "I have to work." We resolve Juan's response into a decline by using the contextual commonsense knowledge that having to work on a Friday night precludes attendance. Both these exchanges contain an *implicature*---utterances that convey something other than their literal meaning[^2]. Implicatures illustrate how context contributes to meaning, distinguishing writing and speaking from communicating [@green:1996]. We cannot fully understand utterances without understanding their implications, nor can a computational model. Indeed, the term "communication" presupposes the speaker's implications are understood by the addressee. Being able to resolve seemingly completely novel implicatures and---more broadly---engage in pragmatic understanding constitutes an essential and ubiquitous aspect of our everyday usage of language.
|
| 8 |
+
|
| 9 |
+
<figure id="fig:opener" data-latex-placement="ht">
|
| 10 |
+
<img src="images/drawing_implicature_paper.png" />
|
| 11 |
+
<figcaption>A schematic depiction of the protocol we propose to evaluate whether language models can interpret language in context. Each example in the test set gets wrapped in templates and transformed into an <em>incoherent</em> example by swapping “yes” and “no”. The model is said to understand the implicature if it assigns a higher likelihood to the coherent text than the incoherent text.</figcaption>
|
| 12 |
+
</figure>
|
| 13 |
+
|
| 14 |
+
Large language models (LLMs) have demonstrated remarkable ability on a variety of downstream tasks such as planning [@huang:2022], commonsense reasoning [@kojima:2022], information retrieval [@lewis2020retrieval; @kim2022ask] and code completion [@austin2021program; @biderman2022fooling], to name just a few. When finetuned with human feedback, LLMs obtain higher ratings on desiderata like helpfulness [@ouyang2022InstructGPT; @bai2022training], and are proposed as conversational agents [@thoppilan:et:al:2022]. Despite the widespread use and deployment of LLMs as conversational agents, there has been limited evaluation of their ability to navigate contextual commonsense knowledge.
|
| 15 |
+
|
| 16 |
+
This raises an important question: *to what extent do large language models understand conversational implicature?* To answer this question we use a publicly available dataset of conversational implicatures and propose an evaluation protocol on top of it (Figure [1](#fig:opener){reference-type="ref" reference="fig:opener"}). We evaluate a range of state-of-the-art models that can be categorised into four distinct groups: base LLMs (like OPT [@zhang:etal:2022]), instructable LLMs finetuned on downstream tasks (like Flan-T5 [@flan]), LLMs finetuned on conversational data (like BlenderBot [@ng-etal-2019-facebook]), and instructable LLMs finetuned with an unknown method (i.e. the latest versions of OpenAI's InstructGPT-3 series[^3]). We evaluate zero-shot and also test whether performance improves by presenting in-context examples (few-shot evaluation). Our results suggest that implicature resolution is a very challenging task for LLMs. Most models obtain around 60% accuracy on the test set, whereas humans obtain 86% and random performance is 50%. InstructGPT-3 consistently outperforms other models across almost all model sizes considered, but even here zero-shot evaluation leaves a gap of 14% with the average human. In-context prompting can shrink this gap to 6% for the best of OpenAI's models. However, it does not help much for other models; at 30-shot they still all perform worse than InstructGPT-3 does at zero-shot. We do a comprehensive error analysis by manually grouping the test examples into categories and uncover that the performance increase for the largest models seems driven by the simplest examples in the dataset, which require no context to be resolved. For these examples the conventional meaning of the words entails a proposition, e.g. "some people came to the party" implying "not all people came". When isolating the best model's performance on implicatures that do require commonsense knowledge to be resolved (like the one in Figure [1](#fig:opener){reference-type="ref" reference="fig:opener"}), the gap between zero-shot and the human average becomes 24%, and the gap between few-shot and the human average becomes 9%. Furthermore, scaling analysis shows that most of the model classes we evaluate do not exhibit increased performance when scaled up. Based on this result, we hypothesise it is unlikely that further scaling alone will lead to significant improvements.
|
| 17 |
+
|
| 18 |
+
The main contributions of this work are as follows: (i) we motivate implicature understanding as a crucial aspect of communication that is currently missing from evaluations of LLMs, (ii) we design an implicature resolution task and propose a comprehensive evaluation protocol on which we evaluate both humans and LLMs, finding that it poses a significant challenge for state-of-the-art LLMs, and (iii) we perform a comprehensive error analysis and identify opportunities for future work.
|
| 19 |
+
|
| 20 |
+
# Method
|
| 21 |
+
|
| 22 |
+
In this section we outline the full evaluation protocol we use to answer the research question "To what extent do large language models understand conversational implicature?". We focus on simple binary implicatures that require inferring "yes" or "no" (like the one in Figure [1](#fig:opener){reference-type="ref" reference="fig:opener"}). As a proxy for "understanding", we say a model *understands* an utterance if it assigns higher likelihood to a coherent utterance than a similar but incoherent one, detailed below.
|
| 23 |
+
|
| 24 |
+
**Zero-shot evaluation**. Consider the example from the introduction packed into a single utterance:
|
| 25 |
+
|
| 26 |
+
::: displayquote
|
| 27 |
+
Esther asked "Can you come to my party on Friday?" and Juan responded "I have to work", which means no.
|
| 28 |
+
:::
|
| 29 |
+
|
| 30 |
+
We can transform this example to be *incoherent* (in the sense that it will become pragmatically inconsistent with expected use) by replacing the word "no" with "yes":
|
| 31 |
+
|
| 32 |
+
::: displayquote
|
| 33 |
+
Esther asked "Can you come to my party on Friday?" and Juan responded "I have to work", which means yes.
|
| 34 |
+
:::
|
| 35 |
+
|
| 36 |
+
If the model understands the implicature, it should assign higher likelihood to the first of the two sentences above, namely the most coherent one. Importantly, both sentences have exactly the same words except for the binary implicature "yes" or "no", making the assigned likelihood scores directly comparable. Formally, let the coherent prompt be $\mathbf{x}$ and the augmented, incoherent prompt be $\mathbf{\hat{x}}$. A model outputs a likelihood $p$ parameterized by weights $\theta$. We say a model pragmatically *understands* an example $\mathbf{x}$ when it assigns $p_{\theta}\left(\mathbf{x}\right) > p_{\theta}\left(\mathbf{\hat{x}}\right)$. This is equivalent to evaluating whether the model assigns a higher likelihood to the correct continuation of the two options. Note that this is a more lenient evaluation protocol than is sometimes used for language models, where models are evaluated on their ability to generate the correct continuation, in this case "no". However, "no" is not the only coherent continuation here, and marginalising over all possible correct continuations is intractable. The more lenient evaluation does capture implicature understanding, because the choice of "no" versus "yes" is only determined by the resolution of the implicature.
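For concreteness, a minimal sketch of this comparison with a causal language model is given below; the model name, template, and scoring details are placeholders and not the exact setup used in the evaluation.

```python
# Compare the total log-likelihood a causal LM assigns to the coherent vs. the
# incoherent wrapping of an implicature example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")                 # placeholder model
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def log_likelihood(text):
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss                  # mean NLL per predicted token
    return -loss.item() * (ids.shape[1] - 1)                # total log-likelihood

template = 'Esther asked "{q}" and Juan responded "{r}", which means {m}.'
q, r = "Can you come to my party on Friday?", "I have to work"
coherent, incoherent = template.format(q=q, r=r, m="no"), template.format(q=q, r=r, m="yes")
print(log_likelihood(coherent) > log_likelihood(incoherent))  # True if the example is "understood"
```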
|
| 37 |
+
|
| 38 |
+
We use a dataset of conversational implicatures curated by [@George:Mamidi:2020]. This dataset contains conversational implicatures that, like in Figure [1](#fig:opener){reference-type="ref" reference="fig:opener"}, are presented in utterance-response-implicature tuples. Of these data, 718 are binary implicatures that we can convert into an incoherent sentence. We randomly sample 600 examples for the test set. We keep the remaining 118 examples as a development set to improve language model implicature understanding after pretraining through in-context prompting or finetuning.
|
| 39 |
+
|
| 40 |
+
**Few-shot in-context evaluation**. We add $k$ examples of the task to the prompt, e.g. with $k=2$:
|
| 41 |
+
|
| 42 |
+
::: displayquote
|
| 43 |
+
The following examples are coherent sentences:\
|
| 44 |
+
Esther asked "Have you found him yet?" and Juan responded "They're still looking", which means no.\
|
| 45 |
+
Esther asked "Are you having fun?" and Juan responded "Is the pope Catholic?", which means yes.\
|
| 46 |
+
Finish the following sentence:\
|
| 47 |
+
Esther asked "Can you come to my party on Friday?" and Juan responded "I have to work", which means no.
|
| 48 |
+
:::
|
| 49 |
+
|
| 50 |
+
We evaluate the models' $k$-shot capabilities for $k \in \{1, 5, 10, 15, 30\}$ by randomly sampling $k$ examples from the development set for each test example. We opt for a random sampling approach in place of the predominant approach in prior work which leverages the same ordered set of $k$ prompts for each test example. This change in protocol allows us to control for two sources of randomness. Firstly, examples have different levels of informativeness. Secondly, recent work has found that the order in which these examples are presented matters [@Lu:et:al:2022]. Ideally, to marginalise over these random factors, we would evaluate each test example with all permutations of $k$ examples from the development set. This requires $\frac{118!}{(118 - k)!}$ evaluations for each test example, which is intractable. Instead, we estimate performance per test example by randomly sampling from the development set. In this way we control for some of the variance in performance, but avoid extra evaluations.
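A sketch of this per-example sampling is shown below; the field names of the example dictionaries are illustrative.

```python
# Build a k-shot prompt by drawing a fresh random subset of development examples for
# every test example (rather than a fixed, ordered set of prompts).
import random

def build_prompt(test_example, dev_set, k, rng=random):
    shots = rng.sample(dev_set, k)
    lines = ["The following examples are coherent sentences:"]
    for ex in shots:
        lines.append(f'Esther asked "{ex["utterance"]}" and Juan responded '
                     f'"{ex["response"]}", which means {ex["implicature"]}.')
    lines.append("Finish the following sentence:")
    lines.append(f'Esther asked "{test_example["utterance"]}" and Juan responded '
                 f'"{test_example["response"]}", which means')
    return "\n".join(lines)
```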
|
| 51 |
+
|
| 52 |
+
**Controlling for prompt sensitivity**. It has been shown language models are sensitive to the wording of the prompt [@efrat:levy:2020; @tan2021msp; @reynolds2021prompt; @webson2021prompt]. To control for this factor of randomness we manually curate six different template prompts and measure performance across these different wordings. One of the templates has already been presented in the examples in this section, namely "Esther asked \<*utterance*\> and Juan responded \<*response*\>, which means \<*implicature*\>". Another prompt template is: "Question: \<*utterance*\>, response: \<*response*\>, meaning: \<*implicature*\>". The former we call *natural* prompts and the latter *structured* prompts. Each group has three templates that only differ slightly in wording. This grouping allows us to look at the variance due to slight changes in wording as well as performance difference due to a completely different way of presenting the example. The full list of prompts can be found in Table [9](#tab:prompt-templates){reference-type="ref" reference="tab:prompt-templates"}. As [@perez:kiela:cho:2021] point out, for the few-shot evaluation to be truly few-shot, we formulate these prompt templates before any evaluation is done and never use more than $k$ examples from the development set for a test example.
|
2210.16541/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="Electron" modified="2022-06-16T09:43:22.444Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/19.0.3 Chrome/102.0.5005.63 Electron/19.0.3 Safari/537.36" etag="Qe2GTbQs3NAaUbDlrA4c" version="19.0.3" type="device"><diagram id="5lVjMyii6_Txz9HAa2Yf" name="第 1 页">7V1rc+O2Ff01O9N8WA8B8KWPa+86aaaZpnGnTT5lKImy1ciiK8nZdX99qQdtCYAsyOLFPYLZzHRtSqLtg4uLcx+854O6uv/2/ax4uPupGpaTDzIafvugPn+QUsYir/9ZXnlaXxFxFq+v3M7Gw821lws34/+Vm4vR5urjeFjOd964qKrJYvywe3FQTaflYLFzrZjNqq+7bxtVk92f+lDcbn5i9HLhZlBMSuNt/x4PF3frq4mSL9d/KMe3d5ufnKeb29wXzXs3d5jfFcPq69aPUl8+qKtZVS3WX91/uyonS/AaWNY3ut7z6vPvNSunC5cP3N3/47dM3Tz9Un4Vv4/SL/Mf5dNHla1v82cxedz8wZvfdvHUIDCrHqfDcnmX6IO6/Ho3XpQ3D8Vg+erXetHra3eL+0n9nai/HFXTxXVxP54s1/v7YlbcV9Ph5vpmcUWv/n6+mFV/lFfVpJqtfooSn5b/Ld85nky2rl9ny//q67ezYjiu/9bmtWk1re92ufn9y9mi/LYXGfGMd22oZXVfLmZP9Vs2H4jV+hMbE80aS/j6st6ipzZvuttabKE27yw2Rnb7fO+Xhai/2KyFfV0uHwc/Tvt/HT59/nPx66e7n3+47suPwrIs6WSxAXJnfdL/PlbNCx/nK4g/1W+Q6uHbCp7m9fqr2+W/fym+++v04XF5k6tqWq/D42Axrqb1t/0lJtfjyaKcjae39fuL++XqTvvzh/XnJ+v7XK9/h/XtXjEXcdhchsX87vm9O7ZzVd2PB/Wdb4rpvP7npxvNiPJnI2q2pbhINPPZmMiurW0uVvXvNF48bax6j8lYDGuvFTVWs7GiNJWGFak8M42IzIbUMVsbZa1aWIgsy3ZWIrFs59yym5NcEq1E3LKTbX8l2rD/XdTjLKrvrAMv48jmRhMi4JN3CHx9fvEDn75D4Bvnzod6TunxWwAsVbtnZGZ65sQGWE51RPYOA1bT9odydT4tysOI9YvBH7crjP/+uJiMp+UzkrM//v7COC6ipQmuydpn2RIBiXfNUUSmF8ht6EZkJDZq2QtoocGwKPPRwBZMpIO87I/agTXRYBUmrNZdToaqLTY4e1SViWrqFVUZIqqJiarVwZKhelQMci6oZty26hBPnM+xJVK4c6vtsEG32qTMh7HNanPZV2naEq4Z2sHVdlCAAavl5FJeYW09hQsBq+Xo8utkjwqmzgZWy9nllxEEFXIpiXZ2SeqYy4vZKgXmZCV10MUDK7c3aFA87/jAgJWbackgwi4DVu5oVgYVd2npQtkzk7Gejy7qsIsjWaAiE1a/Jxd11MWCqqVFw2t0IKmDLhZUYxNVv+cWdczFgmpqour32Aoq5NLThfznlgoi5NIzBbaDy6szUEGEXAas3CdX07gaGKyWo8svrNQhFw+s3GeXCirk0tOFAGdXEKUuPVPA72SDCLoMWLljWZenFc7HG2gJmNjSP+jZGYQYfsU9S1umV6N1CL/ODtVEmKh6jQ5i6qCLBVXFjWqIvYVJYqLqlQ3EDiHX+RxbegKG/9yKg4y92A+uuO3HliBgtZ1cfr1BECGXASv70RVkd6Ht7PKagImDCrn0BAzA2RVEm6GeKeA3W+qgiyevZclye4U1CSLqMmC1ZLm9UoIkiLBLh9VGYL1SgsQh7jo/WNlTL0kQ4ZYBq4XAeg23Elu4tTPzZbQG8GWki/ns+9a0l8MjYkS+Z0TMZY1i87P7s+bql+mgGpYzkjkwO4/0/6ucDYtp0VaWfXe6j+UIzXvmMqdtMD7ryKWmi3ULqnJ4W95svq1mi7vqtpoWky8vVy9399PLe/5WVQ8bCP9TLhZPm6kHxeOi2gW4/DZe/Lr19W8rjp1svvv8bXPn1TfNUJwa9NnTr9vfbH1q+e3Lx1bfNZ9znQG1xmH5x++srWW559XjbGP7e+ZNNKPBitltuXgN/dhuLbNyUizGf+7+Ju2PxXA46OvbjB/mDoFUMX9YDzwbjb8tTcP0l72eUtfX7eyjNI6bsSLNvIue6TCFuZPIwMwcjvdjIWQ5inIjYW0iazvhY7LoKfIArRfy1GvceNNx4djiSgitA3s6S2hjS9OFZ2iPyFWNJuW3T8vxkKvTa7j58vNgUszn48Eu3ruH717otIPMBO5A1N5ccz6KNj/h52q8oocN4RG73uT5sZnmFuvDdPOpF8zNG+kP5us3Wh+2xo1Wi/f8Z5+yng65sfe3nkrfQG9dT+NG5Ot5RCP6+1nPJBYXTYv+qUtquxf1qrpMeQp/VZNDztJ1SdND7pt8PR1ilPe3nm/2uvp6eve6LpOt3t96nuJ19SXl8Loucx7CX1XZDJN/dpbxRaR6L/9L37bCSh53X/LV7nyyZbWV6pGs9qH7kq9257Etq53E0YVS0fP/4nYW+8BtydfaIR0U/lr39OfOVHYR9V4WpUmaHbvYIpLH3Zh6uV2eFd5a2s1Q/L3reuIkbHkGpqEtYJqKiy3P3MvfZhjPuUK325KbhUtdqDOL7QXUyrot2UUGZhcOTMCvXbzUhC96udiuC3+MatKUHSgNr777uZyNa2jK2aHTabuauwegGMoqs6Z59MV83nhwxWkKZYg2mtKWIFH/uy/TxbLtV0arHpO1+FAnNnREV4tNbcjavtR4FIKHcA72L51gIoPvflntzJVO1U/FYjb+ttqtD7NyXqPYvFJMh/X/X82q+fzjw1KAbWlRG9PausGnxaL+0OrrzsxeM7MsMbyQYWa5tcjbgpnZ+3dS8wDsuqe2Fvct3VNx85QHePtU7PI0Sov9U1f1/6KWtpLZPyWanLGHBirrYjYbt9tLre6l3HEvrQcD8O0lh+6Kt+8lmh2Tcu8YC0Tdjjl1xzQPFODsGPviK87Ff1nw37ZeCXPxX2tKZfOXLo+9gPlLGfVsUpA+m7Tj9KgpAgCihDLLDcisD1mSxTgx6xMiu44mCsbRpAmco7GHWL2OYzCuvliXBNmOmdRhPMjZPCIkbTNsfBL2tAtxKTZTCreZ7KsvOFc/VMbuvPrpPn9xXJXt02xWPG294WFZ5Zpv3Vlvz0x2nZDayEy/WNL6ju1WyFKH+UNnk5pUyuTAVH5bFP/6rXj8MZ8MZXx9W3697F/eP0twdjv3lI1qyeq//lDiwTK6J7/NmmYLNgByNYf35bdJS0o03rnpMmFL6vQc+tWWM94e3P9+GQ2q6bRGr+g3d4hexUUkWnFaRtFFHG31yMQGSrY8jsx6FyJ76QglA81FM4qi1ff1J
eTunDq2M6r5ed4atJkeQsZetaYgF4sLJbe23Bv7tJPdFOyh25KvOdODymex5gf85RuX/MBdqVe8Jy0rflzzmujtaV77sHziJL77/UN2+fC7+JB9Xl/oes9e5z0iynef1EndVHXbKMvsMZLTm2APGMmiM5JjjUTtkkDhOjuJykqS3MaNO1eCZSWC3Uo6XwJnJUJiHThJcwDSGUl/y0iSL6IzFDdDSdEMxVY0JjQU2RmKk6HICM1QbFWqzqOwG4pBUHrsBMWWrexcCpylSMVuKbYMKUXAIzsq63zwxGgHjy2lShHvdEbibiRgCbakkcMmP3Nkx06OMRR9ZhK/ofjKsanOmzgbCVpQTF/TWXRGcqSRxGgBMX1Np79lJN2R42woaAS26SbyZShdPOxoKHAkljzF1h07pyZNYvakSY88vdYx2FOtRLEnYXvkqbWOnbTiTywd2V61j5IeeX6toydtWEriqJLVHFDtSxBG5Em2Ll1/qpXIjPnkSSPyDFuXim2Fo3D3O6ZWrciusoNlJdz9jqm0+RNtuU6UvRyNRnJglb0cpv00IRJrVYk/6md9VNrh+SSo+UpxIg3ArPOV8hYQsz9eDDTI7RwfL7aa4cbBuD5rbpqLp0fLO/V1L2tvBz/2tPSv/ZZBDFaKM2E4Ub+Dlbo53O3vo0ZKFXwjuSi+ns2om0T420l2NONuJx0x6saOoXBkH5m0ble2neQQFYENH0kS5pMnYWXvePulnZNHHrWB2PaLLdcEvl96wuMMZitqnVbwOp+Q60Pukt2RE28Vmeyp427c3jALq4N0mVizNZzJeRKTZXH37gNNuzVTtkHkoqly7OyCFgpfdvEACyotyVP9UBbDDy9SU8Enpy0ms20ar1jlXoNplr3ZQjZzsU7vkkTm0tBynnRBmwhvHzs+N5zLFCVqPySbXO8BRyQ94iJOf1h7ryf6ZzGedJ7owD7pvW4ycsdibOzNsydinSzcJsJcnshFdZvaE8UaSdzjiXKfuAhSCTDDssSn5X8nWZbaMxOUy7KE7YxryZdfzsbLyKnz5m+yiM1tFBqvdJH0pvZFPU2mOmu0e7dQUc1M8LabGOySR4dB8dUd8iYjlAfdElVPiD1HSueVtmSUv44Xd+PpCp/F3cfljyjul6sy7c8f1p/oHJa7rTQ8Id/Zmmbflkxin+7KQfWO2l2JVGtns/ur3OMGcymBnuiwhkWZj6wOKx3kZX90mhHuacLhcljWQlj7HqsYLKXhNx5r3jmsk2ylYVi7e9PisKzZFTKH5UAlyB2WivT8t8VhNXG9lw3mEOmd6q+SMh/GNn+Vy75KTyRYexpeuPwVYdi3l2Cpzl+dZCt2f2URKMyJ4kFrQd0hHMRbvz0O8ahnFFKx20gvLH3htpVoZIEIBry7ZFpP9JLXUS/KezYv2c+TOGnrAZBUyzaKzHEsAh22Duo3p2J7fZXmiUOI3y62zwc5H7b0p3ueXl73hMPp3i62cWL6Z8/YtvBgdW4c9dGoGOx+xvTOBjO4WjXhjMbljOTkb2P9Mq3eankuvmeJ05Om2k6guuUQD7yD5qcs2k2Ei2YAydHdTlly4E7k6kwOCtnvcEWVyndFs1pa3wP3JV9tB072Dlc7ztI2VLiM1T5wX+rVVg57e2tlN+HI3mU9Mf6R52AZz0OCmgFoqbjY2q3Nq8dahoyaQSHPg9U873x1+iC1vUmWvwy/a9jUYJNtCT6Xchqze54w9GxmZtSjYlvvRQuZE/vDAA79hSdGPf1ikA+VLeqp/9Y4aemhAdGLtSNX+It67Ck0emhH+aC01+O1RMiBFOBx0xpEo87GVpmn73RwTjGdhqx2PIjIH7LW80JEDqXqs8gw6dCqiDl5lzmwMl/V7nYl3pzlIsmgJUg5twJUigYUff7Ykw3qomD80BKkj1sBKkYDioAWtQJUDgZUk/ZFA0oXMuEHimAsWitAobn/XGICpQta8ANFwILbAMpgXq7DtcmAIojEWgFK33rsQNGzf+dkQMvMix1aUPZvMC92oAjYfytA6cyLHShQLm8wL3agQLm8wby4gbKKMgIAZTAvdqBAubzOvJwVg8mAIuDyPIRC52r80NLnwJm4Gj+0oPGCztX4gSKIF1oBKkcDCpT961yNHyhQ9q9zNX6gQNm/ztX4gQJl/wZXcxUCoquqg9J/g3rxIwXK/w0mxY+UhwDAixiMSb74sQWlqQap4EcKlKcarIIfKVCiatAKfqRAmarOK5wFweiQAqWqOq8AQMpDMy7PacmPrYc5Pr6wzeGwBeXDOhMBQIqAD7eCFJwvFKDdLToTAUAKNA4wmAh3w5SwzuNGQErfffxIgcYBeoYDACnQOMBgYPxIgcYBBp/iRwo0Z23wKXakXASPIPgUP1KgHN3gU/xIgXJ0g09xt0EJCcrRDT7FjxQoRzf4FD9SoBzd4FP8SKFydJ1P8SMFytENPsWPFChHN/gUP1KgHN3gU+xIuWgyIZx9MXerklCgHF0/+wCQAuXo+tkHgBQoR9fPPgCk6Af2OE/Ob/e0BMAWlNXrpyUAUvSTyd/rBB6h6CfL+HocCe0hS6HoJ5MzYcvewi48DOnz5RPQpo2ImCA6wXhEkd8nxKDxDF4dKwaNZ/DqWDFoPINXx4pBaw54GaoYNDrBy3rGoDUHvEx6fCY1B/6IN/YQOfjpzAbkFcFEDnhMJKGPHDxhi8ddEoLIwVnnLIkul/oGobKdRn0xvDw3ALb0U3l8+Vs4RuVBUIEJW34OlniY++/HJ+BVdBLQmAlu6LpIQGMmuKnrIkHt04Kr+SUEEVArSMFVQlLQPi24wesiBa1rwA1KFSlBrEGy+/iRAq1r4LGEFLSugccSUtS6BhxLSEE5OiBLAOXogCwBlKMDsoRz4ejsSGWgHB1OHUNkoBwdTnBFZKAcHW7cuchQOTravHORgXJ0uIHngkJxl+LsA0AKlKPDjTwXFAq6JEixT3FFFcTFmwyMqoiLN236XCRx+ScXomri4uU8UUVx8XKeqKq4eDlPVFlcvJwnhS5uG0jh5TxRZW7xcp6oOrd4OU9UoVu8nCeq0i1ezhNV6hYv54mqdYuX86QQu23Fo8PlPCm0aynOPgCkQDk6Xs4TVYsWL+cJK0YLl/OEVaOFy3miytHi5TxR9WjhdG0kqh4tnFaS7PRonZEC5ehwGscyAuXocLrZElWHF04JVKLq8MIplUlUHV44pTIJq8OLxxJAOTpctV1GoBwdrtouKVR120AKrtouYTVy0artElUjF67aLs9FI5e93idRNXLhqu0SVSMXrtouYTVy0art8lw0ctmrWPJcNHIBkALl6HDVdnkuGrkASIFydLhqu2yAgUeKPTuMqpELV22XqBq5cNV2iaqRC1dtl6gauXjVdliNXLg8OqpGLl61HVUjF6/ajqqRi1dtR9XIxau2o2rk4lXbUTVy8artFBq5JEjx7z5Qjq6zBACbAuXoenYYAClQjq5nhwGQQuXocL0uFNqyFDbFj1QMytHxel1gdV/hel1QdV/xel3ORfeVv9qOqvuK1+uCqvuK1+uCqvuK1+uCqvuK
1+tCoftKsfsAkALl6Hi9LhSarBRnHwBSoBwdr9eFQi+VBCn22gyF+inJ7uNHCpWjw9X7KJRJKZDirzig6ozi9bqg6ozi9bqg6ozi9bqg6ozi9bqg6ozi9bqg6ozi9bqg6ozi9bqg6ozi9bqg6ozi9bqg6ozi9bqg6ozi9bqg6ozi9bqg6owaNsVem0HVGdWRitlznrA6o3D9U7A6o3D9U6g6o3j9U6g6o3j9U6g6o3j9U6g6o3j9U6g6o3j9U6g6o3j9U6g6o3j9U6g6o3j9U6g6o3j9U6g6o3j9U6g6o3j9U6g6o3j9U6g6o3j9U6g6o3j9U6g6o3j9U7A6o3A1ZFSdUbz+KVSdUbz+KVidUbizD1VnFK9/ClVnFK9/ClVnFK9/ClVnFK9/ClVnFK9/ClVnFK9/ClVnFK9/ClVnFK9/ClVnFK5/SqHqjML1TylYnVG0/imFqjMK1z+lUHVG4fqnFKrOKFz/lELVGYXrn1KoOqNw/VMKVWcUrn9KoeqMwvVPqehMODo/Uqg6o3D9UwpVZxSuf0qh6ozC9U8pVJ1RuP4pdS46owBIgXJ0uP4phaozCtc/pVB1RuH6pxSqzihc/5RC1RmF6wpS56IzCoDUmXB0/qxLAww8UuxxH6rOKF4uAVVnFK/icC46owBIgXJ0vIoDqs4oXsUBVWcUr+KAqjOKV3FA1Rk1PDo/UqAcHS/niaozipdHR9UZxYv7UHVG8eI+WJ1RuLgPVmcULu5D1RnFi/tQdUbx4j5UnVG8uC8G5eh4cd/Z6IzyIwXK0fHiPlSdUby4D1VnFC/ug9UZhYv7Op1RZ6RAOTpe3HcuOqMAuw+Uo+PFfeeiMwqAFChHx4v7zkVnlN+jn4vOKABSoBwdL+47F51R/mgGVWcUL+6j0BkdjSeTq2pSzVYfV8OizEeD+vp8Mav+KLdeSQd52R+1hS1ctzGFMikPtvqMK/7pFRRapkzYpmjYUqifatiORiM5sGI7TPtp7b4DneCqKPRSMfwt/3MwFAqrGD4BAFuCeAbjLAPAliACwsAWwN8SxEwg2LJngyiUYjH8LQC2wcRlcLNMFYUaLQYHA7Bb+rjMU+wANy9VUSjeMmELN2GVQiOXySfAzWSlUNXFwJY/V0Ohw8uEre4T+O02mLhMxxbAboOJy3QOBoBtMHGZwW/5fUK4cRk/tsHEZUauhh/bYOpleB3BFCrHINiy2y2FLjIGBwPANpy4DK5PmUJ7mQdbvM5mCrVmJmzheqEp9J15sMXrnqZQhAbBlv8sCyYuw+vQplCdBjnL+LENJi7D6wKnULbG4LcA2IYTl8GpflCoZ2OcZQDYBhOX4alAUCh0g2DL3iNKoemNgS1/PxiFCjiGvwXANti4DMAn0MdlTM/pAGDroY8xH5R2bPt5EicRUS2SPQ8WU6iZM2GLlr+NKfTPebDVez/4sfXAEzxhG6HlE2IPPIEJW/YcY+yDJzBhy+8TguEJBrbcsUPigSdcR70o7/nnYPzYBsMTDA7Gj60MBVuDg/Fj6yHHmJT5MLZhm8v+8qFXmnxCzB3zJpGHHKMfbPUcIwC2HnKMnrDVn4Hix9ZDjpEJW+64LIk89H7wYMvek59EwfR+GNhyx2WJCKb3Q8eWPeZNRDC9Hwa27D5B0McOTNiy5xgTEU7vh44tOwcT4fR+6GcZe8wrgun9MJ4v48c2mN4P4yzjxzaY3g/jeQd+bMPpyYfL34pg4jLjeQd2bGUwcZnxzCk/th7isveaG2+gPH9s8XLj0kNc9m7tNph6GaDdBlMv0+024T/LwqmXRXDYBlsvA8A2nHoZ2jzGRHqIy3iw5a/zKg9xGRO27DxBBROX6djy+1vlIS57pxpbiQqmXgansZWoYOplcPNvExVsvYwfWwqlchB/y84TKLTNQfwtO7YUaugg/pYf22D6wQx/y49tMPzW0CDgxzYYfqvXeQGwDYbf6jyBP+al0IHH4AkA2AbTD6bzBABsg+kH03kCALbBxGU6TwDANpi4TOcJ/Nh60Jp/t/234WjN4z0DFY7WPN4zUOFozev1MgCfEExcZmDL7xOCicsMbPl9QjBxmY4t/3OR4WjNG9iy58HC0Zo3+sH4z7Jg4jIDW/azzIPWvCdsjVmX7D4hHK15HVt+nxCO1ryBLb9PCCYuM7Bl57fhaM3r2PL3LIWjNW9gy3+WBROX6dj6zI3f3f/jt0zdPP1SfhW/j9Iv8x/l00cPJZ3r66s0T2zQavIOBo4WtN1LOj4pmBVaD1mwPL287gkbtFpH/knQ5s2M5BeHcNHYDdtMbIo5S8Nifrf8YvXqNtTTalo+o/zvzZ8sLpK2OhSaP+Y5gxsZ6Da1gG10m2uto5tSTGTkQldmu+jK1LRdz+gS5BS40H0Wu2tYbu4PXavb9RCcOQ90P8ntpk2b0AbZrGd6Ba8HmofQzA9X6OURGLL0wYMfqiBUM98bBlr62KFfDPKhskErVRwnw5agTVWGBS2QUlG7jrZ5bo7NZi3AppPFEp+q/pO2EU7/+1g1L3ycj/9XX/1Uv0Gqh28rcJrX669ul//+Uk6Kxbiarv66xd14ukJqcfexhigt7pfrM+3PH9afmKw/eb3+qesbGEs8qx6nw2fmcWDBd1lKfdvr4n48WeJ+Vd2PB/Wdb4rpvP7np5vNG25Wf9Nnke/hMHupjnaxqn+n8WL5k1qymqxRN2msJpaG1cimQ33bbJpt0r7Z7Leb+UMxPclu/lnMbsul6c1eDGjXeBpjWf+ozlheNxZLLlhmXo3FFuF5MRbZGcuhLKHUT3p2a7FFrF6sRXXWcohyJzrl5raW3BYoElrLqAZYRpfFbWcrh2KIzJ9nqb+dVcu1e37t+xrbu5+qYbl8x/8B</diagram></mxfile>
|
2210.16541/main_diagram/main_diagram.pdf
ADDED
|
Binary file (57 kB). View file
|
|
|
2210.16541/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,172 @@
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Relation Extraction (RE) aims to detect the semantic relations between a pair of target entities in a given text, which has long been a fundamental task in natural language processing (NLP). Most RE studies are under the assumption that entity pairs are within a sentence (i.e., sentence-level RE) [@zeng-etal-2014-relation; @dos-santos-etal-2015-classifying; @cai-etal-2016-bidirectional; @zhou-etal-2016-attention; @zhang-etal-2018-graph; @FeiZRJ21] or a document (i.e., document-level RE) [@christopoulou-etal-2019-connecting; @nan-etal-2020-reasoning; @zeng-etal-2020-double; @li-etal-2021-mrn; @FeiDiaREIJCAI22; @DBLP:conf/ijcai/ZhangCXDTCHSC21]. Another line of research considers cross-text RE, where entity pairs are separated into different text units (i.e., cross-sentence RE or N-ary RE) [@PengPQTY17].
|
| 4 |
+
|
| 5 |
+
<figure id="fig:example" data-latex-placement="ht">
|
| 6 |
+
<embed src="data_example_2.drawio.pdf" style="width:100.0%;height:32.0%" />
|
| 7 |
+
<figcaption> An example to show the setting of cross-document RE. In this document bag, there are three text paths to imply the <em>allegiance</em> relation between the head entity <em>Peter Kappesser</em> and tail entity <em>U.S.</em>. Each text path has two documents, where one contains the head entity and the other one contains the tail entity. In each text path, the head and tail entities are bridged by another entity appearing in both documents (e.g., <em>Civil War</em>). </figcaption>
|
| 8 |
+
</figure>
|
| 9 |
+
|
| 10 |
+
The latest RE research has moved to cross-document RE (CodRE), i.e., the target entities are located in different documents [@yao-etal-2021-codred]. As exemplified in Fig. [1](#fig:example){reference-type="ref" reference="fig:example"}, a CodRE model needs to first retrieve the relevant documents and then recognize the key **text paths** in these documents for relation reasoning. In @yao-etal-2021-codred, the task is formalized based on the idea of distant supervision [@mintz-etal-2009-distant], i.e., the text paths in a bag can facilitate the relation reasoning, and thus their model performs bag-level prediction over all the text paths. Unfortunately, their method may suffer from at least two problems, which inevitably hinder accurate relation inference.
|
| 11 |
+
|
| 12 |
+
First, the inputs of their method are not tailor-made for cross-document RE. For instance, they extract text snippets surrounding the two target entities in each document as the input of a bag, which brings in much noisy and irrelevant context information. Moreover, they ignore important bridge entities in the text paths of the bag, leading to the loss of instructive and salient information for cross-document RE. As can be seen in Fig. [1](#fig:example){reference-type="ref" reference="fig:example"}, the sentences containing bridge entities are necessary for reasoning about the relations between the target entities, and missing them seriously affects the reasoning process.
|
| 13 |
+
|
| 14 |
+
Second, their method does not make full use of the connections between text paths. For example, the pipeline model proposed by @yao-etal-2021-codred simply leverages the information of each text path in an isolated way, lacking deep consideration of the global connections of all text paths. In contrast, although their end-to-end model [@yao-etal-2021-codred] uses the context of all the text paths, the process of synthesizing the context is coarse-grained. The connections across multiple text paths are actually beneficial for cross-document RE. As shown in Fig. [1](#fig:example){reference-type="ref" reference="fig:example"}, the entity "*Medal of Honor*" provides an additional link between different text paths, which helps to infer the "*allegiance*" relation between "*Peter Kappesser*" and "*U.S.*".
|
| 15 |
+
|
| 16 |
+
Therefore, in this paper, we focus on addressing the above problems and improving the performance of cross-document RE by presenting a novel **E**ntity-based **C**ross-path **R**elation **I**nference **M**ethod (**ECRIM**). First, we propose an **entity-based document-context filter** to elaborately construct the input for our cross-document RE model, which includes two steps: **1)** We filter out a number of sentences based on their scores with regard to bridge entities. Three heuristic conditions are used to compute the importance scores of bridge entities, and these scores are then assigned to the sentences for filtering. **2)** After filtering out the sentences with lower scores, we use a semantic-based sentence filter to reorder the remaining sentences, making them into a relatively coherent document, inspired by the sentence-ordering methods used in multi-document summarization [@DBLP:journals/corr/abs-1106-1820; @DBLP:journals/corr/abs-1909-10393].
|
| 17 |
+
|
| 18 |
+
After input construction, we propose a novel cross-document RE model that is equipped with a **cross-path entity relation attention** module to capture the connections of text paths within a document bag, inspired by @10.1145/1553374.1553534 [@DBLP:journals/corr/abs-1906-04881]. Specifically, we build a relation matrix where each unit represents a relation between two entities belonging to the same bag. Then the bag-level relation matrix is able to capture the dependencies between the relations by the attention mechanism [@NIPS2017_3f5ee243], which allows one relation to focus on other more relevant relations in the text paths by modeling the discourse structure [@FeiMatchStruICML22; @0001RJ20a; @Wu0LZLTJ22].
|
| 19 |
+
|
| 20 |
+
We conduct experiments on the CodRED dataset [@yao-etal-2021-codred]. The results show that our model outperforms the baseline models by a large margin. Our contributions can be summarized as follows:
|
| 21 |
+
|
| 22 |
+
- We apply an entity-based document-context filter to retain useful context information and important bridge entities across the documents.
|
| 23 |
+
|
| 24 |
+
- We propose a cross-path entity relation attention model for cross-document RE, which allows the relation representations across text paths to interact with each other with respect to bridge entities.
|
| 25 |
+
|
| 26 |
+
- We validate the effectiveness of our model, which significantly pushes the state-of-the-art performance for cross-document RE.
|
| 27 |
+
|
| 28 |
+
<figure id="fig:framework" data-latex-placement="!t">
|
| 29 |
+
<embed src="framework_new7.pdf" style="width:100.0%" />
|
| 30 |
+
<figcaption> The overall architecture of our system. (a) utilizes an entity-based document-context filter to select the sentences that are relevant to the target entity pair (cf. Section <a href="#sec:entity_document_filter" data-reference-type="ref" data-reference="sec:entity_document_filter">3.1</a>). (b) yields entity embeddings from contextualized word representations (cf. Section <a href="#sec:encoder" data-reference-type="ref" data-reference="sec:encoder">3.2</a>). (c) leverages the cross-path entity relation attention to capture the connections between the entities and relations of all the paths in the bag (cf. Section <a href="#sec:cross_path_attention" data-reference-type="ref" data-reference="sec:cross_path_attention">3.3</a>). (d) aggregates the predictions of all the paths to get a bag-level prediction. </figcaption>
|
| 31 |
+
</figure>
|
| 32 |
+
|
| 33 |
+
# Method
|
| 34 |
+
|
| 35 |
+
**Task Definition** Given a target entity pair $(e^h, e^t)$ and a bag of $N$ text paths $B=\left\{ p_i \right\}_{i=1}^N$, where each path $p_i$ consists of two documents $(d^h_i, d^t_i)$ mentioning the head entity $e^h$ and the tail entity $e^t$ respectively, the task aims to infer the relation $r$ from $\mathcal{R}$ between the target entity pair, where $\mathcal{R}$ is a pre-defined relation type set. When multiple mentions of one entity (subject to entity ID) appear in the two documents respectively, this entity is said to be shared by the two documents. Note that the two documents in every path may share multiple entities $E^b_i = \{e^b_i\}^M_{i=1}$; in the following, we refer to them as bridge entities.
|
| 36 |
+
|
| 37 |
+
**System Overview** As shown in Fig.[2](#fig:framework){reference-type="ref" reference="fig:framework"}, the model consists of four tiers. First, an entity-based document-context filter receives text paths as inputs, where each of them is composed of two documents. The filter removes less relevant sentences from the text paths and reorganizes the remaining sentences into more compact inputs for subsequent tiers. Afterward, a BERT encoder yields the representations for tokens and entities. Then the cross-path entity relation attention module builds a bag-level entity relation matrix for capturing the global dependencies between the entities and relations in the bag, and outputs the entity relation representations of all text paths. Finally, we use a classifier to aggregate these representations and predict the relation between head and tail entities.
|
| 38 |
+
|
| 39 |
+
Since the average length of a document in CodRED is more than 4,900 tokens and BERT has a length limitation (512 tokens) for input, it is infeasible to handle all sentences in a text path simultaneously if the total length of all the input exceeds the limitation. To solve this problem, we propose an entity-based document-context filter to select salient sentences in a document for each path.
|
| 40 |
+
|
| 41 |
+
<figure id="fig:co-occur" data-latex-placement="!t">
|
| 42 |
+
<embed src="co-occur4.pdf" style="height:28.0%" />
|
| 43 |
+
<figcaption> An example of the co-occurring graph for Path 1 and Path 2 in Fig.<a href="#fig:example" data-reference-type="ref" data-reference="fig:example">1</a>. The score of “<em>Civil War</em>” is obtained by aggregating the scores obtained from three conditions <span class="math inline"><em>Γ</em><sub>1</sub>, <em>Γ</em><sub>2</sub>, <em>Γ</em><sub>3</sub></span> as shown in Equation (<a href="#eq:score_e" data-reference-type="ref" data-reference="eq:score_e">[eq:score_e]</a>). </figcaption>
|
| 44 |
+
</figure>
|
| 45 |
+
|
| 46 |
+
For each path $p$, we have a collection of entities $E^b$ shared by the two documents $(d^h, d^t)$ of this text path. These bridge entities can be utilized as a link in reasoning about the relation between the head and tail entities. Moreover, the bridge entity collections can be regarded as a latent indicator to measure the distribution similarity between different text paths. Thus, we first filter out a number of sentences based on their scores, which are computed by three heuristic conditions. Then we use a semantic-based sentence filter to reorder the selected sentences to construct a coherent document whose length is less than 512 tokens.
|
| 47 |
+
|
| 48 |
+
The basic assumption of this module is that *if a sentence includes entities that co-occur with a target entity, the sentence is informative for relation reasoning*. Thus, our first filtering procedure selects such informative sentences using prior knowledge of the distribution of bridge entities. It consists of three steps:
|
| 49 |
+
|
| 50 |
+
**Step 1:** We calculate the co-occurring score for each bridge entity. We design three heuristic conditions from strong to weak to describe the different levels of co-occurring situations:
|
| 51 |
+
|
| 52 |
+
- **Direct co-occur** $(\Gamma_1)$: Whether it co-occurs with the head/tail entity in the same sentence.
|
| 53 |
+
|
| 54 |
+
- **Indirect co-occur** $(\Gamma_2)$: Whether it co-occurs with another entity that meets the first condition.
|
| 55 |
+
|
| 56 |
+
- **Potential co-occur** $(\Gamma_3)$: Whether it exists in other text paths.
|
| 57 |
+
|
| 58 |
+
Formally, for a bag of $N$ text paths, we score for each bridge entity $e^b$ in each text path $p_i$ by: $$\begin{equation}
|
| 59 |
+
\setlength\abovedisplayskip{3pt}
|
| 60 |
+
\setlength\belowdisplayskip{3pt}
|
| 61 |
+
\label{eq:score_e}
|
| 62 |
+
\text{score}(e^b) = \alpha s_1(e^b) + \beta s_2(e^b) + \gamma s_3(e^b)
|
| 63 |
+
\end{equation}$$ $$\begin{equation}
|
| 64 |
+
\setlength\abovedisplayskip{3pt}
|
| 65 |
+
\setlength\belowdisplayskip{3pt}
|
| 66 |
+
\begin{split}
|
| 67 |
+
s_1(e^b)= \left \{
|
| 68 |
+
\begin{array}{ll}
|
| 69 |
+
1, & \text{if} \ \Gamma_1(e^b)\\
|
| 70 |
+
0, & \text{otherwise}
|
| 71 |
+
\end{array}
|
| 72 |
+
\right.
|
| 73 |
+
\end{split}
|
| 74 |
+
\end{equation}$$ $$\begin{equation}
|
| 75 |
+
\label{eq:s2}
|
| 76 |
+
\setlength\abovedisplayskip{3pt}
|
| 77 |
+
\setlength\belowdisplayskip{3pt}
|
| 78 |
+
\begin{split}
|
| 79 |
+
s_2(e^b)\!=\! \left \{\!
|
| 80 |
+
\begin{array}{ll}
|
| 81 |
+
|\left\{e^o|\Gamma_1(e^o)\land {\rm I}(e^o)=1\right\}|,\!\text{if} \ \Gamma_2(e^b)\\
|
| 82 |
+
0, \qquad \qquad \qquad \qquad \qquad \text{otherwise}
|
| 83 |
+
\end{array}
|
| 84 |
+
\right.
|
| 85 |
+
\end{split}
|
| 86 |
+
\end{equation}$$ $$\begin{equation}
|
| 87 |
+
\label{eq:s3}
|
| 88 |
+
\setlength\abovedisplayskip{3pt}
|
| 89 |
+
\setlength\belowdisplayskip{3pt}
|
| 90 |
+
\begin{split}
|
| 91 |
+
s_3(e^b)= \left \{
|
| 92 |
+
\begin{array}{ll}
|
| 93 |
+
|\left\{p_j| e^b \in {\rm E}_{j}^{b}\right\}|, & \text{if} \ \Gamma_3(e^b)\\
|
| 94 |
+
0, & \text{otherwise}
|
| 95 |
+
\end{array}
|
| 96 |
+
\right.
|
| 97 |
+
\end{split}
|
| 98 |
+
\end{equation}$$ where $\alpha, \beta, \gamma$ are hyper-parameters. ${\rm I}(e^o)=1$ if $e^o$ and $e^b$ co-occur in the same sentence, where $e^o \in E_i^b \setminus \left\{e^b\right\}$. Equation ([\[eq:s2\]](#eq:s2){reference-type="ref" reference="eq:s2"}) counts the number of such $e^o$, and Equation ([\[eq:s3\]](#eq:s3){reference-type="ref" reference="eq:s3"}) counts the number of such $p_j$.
|
| 99 |
+
|
| 100 |
+
**Step 2:** We compute the importance score $g^s$ of each sentence $s$ by summing the scores of all the bridge entities it contains: $$\begin{equation}
|
| 101 |
+
\setlength\abovedisplayskip{3pt}
|
| 102 |
+
\setlength\belowdisplayskip{3pt}
|
| 103 |
+
g^s = \sum_{e^b \in {E}^{b}_s} \text{score}(e^b)
|
| 104 |
+
\end{equation}$$ where ${E}^{b}_s$ denotes the bridge entities mentioned in the sentence $s$.
|
| 105 |
+
|
| 106 |
+
**Step 3:** We rank the sentences by their importance scores in descending order and select the top $K$ sentences as the candidate set $S=\left\{s_1,s_2,...,s_K \right\}$, where $K$ is a hyper-parameter. In our implementation, the candidate set size $K$ is set to 16 based on experiments on the development set. If several sentences have the same score, their priority is determined by their distances to the sentence with the highest score.
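To make the three filtering steps concrete, the following Python sketch shows one possible implementation. The data structures (`paths`, per-sentence entity sets), the weights `alpha`, `beta`, `gamma`, and the default `K` are illustrative assumptions rather than the authors' released code, and the distance-based tie-breaking of Step 3 is omitted for brevity.

```python
def score_bridge_entities(paths, i, head, tail, alpha, beta, gamma):
    """Step 1 sketch: score every bridge entity of path i with the three heuristic
    co-occurrence conditions. Sentences are modeled as sets of entity IDs."""
    sents = paths[i]["sentences"]                 # list[set[str]]
    bridge = paths[i]["bridge_entities"]          # set[str]
    # Gamma_1: bridge entities that share a sentence with the head or tail entity.
    direct = {e for s in sents if head in s or tail in s for e in s if e in bridge}
    scores = {}
    for e in bridge:
        s1 = 1.0 if e in direct else 0.0
        # Gamma_2: number of Gamma_1 entities co-occurring with e in some sentence.
        s2 = len({o for s in sents if e in s for o in s if o != e and o in direct})
        # Gamma_3: number of other paths whose bridge-entity set also contains e.
        s3 = sum(1 for j, p in enumerate(paths) if j != i and e in p["bridge_entities"])
        scores[e] = alpha * s1 + beta * s2 + gamma * s3
    return scores


def top_k_sentences(path, entity_scores, k=16):
    """Steps 2-3 sketch: score each sentence by its bridge entities and keep the top K."""
    g = [sum(entity_scores.get(e, 0.0) for e in s) for s in path["sentences"]]
    order = sorted(range(len(g)), key=lambda idx: -g[idx])
    return [path["sentences"][idx] for idx in order[:k]]
```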
|
| 107 |
+
|
| 108 |
+
After the entity-based sentence filtering, we take the semantic relevance of sentences into account to further filter and reorder the candidate sentences, with the assumption that *if a sentence is semantically similar to a sentence containing the target entities, it should be more informative for relation reasoning*. The goal of this step is to yield the most informative context $S^*$ from the candidate sentence set $S$ for reasoning about the relation between the target entities.
|
| 109 |
+
|
| 110 |
+
The procedure of semantic-based sentence filtering is summarized in Algorithm [\[Alg: SentFilter\]](#Alg: SentFilter){reference-type="ref" reference="Alg: SentFilter"}, which constructs the sequence $S^*$ from the candidate sentence set. Besides the candidate set $S$, the head entity $h$ and the tail entity $t$, the inputs of the algorithm also include a start set $S_{start}$ and an end set $S_{end}$ that consist of all the sentences containing the head and the tail entity, respectively. At the beginning of the algorithm, we first randomly select a sentence from $S_{start}$ (line 1). Then we search for the sentence most relevant to it and append that sentence to the output $S^*$. We repeat this process until the currently selected sentence includes the tail entity (lines 3-12). Finally, we obtain the sequence $S^*$ with $K^*$ sentences, where $K^* \leq K$. Specifically, we use the cosine similarity calculated with SBERT-WK [@9140343] to measure the semantic relevance between two sentences. If the length of the sequence $S^*$ is larger than 512 tokens, we keep dropping the sentences with lower similarity scores until the length of the sequence meets the input limit of BERT.
|
| 111 |
+
|
| 112 |
+
:::: algorithm
|
| 113 |
+
::: algorithmic
|
| 114 |
+
**Input:** Candidate set $S=\left\{s_1,s_2,...,s_K \right\}$; head entity $h$; tail entity $t$; start set $S_{start}=\left\{s_i|h \in s_i \right\}$; end set $S_{end}=\left\{s_j|t \in s_j \right\}$.
**Output:** Sequential sentences $S^*$.
**Initialize:** $S^*=\left[\ \right]$; $cur={\rm Random}(S_{start}, 1)$; $next=\phi$; $max=0$.
|
| 115 |
+
|
| 116 |
+
**while** $\left\{cur\right\} \cap S_{end} = \emptyset$ **do**
$S = S-\left\{cur\right\}$
**for** $s_i \in S$ **do**
**if** Sim($cur, s_i$) $> max$ **then** $max \gets$ Sim($cur, s_i$); $next \gets s_i$
**else** continue
$S^* =$ Append($S^*, next$); $cur \gets next$
**return** $S^*$
|
| 117 |
+
:::
|
| 118 |
+
::::
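As a rough illustration of Algorithm [\[Alg: SentFilter\]](#Alg: SentFilter){reference-type="ref" reference="Alg: SentFilter"}, the sketch below greedily chains candidate sentences by the cosine similarity of their embeddings. The embeddings `emb` are assumed to be precomputed (e.g., with SBERT-WK), and the index sets and random start are simplified, so this is a hedged sketch rather than the exact released implementation.

```python
import numpy as np

def order_sentences(cands, emb, start_ids, end_ids):
    """Greedy sketch of the semantic sentence filter: start from a sentence containing
    the head entity, repeatedly append the most similar remaining candidate, and stop
    once a sentence containing the tail entity is reached. All arguments hold sentence
    indices; assumes at least one start sentence is among the candidates."""
    def cos(i, j):
        return float(emb[i] @ emb[j] /
                     (np.linalg.norm(emb[i]) * np.linalg.norm(emb[j]) + 1e-12))

    remaining = set(cands)
    cur = next(iter(start_ids & remaining))   # stand-in for the random start pick
    ordered = [cur]
    remaining.discard(cur)
    while cur not in end_ids and remaining:
        cur = max(remaining, key=lambda j: cos(ordered[-1], j))
        ordered.append(cur)
        remaining.discard(cur)
    return ordered
```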
|
| 119 |
+
|
| 120 |
+
After input construction, we have the filtered sentence set $S^*$ for each text path. We concatenate the sentences in $S^*$ to build the input of our model as $X=\left\{ w_{i} \right\}_{i=1}^L$. Following @yao-etal-2021-codred, we apply unused tokens in the BERT vocabulary [@DBLP:conf/naacl/DevlinCLT19] to mark the start and end of every entity. Then we leverage BERT as the encoder to yield token representations: $$\begin{equation}
|
| 121 |
+
\setlength\abovedisplayskip{3pt}
|
| 122 |
+
\setlength\belowdisplayskip{3pt}
|
| 123 |
+
\left\{ \bm{h}_i \right\}_{i=1}^L={\rm BERT} (\left\{ w_{i} \right\}_{i=1}^L)
|
| 124 |
+
\end{equation}$$ Based on $\left\{ \bm{h}_{i} \right\}_{i=1}^L$, we can obtain the entity representations with the max-pooling operation: $$\begin{equation}
|
| 125 |
+
\setlength\abovedisplayskip{3pt}
|
| 126 |
+
\setlength\belowdisplayskip{3pt}
|
| 127 |
+
\bm{e}_j={\rm Max}\left\{\bm{h}_{i} \right\}_{j=start_j}^{end_j}
|
| 128 |
+
\end{equation}$$ where $start_j$ and $end_j$ are the start and end positions of the $j$-th mention.
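For illustration, the mention embeddings can be derived from the token representations as in the short sketch below; here `token_reps` stands in for the BERT output, and the span layout is an assumption about the data format rather than the authors' exact code.

```python
import numpy as np

def entity_embeddings(token_reps, spans):
    """Sketch: obtain mention embeddings by max-pooling the contextualized token
    representations inside each mention span. `token_reps` has shape (L, d);
    `spans` lists (start_j, end_j) token indices for each mention."""
    return np.stack([token_reps[s:e + 1].max(axis=0) for s, e in spans])
```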
|
| 129 |
+
|
| 130 |
+
Since prior studies only treated each text path as an independent instance, the rich information across text paths was ignored. Therefore, we aim to mine this information. Inspired by @DBLP:journals/corr/abs-2006-03719 and @DBLP:conf/ijcai/ZhangCXDTCHSC21, we introduce a cross-path entity relation attention module based on the Transformer [@NIPS2017_3f5ee243] to capture the inter-dependencies among the relations across paths.
|
| 131 |
+
|
| 132 |
+
Concretely, we first collect all the entity mention representations in a bag and then generate relation representations for entity pairs: $$\begin{equation}
|
| 133 |
+
\setlength\abovedisplayskip{3pt}
|
| 134 |
+
\setlength\belowdisplayskip{3pt}
|
| 135 |
+
\bm{r}_{u,v}={\rm ReLU}({\bm{W}_r}({\rm ReLU}({\bm{W}_u}{\bm{e}_u}+{\bm{W}_v}{\bm{e}_v})))
|
| 136 |
+
\end{equation}$$ where $\bm{W}_r$, $\bm{W}_u$, $\bm{W}_v$ are learnable parameters. Afterward, we extend the relation matrix proposed by @DBLP:journals/corr/abs-2006-03719 to the bag level, as shown in Fig. [2](#fig:framework){reference-type="ref" reference="fig:framework"}(c). In order to model the interactions among relations across paths, we build a relation matrix $\bm{M} \in \mathbb{R}^{|E| \times |E| \times d}$, where $E=\bigcup_{i=1}^{N}E_i$ is the union of the entity sets $E_i$ of all text paths $p_i$, with $E_i = \{e_i^h, e_i^t\} \cup E^b_i$.
|
| 137 |
+
|
| 138 |
+
To capture the intra- and inter-path dependencies, we leverage a multi-layer Transformer [@NIPS2017_3f5ee243] to perform self-attention on the flattened relation matrix $\hat{\bm{M}} \in \mathbb{R}^{|E|^2 \times d}$: $$\begin{equation}
|
| 139 |
+
\setlength\abovedisplayskip{3pt}
|
| 140 |
+
\setlength\belowdisplayskip{0pt}
|
| 141 |
+
\hat{\bm{M}}^{(t+1)} = {\rm Transformer}(\hat{\bm{M}}^{(t)})
|
| 142 |
+
%{\cal \hat{M}}^{(t+1)} = {\rm SelfAtt}({\cal \hat{M}}^{(t)})
|
| 143 |
+
\end{equation}$$
|
| 144 |
+
|
| 145 |
+
Finally, we obtain the target relation representation ${\bm r}_{h_i,t_i}$ for each path $p_i$ from the last layer of the Transformer, as shown in Figure [2](#fig:framework){reference-type="ref" reference="fig:framework"}(c).
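A hedged sketch of this cross-path entity relation attention module is given below. It builds the pairwise relation representations, flattens the $|E| \times |E|$ matrix into a sequence, and runs a standard Transformer encoder over it; the layer sizes and the use of `torch.nn.TransformerEncoder` are illustrative choices, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class CrossPathRelationAttention(nn.Module):
    """Sketch: build the bag-level relation matrix from entity embeddings, flatten it
    into a sequence of |E|^2 relation representations, and let a Transformer encoder
    model intra- and inter-path dependencies. Layer sizes are placeholders."""
    def __init__(self, d_model=256, nhead=4, num_layers=2):
        super().__init__()
        self.w_u = nn.Linear(d_model, d_model)
        self.w_v = nn.Linear(d_model, d_model)
        self.w_r = nn.Linear(d_model, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)

    def forward(self, entities):                     # entities: (|E|, d)
        e_u = self.w_u(entities).unsqueeze(1)        # (|E|, 1, d)
        e_v = self.w_v(entities).unsqueeze(0)        # (1, |E|, d)
        rel = torch.relu(self.w_r(torch.relu(e_u + e_v)))   # relation matrix M
        n, d = entities.shape
        out = self.encoder(rel.reshape(1, n * n, d)) # self-attention over all relations
        return out.reshape(n, n, d)                  # out[h, t] is r_{h, t}
```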
|
| 146 |
+
|
| 147 |
+
Afterwards, we obtain the relation representation ${\bm r}_{h_i,t_i}$ between the target entity pair from each text path $p_i$. We then use ${\bm r}_{h_i,t_i}$ as the classification feature and feed it into an MLP classifier to calculate the score of each relation: $$\begin{equation}
|
| 148 |
+
\setlength\abovedisplayskip{3pt}
|
| 149 |
+
\setlength\belowdisplayskip{3pt}
|
| 150 |
+
%{\rm Logits}(p_i) = {\rm MLP}(r_{ht}^i)
|
| 151 |
+
%{\cal \hat{Y}}_i= {\rm MLP}(r_{ht}^i)
|
| 152 |
+
{\hat{y}}_i= {\rm MLP}({\bm r}_{h_i,t_i})
|
| 153 |
+
\end{equation}$$ To get the bag-level prediction, we apply the max-pooling operation to each relation label to yield the final score for each relation type $r$: $$\begin{equation}
|
| 154 |
+
\setlength\abovedisplayskip{3pt}
|
| 155 |
+
\setlength\belowdisplayskip{3pt}
|
| 156 |
+
%{\rm Logits}(B) = {\rm Max}\left\{ {\rm Logits}(p_i) \right\}_{i=1}^N
|
| 157 |
+
%{\cal \hat{Y}}^{(r)} = {\rm Max}\left\{ {\cal \hat{Y}}_i^{(r)} \right\}_{i=1}^N
|
| 158 |
+
{\hat{y}}^{(r)} = {\rm Max}\left\{ {\hat{y}}_i^{(r)} \right\}_{i=1}^N
|
| 159 |
+
\end{equation}$$
|
| 160 |
+
|
| 161 |
+
After obtaining the scores for all relations, we utilize a global threshold $\theta$, which will be stated in Section [3.5](#sec:loss fuction){reference-type="ref" reference="sec:loss fuction"}, to filter out the categories lower than the threshold.
|
| 162 |
+
|
| 163 |
+
Since some bags have multiple relation labels, we adopt a multi-label global-threshold loss, a variant of the circle loss [@circleloss], as our loss function. To this end, we introduce an additional threshold that controls which classes should be output: the scores of the target classes should be greater than the threshold, and the scores of the non-target classes should be less than it. Formally, for each bag $B$, we have: $$\begin{equation}
|
| 164 |
+
\setlength\abovedisplayskip{3pt}
|
| 165 |
+
\setlength\belowdisplayskip{3pt}
|
| 166 |
+
\begin{split}
|
| 167 |
+
% \mathcal{L}_B = &log(e^\theta + \sum_{r\in \Omega_{neg}^B} e^{{\cal \hat{Y}}^{(r)}}) \\
|
| 168 |
+
% &+log(e^{-\theta} + \sum_{r\in \Omega_{pos}^B} e^{{\cal \hat{Y}}^{(r)}})
|
| 169 |
+
\mathcal{L} = \ &log(e^\theta + \sum_{r\in \Omega_{neg}^B} e^{{\bm{\hat{y}}}^{(r)}}) \\
|
| 170 |
+
    &+log(e^{-\theta} + \sum_{r\in \Omega_{pos}^B} e^{-{\bm{\hat{y}}}^{(r)}})
|
| 171 |
+
\end{split}
|
| 172 |
+
\end{equation}$$ where ${\bm{\hat{y}}}^{(r)}$ denotes the score for the relation $r$, $\theta$ denotes the threshold and is set to zero, and $\Omega_{pos}^B$ and $\Omega_{neg}^B$ are the sets of positive and negative relation classes for the target entity pair.
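For reference, the multi-label global-threshold loss can be sketched as follows. The sign convention (positive scores enter with a negative exponent so that they are pushed above the threshold) follows the circle-loss variant described above, and the tensor layout is an assumption.

```python
import torch

def global_threshold_loss(scores, pos_mask, theta=0.0):
    """Sketch of the multi-label global-threshold (circle-loss variant) objective.
    `scores` holds the bag-level scores y_hat^(r) for all relation types; `pos_mask`
    is a boolean tensor marking the gold relations of the bag."""
    pos, neg = scores[pos_mask], scores[~pos_mask]
    t = scores.new_tensor(theta).view(1)
    loss_neg = torch.logsumexp(torch.cat([t, neg]), dim=0)     # log(e^theta + sum e^{y_neg})
    loss_pos = torch.logsumexp(torch.cat([-t, -pos]), dim=0)   # log(e^{-theta} + sum e^{-y_pos})
    return loss_neg + loss_pos
```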
|
2211.11979/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2022-11-21T10:23:47.001Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36" version="20.4.0" etag="NyDs_7yTrbDMijIcz6Pd" type="google"><diagram id="c3TvqPgj_etSs4XCPg66">7Vxdc5s4FP01fiwj9AU8Nk7S7kzbzbQPu/vUkY1ss8XgAblx9tevZMAGSY6JLVKniaczBSEJce7RvVcHkREaLzcfCrZafM5jno4giDcjdD2C0AcolP+pkoe6hIRRVTIvkrgu2xd8S/7jTdO6dJ3EvOxUFHmeimTVLZzmWcanolPGiiK/71ab5Wn3ris250bBtylLzdK/klgsqtKQgH35R57MF82dfVBfWbKmct1FuWBxfl8VbeugmxEaF3kuqqPlZsxThV6DS9XR7YGru4EVPBN9GsCqwU+Wrutnq8clHpqHLfJ1FnNV3x+hq/tFIvi3FZuqq/fSvrJsIZZpfbkURf5jB4oqiVm52LWWBhEsyXghz4E8nyVpOs7TvNjeCt2G45vxeNdN60pMJ5TQbQ9pylZlMtkOT/VRPwEvBN8cRMHfYStZyfMlF8WDrFI3ICSomtSEfEfCwCNV0f3ewLAx46Jl3KCux2pOzXe972GXBzXydisgixUoWypss0mp/vv49fN5htGQns04nU6tSAfRBLjCNQAarpR60AQWBSauGHoOkMUGsu/dAnl7G8mfDchJSDBxBGQAdIJSE0VsYacbFMlxL8Gz+L3yrPJsmrKyTKZd2LoY800i/m4d/6PmshxpdXa9qaf29uShOcnkqP9un7RaqdN9s+1Z064aKo8Nl34U/Ba2xIJtU1bwlInkZ7d7G9z1He7yRN54Z1tq2BZrJivzdTHldbO2M9d7wsd6EqyYc2H0tLX/7sF7UYK+UWJASgSuKGE6juEoEbxRYjhKBObcRidSIjR6MoKEO1KEx0nRsr8KkonMsz+xCU/v8jIRSZ7Ja5NciHwpK7A0mauCqbSKyiKvUlXzik1/zLfc6aQ46tfq9H3dVuQra6ZqRHCAAfWv1JUmSwfthBZUV1bqMZabuVrreEleBl4i09zSY3HsJvibbp0CI/gjC/WasnNCf/RmP/dZMICW1cVQFmwWzQP4Zc9HqOubafC4d5Ynd7xI5EPs1oB7j+3hCHW9dhQGR/z29kzv8oAzrzxkh9eVo+u4qotx+dTXeIOp4ah7p4ZU56Df1+lLYrCHVrWVqlA+MmwKtWGHRONr1eWpIcX3LYTG6h8ZJ5m88HVEbkbB1Rd5KIuEpIYcLoi/r0bBdV1TnwBydgubktF4kyzPuLb4q4sMj6a7q2USx+o21oXlfmJ1XJPvKHIY2V9kRg7iD+R3bNpSHzPJsnwtlK1embkC6mvmCswwMZi5bCLUOULJeVKgDP3wgEA1qBToh5qjpMScMjCwCFY7dfAsK5iC1fVDxpYyIENwxwpWhTkIPnFWZEk2l4ef83gtITjLWBr63I8JD2zoRzRAjLrBGgJ0nO8+tERY6gJqU9W6K/J/+XSbs6q70lQ5mYmEm87VUT6TxbeciXXByxeINzKSAAveARoIbqeKUZNM/mJ5oJ1RNlO3nVI2HLuYnBJhbcpheGJGiSK9J/1NiDsRwX+TlgbkBEa6G8bk1HUGDvS+ItCLFU9dZWBiDvrxkRkNos6r0fOXJT2krhdH0o6DuzRnhnVnRpBD4oJhlscOiAscr6d7aHxPI24n0d8x8h3wANAEHnqMl0/Rdy6GmAjridap729wAD2Kwe6nLRBxRAchqfEAzY1ccQ46FyXtnNO8IHzpsdolszS3QgdyeMaQqWP/BW164Fkri0tngU+gxgISnsYCcwmu9+Qui2/M8prthE6craad9J4c2smmBPYRbmevTbD1kW7fZ9TXLXvb+pjpy5uRLK/PBzOSKTLKhxcJU5VMhXEAKTcmPIyxTVoM4QRRR9KimU34th20xKKbuxAXLdvRvq34VBTPiDPj4cz6woJOQz6ZDYQzQba9BIPh/CbEDZguEKQn9yE9Vc8wN51goy9HCT4hB4d9cHRGE6w1OX9VcBFy3MujHDh5j4mFckPtMTH5sxv2EygHXVPOuZDWUA54URR0aEchetxHWnZaWTKrC2YmjB5VvsLQ8/1o/3PEWdXvYCsrZFO9fr+dS0UuWL3x813k6lOifT75K/J49LbnrG+GanxL95xmsklMFjMpreLVGwporo+iZzRUj91mZ+j/lx7c/MhIRjRQewcww4pQ68lRymUMmWLwxJF1G5ydbiGbAPaaOaRPzN4cMjaaB8/EIXyMQ8bIXHPI9mmrCgIfZKAQ3zcXHREcBAEfQI1EYd9vU3wtTT4pDNg2Be7hZ68PfvKs8DuX8w68hfdw2JbqfA8glx/2tLdcRhfmps1NShie5qYpDrwg0j2177UWyUAbnqs9S8au0frF5kG/HeJHG5zvt52qey9BqaNhqEOKT1XqAni8L4eqh1NZ7MUJWabhwpN3KZqGM/ty9Rnf4WG7msTNmqdFjK+8TOI1S83PK36bmB/4GrD2z4EjCyNdrLyxqWT9ka3W0vK3f67F9uC3Rb5xPPvoOSDy8nT/d7yq2bH/c2jo5n8=</diagram></mxfile>
|
2211.11979/main_diagram/main_diagram.pdf
ADDED
|
Binary file (41.1 kB). View file
|
|
|
2211.11979/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,106 @@
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Recently there has been tremendous progress in the domain of Graph Representation Learning [@khoshraftar2022survey]. The aim here is to develop novel methods to learn the features of graphs in a vector space. Such approaches have successful applications in the domain of image recognition [@han2022vision], computational chemistry [@DBLP:conf/nips/YingCLZKHSL21], drug discovery [@DBLP:journals/corr/abs-2202-05146] and Natural Language processing [@10.1145/3404835.3462809]. Although effective, the underlying graphs are static in nature.
|
| 4 |
+
|
| 5 |
+
In many real-world scenarios, graphs are dynamic, for example, social networks, citation graphs, bank transactions, etc. For such cases, various approaches have been developed (see the survey by @kazemi2020representation). Broadly, these methods aim to learn the evolving nature of graphs through spatial features relying on local neighborhood aggregation (local dependencies) [@evolvegcn; @DBLP:journals/kbs/GoyalCC20]. For example, researchers [@evolvegcn; @shi2021gaen] have resorted to using GNNs along with RNNs to capture the dynamic evolving nature of graphs. Despite initial successes, these methods are inherently limited to capturing local interactions, missing out on other important information. For example, in a dynamic money-transaction graph where fraudulent users are a small, skewed minority among genuine users, the links between fraudulent and genuine users give rise to high-frequency signals. With local neighborhood aggregation (attending to low-frequency signals), the majority (genuine) node pattern will cause the fraudulent pattern to diminish. Thus, capturing global properties becomes necessary: the global information would help identify the fraud pattern and eventually assist in identifying the criminal. Similarly, in a citation graph, local properties help to understand the category of a paper, whereas global properties help to understand the amount of interdisciplinary research across research domains. Hence, learning global dependencies is crucial for dynamic graphs, yet it remains a relatively unexplored area in the literature.
|
| 6 |
+
|
| 7 |
+
<figure id="fig:wavelets_motivation" data-latex-placement="ht!">
|
| 8 |
+
<img src="images/sgw_wavelets_vis-Page-3.drawio.png" style="width:88.0%" />
|
| 9 |
+
<figcaption>For the Brain dataset <span class="citation" data-cites="DBLP:conf/icdm/XuCLGLNZC019"></span> at a given timestep, figure (a) shows the filter functions learned at different scales by our proposed framework, resembling band-reject, low-pass and all-pass filters. The three diagrams to the right show the corresponding wavelets for a node in the brain stem region (colored in yellow, at the lowest tip of the brain, in figures (b, c, d)). The graph nodes indicate the Regions of Interest (ROI) in the brain. As the scale changes, the ROIs that the wavelet focuses on change from concentrated (local interactions) to diffused (global interactions). Moreover, the wavelets respect the brain structure and focus on the concerned region (the brain stem in this case), thus mitigating noise due to interactions from unrelated regions.</figcaption>
|
| 10 |
+
</figure>
|
| 11 |
+
|
| 12 |
+
In this paper we aim to encompass global dependencies in dynamic graphs, beyond local neighborhood aggregation, viewing the problem through the lens of spectral graph theory [@hammond2019spectral]. To this end, we introduce the novel concept of *learnable spectral graph wavelets* to capture global interactions in dynamic graphs. The concept of learnable wavelets has the following motivation: (i) Computing the spectrum using a full eigendecomposition is computationally expensive, requiring $\mathcal{O}(N^3)$ time; moreover, the cost grows by a factor of the number of timesteps ($T$) for dynamic graphs that evolve at every timestep, whereas spectral graph wavelets can be computed efficiently in $\mathcal{O}(N+|\mathcal{E}|)$, as we shall see in the following sections. (ii) Wavelets are sparser than the graph Fourier transform, adding to the computational benefits [@tremblay2014graph]. (iii) Wavelets give a sense of *localization* in the *vertex domain* of the graph, thus enabling interpretability of the convolutions, while also capturing the global properties of the graph by changing the scale parameter (cf. Figure [1](#fig:wavelets_motivation){reference-type="ref" reference="fig:wavelets_motivation"}). (iv) Focusing on dynamic learnable wavelets helps capture global properties of the evolving spectra as the graph changes with time, where static and non-learnable wavelet methods [@xu2018graph] show empirical limitations.
|
| 13 |
+
|
| 14 |
+
Furthermore, building on the recent success of capturing neighborhood features for evolving graphs, we propose to learn a *homogeneous representation* of spatial and spectral features to apprehend both local and global dependencies. Our approach is intuitive, as it keeps the proven local information intact whilst adding global properties through the learnable graph wavelet approach. Similar approaches have been proven in computer vision [@srinivas2021bottleneck] and NLP [@prasad2019glocal], where such a restrictive inductive bias works well for learning local properties but misses out on global interactions. Also, local aggregation leads to problems such as over-smoothing and may not work well on heterophilic graphs [@wu2021representing]. Thus, using our method helps alleviate the drawbacks of the existing popular methods on dynamic graphs.
|
| 15 |
+
|
| 16 |
+
Our key contributions are two-fold: 1) a novel approach to *learning spectral wavelets* on dynamic graphs for capturing global dependencies (with its theoretical foundations), 2) a novel framework named DEFT that combines spectral features obtained using learnable wavelets with spatial features of the evolving graphs. For effective use in downstream tasks, DEFT integrates the spatial and spectral features into *homogeneous representations*, which allows capturing shift invariance [@oppenheim1975digital] among node features that could arise from the temporal nature.
|
| 17 |
+
|
| 18 |
+
# Method
|
| 19 |
+
|
| 20 |
+
Consider a graph with vertices and edges as $(\mathcal{V},\mathcal{E})$ and adjacency matrix $A$. The laplacian ($L=D-A$) can be decomposed into its orthogonal basis, namely the eigenvectors ($U$) and eigenvalues ($\Lambda$), as $L = U \Lambda U^{*}$. Let $X \in R^{N \times d}$ be the signal on the nodes of the graph. The Fourier transform $\hat{X}$ of $X$ is then given as $\hat{X} = U^{*} X$.
|
| 21 |
+
|
| 22 |
+
Spectral graph wavelet transforms [@hammond2011wavelets] are obtained by functions of the laplacian $L$. Consider a linear self-adjoint operator ($g(L)$) that acts on each component in the spectral space. We define a parameter $s$ for the scaling in the spectral space. The spectral graph wavelet at any given vertex $n$ is defined as the impulse ($\delta_n$) response of the wavelet operator at vertex $n$: $$\psi_{s,n}(m) = \sum_{k=1}^{N} g(s\lambda_k) U_k^{*}(n) U_k(m)$$ The $n$-th wavelet coefficient at scale $s$, represented as $W_f(s,n)$, can be obtained by taking the inner product of the function $f$ in the *vertex domain* with these wavelets as $$\begin{equation}
|
| 23 |
+
\label{wavelet_coeff_prelim}
|
| 24 |
+
W_f(s,n) = \left< \psi_{s,n}, f \right> = \sum_{k=1}^{N} g(s\lambda_k) \hat{f}(k) U_k(n)
|
| 25 |
+
\end{equation}$$
|
| 26 |
+
|
| 27 |
+
In our work, we propose to learn the wavelet coefficients for dynamic graphs, where the exact form of the scaling function $g(s \lambda_k)$ is parameterized.
|
| 28 |
+
|
| 29 |
+
<figure id="fig:overall_architecture" data-latex-placement="ht">
|
| 30 |
+
|
| 31 |
+
<figcaption>Figure (a) shows the DEFT architecture. The yellow outer box shows the modules that learn the weight parameters of the GNNs in an evolving manner. These parameters are then given to the spectral and spatial modules to learn the corresponding features. The learned features then propagate to the <em>homogeneous representation</em> module (HRM) followed by the aggregation module (AM), which together form the integration module (red outer box) of our framework. Figure (b) explains the components of the spectral module. Similar to figure (a), the yellow outer box learns the filter functions for the wavelet approximators.</figcaption>
|
| 32 |
+
</figure>
|
| 33 |
+
|
| 34 |
+
Figure [2](#fig:overall_architecture){reference-type="ref" reference="fig:overall_architecture"} illustrates the proposed DEFT (Dynamic wavElets For global inTeractions) framework that comprises the following modules: 1) Spectral component: focuses on global features of the graph in the form of learnable spectral graph wavelets. 2) Spatial component: the necessity of this component is to mitigate the unsmooth spectrum phenomenon, i.e., the node features get correlated with eigenvectors corresponding to high frequencies, causing information loss [@yang2022new]. To resolve this, the spatial module focuses on the local neighborhood of the graph in addition to what is captured by the wavelets. 3) Integration module: finally, for a *homogeneous representation* of the global properties captured by the spectral component and the local properties learned by the spatial component, we propose a module that learns invariant representations and uses these in an aggregation module for downstream tasks.
|
| 35 |
+
|
| 36 |
+
We aim to capture global features without requiring the full eigendecomposition of the laplacian. Here, we propose to approximate the wavelet operator with a truncated polynomial expansion; we use Chebyshev polynomials to be consistent with the literature [@hammond2011wavelets]. It is known that for any function $h$ with domain $y \in [-1,1]$ there exists a convergent Chebyshev series: $$\begin{equation*}
|
| 37 |
+
h(y) = \sum_{k=0}^{\infty} c_{k} T_k(x)
|
| 38 |
+
\end{equation*}$$ with the Chebyshev polynomials satisfying the recurrence relation $T_k(y)=2yT_{k-1}(y)-T_{k-2}(y), T_0=1, T_1=y$ and the coefficients $c_{k}$ are given by: $$\begin{equation*}
|
| 39 |
+
c_{k} = \frac{2}{\pi} \int_{-1}^{1} \frac{T_k(y)h(y)}{\sqrt{1-y^2}} dy = \frac{2}{\pi} \int_{0}^{\pi} cos(k\theta) h(cos(\theta)) d\theta
|
| 40 |
+
\end{equation*}$$ In order to approximate the function $g(s_j x)$ for each scale $j$, we need to bring the domain $x$ into $[-1,1]$. Noting that $0 \leq x \leq \lambda_{max}$ for the laplacian $L$, we perform the transform $y=\frac{x-a}{a}, a=\frac{\lambda_{max}}{2}$. We now define $\overline{T_k}(x) = T_k(y) = T_k(\frac{x-a}{a})$, and the approximation for $g$ reads as follows $$\begin{equation}
|
| 41 |
+
\label{eq_chebyshev_approx}
|
| 42 |
+
g(s_j x) = \sum_{k=0}^{\infty} c_{j,k} \overline{T_k}(x)
|
| 43 |
+
\end{equation}$$ with the coefficients given by $$c_{j,k} = \frac{2}{\pi} \int_{0}^{\pi} cos(k\theta) g(s_j (a(cos(\theta)+1))) d\theta$$ We truncate the polynomial to $M$ terms, which is the filter order. The coefficients $c_{j,k}$, i.e., the analytical coefficients of the desired filter function, are approximated using functions parameterized by GNNs and MLPs, as we shall see next; $f_c^s$ denotes this parameterized form, obtained by the spectral module at scale $s$. A GNN is used to perform message passing over the input graph along with the node features $v$ at layer $l$ for neighborhood $\mathcal{N}$. $$\begin{equation*}
|
| 44 |
+
v_{im}^{l} = A_f({v_j^l | v_j \in \mathcal{N}(v_i)}), \ \ \ \ v_{i}^{l+1} = U_f(v_{i}^{l},v_{im}^{l})
|
| 45 |
+
\end{equation*}$$ Here, $A_f, U_f$ are the aggregation and update functions, respectively. The update function could contain a non-linearity such as leaky ReLU for better expressivity of the learned function. Since we intend to learn the filter coefficients $f_c \in R^M$ for the concerned graph ($G$), we apply a pooling layer to get an intermediate vector representation ($v_g \in R^{d_1}$) from the output of the GNN. The pooling layer converts a set of vectors (one for each node of the graph) into a single vector representation. For the final filter coefficients $f_c$, we apply a two-layer MLP with activation ($\sigma$) to $v_g$: $$\begin{equation}
|
| 46 |
+
f_c = W_2 \ \ \sigma \left( W_1 v_g \right) \\
|
| 47 |
+
\end{equation}$$ where $W_1 \in R^{d_2 \times d_1}, W_2 \in R^{d_2 \times M}$ are learnable weights. Since the two-layer MLP is a universal approximator [@hornik1991approximation], we can be assured of the existence of a function in this space that learns the desired mapping to the filter coefficients. In principle, any message passing GNN can be used to perform the update and aggregation steps. This process can be repeated with multiple GNNs for learning multiple filter functions. As we consider dynamic graphs, we would like to evolve the parameters of the GNN with time (Dynamic Parameter learning Module of Figure [2](#fig:overall_architecture){reference-type="ref" reference="fig:overall_architecture"}, common for both spectral and spatial components). Inspired by [@evolvegcn], we use an RNN module for generating the parameters for the GNN in layer $l$ at time $t$: $$\begin{equation}
|
| 48 |
+
W_t^l = RNN( H_{t}^{l}, W_{t-1}^{l} )
|
| 49 |
+
\end{equation}$$ where $W_t^l$ and $H_{t}^{l}$ are the hidden state and input at layer $l$ and time $t$ of the RNN. In the pseudo code below, we outline our method to evolve the spectral module ($ESpectral$) for dynamically learning the filter coefficients at each timestep.
|
| 50 |
+
|
| 51 |
+
::: algorithmic
|
| 52 |
+
$W_t^{(l)} = RNN(W_{t-1}^{(l)})$ $H_t^{(l+1)} = GNN(A_t, H_t^{(l)}, W_t^{(l)})$ $v_{g_t} = Pool(A_t, H_t^{(l+1)})$ $f_{c_t} = W_2 \ \ \sigma \left( W_1 v_{g_t} \right)$
|
| 53 |
+
:::
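The following is a minimal, illustrative PyTorch-style sketch of one $ESpectral$ step under simplifying assumptions (a GRU cell evolving a single flattened GNN weight, one propagation step with a precomputed normalized adjacency, and mean pooling); the actual module may use different GNN/RNN choices and dimensions.

```python
import torch
import torch.nn as nn

class ESpectral(nn.Module):
    """Sketch of one evolving-spectral step: a GRU cell evolves a flattened GNN weight
    matrix, one neighborhood-aggregation step produces node states, mean pooling gives
    a graph vector, and a two-layer MLP maps it to M Chebyshev filter coefficients."""
    def __init__(self, d_in=32, d_hid=32, num_coeffs=8):
        super().__init__()
        self.rnn = nn.GRUCell(d_in * d_hid, d_in * d_hid)
        self.mlp = nn.Sequential(nn.Linear(d_hid, d_hid), nn.ReLU(),
                                 nn.Linear(d_hid, num_coeffs))
        self.d_in, self.d_hid = d_in, d_hid

    def forward(self, adj, h, w_prev):
        # adj: (N, N) normalized adjacency, h: (N, d_in), w_prev: (1, d_in * d_hid)
        w_flat = self.rnn(w_prev, w_prev)          # evolve the GNN parameters over time
        w = w_flat.view(self.d_in, self.d_hid)
        h_next = torch.relu(adj @ h @ w)           # simple message passing, (N, d_hid)
        v_g = h_next.mean(dim=0)                   # pooled graph representation
        f_c = self.mlp(v_g)                        # predicted filter coefficients
        return f_c, h_next, w_flat
```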
|
| 54 |
+
|
| 55 |
+
After learning the GNN parameters, we need to learn the filter coefficients for the evolving graph. Having learned the filter coefficients $f_c^s$ at a given timestep, we can obtain the wavelet operator $g(L)$ at scale $s=1$ using equation [\[eq_chebyshev_approx\]](#eq_chebyshev_approx){reference-type="ref" reference="eq_chebyshev_approx"}. For operators at a given scale $s > 0$, we could learn different parameterizations of the filter function at every scale. Note that approximating the functions at different scales in this manner would incur a storage cost of $\mathcal{O}(JN)$ for the filter coefficients. Along with this, the storage and computation complexity of dynamically computing the filter coefficients from the GNNs would increase $J$ times. Thus we propose a "rescaling trick": to obtain the operator at any scale $s > 0$, we perform the appropriate change of variables in equation [\[eq_chebyshev_approx\]](#eq_chebyshev_approx){reference-type="ref" reference="eq_chebyshev_approx"} to get $g(sL)$, keeping the coefficients $c_{j,k}$ fixed. That is, for a scale $s$, instead of mapping $\lambda \xrightarrow[]{} g(\lambda)$, we map $\lambda \xrightarrow[]{} g(s \lambda)$. This helps maintain parameter efficiency (by a factor of $J$) as the GNN weights (and also the filter coefficients) are tied across all the scales. Note that the exact filter learned would vary since the maximum frequency is the same; hence, a bandpass at scale "one" may get converted to a highpass at scale "two". Once we approximate $g(\lambda)$, we use it to learn the wavelet coefficients (the output of the spectral module) $W_f(s,n)$ as described in equation [\[wavelet_coeff_prelim\]](#wavelet_coeff_prelim){reference-type="ref" reference="wavelet_coeff_prelim"}.
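As an illustration of how the learned coefficients and the rescaling trick can be applied without an eigendecomposition, the sketch below evaluates $g(sL)f$ via the Chebyshev recurrence; the interface (dense laplacian, fixed `lam_max`, at least two coefficients) is a simplifying assumption rather than the paper's implementation.

```python
import numpy as np

def chebyshev_wavelet_apply(L, f, coeffs, scale=1.0, lam_max=2.0):
    """Sketch: evaluate g(sL) f with the truncated Chebyshev expansion and the
    rescaling trick (same coefficients, laplacian argument rescaled by s).
    L is a dense N x N laplacian, f an (N, d) signal, coeffs the learned c_k."""
    a = lam_max / 2.0
    L_hat = (scale * L - a * np.eye(L.shape[0])) / a     # shift spectrum towards [-1, 1]
    t_prev, t_cur = f, L_hat @ f                         # T_0 f and T_1 f
    out = coeffs[0] * t_prev + coeffs[1] * t_cur
    for c_k in coeffs[2:]:
        t_prev, t_cur = t_cur, 2.0 * (L_hat @ t_cur) - t_prev   # Chebyshev recurrence
        out = out + c_k * t_cur
    return out
```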
|
| 56 |
+
|
| 57 |
+
Now, we give an approximation bound between the learned and desired filter function under the given framework for dynamic graphs with $N$ nodes and varying edges and signals. Please see appendix for all proofs.
|
| 58 |
+
|
| 59 |
+
::: {#lemma_spectral1 .lemma}
|
| 60 |
+
**Lemma 1**. *Consider $G^t(\lambda)$ to be the filter function at time $t$. Assume the Markov property to hold in the dynamic setting where the desired filter function($G^{t+1}(\Lambda)$) at time $t+1$ depends on the past state at time $t$($G^t(\Lambda)$). Consider this mapping between the past state and the current states to be captured by an arbitrary functional $f$ such that $G^{t+1}(\lambda) = f(G^t(\lambda_1), G^t(\lambda_2), \dots G^t(\lambda_N), \lambda)$ and we assume $f$ to be $L$ lipschitz continuous. Further, let $C_t=U_t G^t(\lambda) U_t^T \in R^{N \times N}$ represents the convolution support of the desired spectral filter and $C_t^a$ be the learnt convolution support at time $t$. Then, we have:*
|
| 61 |
+
|
| 62 |
+
*(i) $\left\lVert C_{t+1}^a - C_{t+1}\right\rVert_F \leq LN^2 \sqrt{\left\lVert C_{t}^a - C_{t} \right\rVert_F^2 + \epsilon_{ca}^2} + \epsilon_{fa}$*
|
| 63 |
+
|
| 64 |
+
*(ii) $\left\lVert C_{t+1}^a - C_{t}^a \right\rVert_F \leq \left\lVert C_{t+1} - C_{t} \right\rVert_F + 2 \sqrt{N} \epsilon_{ca}$*
|
| 65 |
+
|
| 66 |
+
*where $\epsilon_{ca}$ and $\epsilon_{fa}$ are the filter polynomial(Chebyshev) and function approximation errors and depends on the order of the polynomial, number of training samples, model size etc.*
|
| 67 |
+
:::
|
| 68 |
+
|
| 69 |
+
The above result gives us a relation between the errors at times $t$ and $t+1$ and has a factor of $N^2L$. Thus it requires the filter function to be smooth ($L < \frac{1}{N^2}$) for convergence, because under the given Markov assumptions the past errors could accumulate in future timesteps. This is precisely why we need a gated model like a GRU/LSTM in the Dynamic Parameter Generation module, which can decouple the approximation of the filter function at a given timestep from the error in previous timesteps.
|
| 70 |
+
|
| 71 |
+
@yang2022new concluded that high-frequency components of the signal on the graph get strengthened with a high correlation with each other, and the smooth signals become weak (unsmooth spectrum) i.e., the cosine similarity between the transformed signal and the low eigenvector reduces with the layers. In our setting, we illustrate that the factor by which the signal corresponding to the low-frequency component gets weakened is directly proportional to the magnitude of the frequency response at that frequency.
|
| 72 |
+
|
| 73 |
+
::: lemma
|
| 74 |
+
**Lemma 2**. *Let $G(\lambda)$ be the frequency response at frequency $\lambda$. Let $\lambda_1 \geq \lambda_2 \geq \dots \geq \lambda_n$ be the eigenvalues in descending order and $p_1, p_2, \dots p_n$ be the corresponding eigenvectors of the laplacian of the graph. Define $\lambda_{max}$ to be the eigenvalue at which $G(\lambda)$ is maximum. Let $C^{l}$ represent the convolution support($UG(\Lambda)U^T$) of the spectral filter at layer $l$. Then the factor by which the cosine similarity between consecutive layers dampens is $\underset{l \xrightarrow[]{} \infty}{lim} \frac{| cos(\left<C^{l+1}h, p_n\right>) |}{| cos(\left<C^{l}h, p_n\right> |} = \frac{G(\lambda_{n})}{G(\lambda_{max})}$.*
|
| 75 |
+
:::
|
| 76 |
+
|
| 77 |
+
If $\lambda_{max}$ belongs to one of the higher regions of the spectrum and $G(\lambda_n)$ is lower, then as the layers increase, the signals will lose the low-frequency information. Since the spectral module aims to capture the high-frequency components that may lead to an unsmooth spectrum, to resolve this, we explicitly strengthen the low-frequency components by using local neighborhood aggregation. For this, we inherit the message passing GNNs, the parameters of which are generated using the RNN module inspired by [@evolvegcn]. The pseudo code below outlines the process to evolve the spatial module ($ESpatial$).
|
| 78 |
+
|
| 79 |
+
::: algorithmic
|
| 80 |
+
$W_t^{(l)} = RNN(W_{t-1}^{(l)})$ $H_t^{(l+1)} = GNN(A_t, H_t^{(l)}, W_t^{(l)})$
|
| 81 |
+
:::
|
| 82 |
+
|
| 83 |
+
**Homogeneous Representation Module (HRM)**: aims to achieve homogeneous representations from the spatial and spectral properties along with time features, which is essential for usage in downstream tasks. A straightforward way is to concatenate the two representations. However, due to the dynamic nature of the graph that evolves with time, we propose a learnable module that provides a notion of distance between the representations and helps induce position/structure information. Due to graph dynamism, it is important for the features to satisfy the shift invariance property:
|
| 84 |
+
|
| 85 |
+
::: definition
|
| 86 |
+
**Definition 1** (Shift Invariance [@li2021learnable]). *Any two vectors $v_i = f_1(v_i^{'})$ and $v_j = f_1(v_j^{'})$ satisfy the shift invariance property if the inner product $\left< v_i, v_j \right>$ is a function of $v_i^{'}-v_j^{'}$ i.e. $\left< v_i, v_j \right> = f_2(v_i^{'}-v_j^{'})$.*
|
| 87 |
+
:::
|
| 88 |
+
|
| 89 |
+
$f_1$ is an arbitrary function and $f_2$ is a linear transformation of $f_1$. The above property ensures that the relative distance between two nodes is maintained in the inner product space even if their absolute positions change (for example, with the addition of new nodes to the graph over time). Inspired by Fourier features [@rahimi2007random; @rahimi2008weighted], for a node $i$, the spectral ($v_{gi} \in R^{d_g}$) and spatial ($v_{li} \in R^{d_l}$) embeddings are concatenated ($\|$) along with the timestamp ($t \in R^{d_t}$) information if available. Then, the result is passed to an MLP. $$v_{gli} = MLP(v_{gi} \| v_{li} \| t)$$ In order to obtain Fourier features ($v_{ffi}$) from the above intermediate representation, we take the element-wise sine and cosine and concatenate the two as: $$\begin{align*}
|
| 90 |
+
v_{ffi} = sin(v_{gli}) \| cos(v_{gli})
|
| 91 |
+
\end{align*}$$
|
| 92 |
+
|
| 93 |
+
::: {#si_property .property}
|
| 94 |
+
**Property 1**. *The vector $v$ obtained by concatenation of the element wise sine and cosine of another vector $v^{'}$ i.e. $v = (sin(v^{'}) \| cos(v^{'}))$, satisfies the shift invariance property.*
|
| 95 |
+
:::
|
| 96 |
+
|
| 97 |
+
The property [1](#si_property){reference-type="ref" reference="si_property"} can be readily noted by observing that $cos(a-b) = cos(a)cos(b)+sin(a)sin(b)$. Taking the inner product ($\left<.\right>$) of the above features for two nodes ($i,j$) gives $$\begin{equation*}
|
| 98 |
+
\small
|
| 99 |
+
\left< v_{ffi}, v_{ffj} \right> = \sum cos(W_r(v_{gli} - v_{glj})) = ff_{W_r}(v_{gli} - v_{glj})
|
| 100 |
+
\end{equation*}$$ Property 1 is beneficial if these representations are used in an attention-based model such as [@vaswani_2017_attention; @velivckovic2018graph], as we get a notion of closeness (similarity) in the embedding space. The final *homogeneous representation* for node $i$ ($v_{hri}$) is obtained as: $$\begin{align*}
|
| 101 |
+
v_{hri} = W_{hr2} \ \ \sigma (W_{hr1}(sin(v_{gli}) \| cos(v_{gli})))
|
| 102 |
+
\end{align*}$$ where $W_{hr2} \in R^{d \times d}, W_{hr1} \in R^{d \times (d_g+d_l+d_t)}$ are learnable weights and $\sigma$ is the activation function.
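A small sketch of the HRM computation for a single node is shown below; the weights are random placeholders standing in for learned parameters, and the hidden sizes are arbitrary assumptions.

```python
import numpy as np

def homogeneous_representation(v_g, v_l, t_feat, rng=np.random.default_rng(0)):
    """Sketch of the HRM: concatenate spectral, spatial and time features, project
    with a small MLP, take element-wise sin/cos to obtain shift-invariant Fourier
    features, and project again to get the final node representation."""
    x = np.concatenate([v_g, v_l, t_feat])                 # (d_g + d_l + d_t,)
    w_in = rng.normal(size=(x.size, x.size))
    v_gl = np.maximum(w_in @ x, 0.0)                       # MLP with ReLU
    v_ff = np.concatenate([np.sin(v_gl), np.cos(v_gl)])    # shift-invariant features
    w1 = rng.normal(size=(x.size, v_ff.size))
    w2 = rng.normal(size=(x.size, x.size))
    return w2 @ np.maximum(w1 @ v_ff, 0.0)                 # final homogeneous representation
```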
|
| 103 |
+
|
| 104 |
+
**Aggregation Module (AM)**: Once we achieve homogeneous representations for the features, we can use these in the downstream task by applying an MLP layer. However, we also perform another level of aggregation to learn effective representations. While, in principle, we could use any of the existing message passing frameworks for this aggregation, we adopt a sparse variant of the attention mechanism inspired by [@vaswani_2017_attention] for computational benefits. Specifically, consider $X \in R^{N \times d_f}$ to be the node features learned from the spatial and spectral modules. For the $l$-th layer, we define $W^{l}_{Q}, W^{l}_{K} \in R^{d_{out} \times d_f}$ to be the learnable weight matrices for the query and key of the self-attention, respectively. We apply self-attention on the transformed features followed by a softmax to get the aggregation weights $w_{ij}^l = softmax(\sum_{d_k} \hat{w_{ij}^{l}})$ between nodes $i,j$, where $\hat{w_{ij}^{l}} = \frac{W^{l}_{Q} X[i]^T \odot W^{l}_{K} X[j]^T}{d_{out}}$ if nodes $i$ and $j$ are connected in the graph and 0 otherwise.
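A simplified dense-mask version of this aggregation step is sketched below (a practical implementation would use sparse edge lists); it assumes every node has at least one neighbor, e.g., via self-loops.

```python
import numpy as np

def sparse_attention_aggregate(x, adj, w_q, w_k):
    """Sketch of the aggregation module: attention logits are computed only for
    connected node pairs, normalized with a masked softmax, and used to aggregate
    neighbor features. x: (N, d_f), adj: (N, N) binary, w_q/w_k: (d_out, d_f)."""
    q, k = x @ w_q.T, x @ w_k.T                      # (N, d_out)
    logits = (q[:, None, :] * k[None, :, :]).sum(-1) / q.shape[1]
    logits = np.where(adj > 0, logits, -np.inf)      # keep only existing edges
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    weights = np.exp(logits)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ x                               # aggregated node features
```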
|
| 105 |
+
|
| 106 |
+
The complexity of the spectral module is $\mathcal{O}(|E| + N\sum_{j=0}^{J} M_j)$ (cf. appendix), where $M_j$ is the order of the polynomial of the $j$-th filter head. The spatial modules can compute the features with $\mathcal{O}(|E| + N)$ complexity. The integration module further has two components: HRM and AM. The HRM module has a computational complexity of $\mathcal{O}(N)$, whereas for the AM it depends on the underlying aggregator; in our choices, it is $\mathcal{O}(|E| + N)$. Thus the overall computational complexity comes to $\mathcal{O}(3|E| + N(3+\sum_{j=0}^{J} M_j) ) = \mathcal{O}(|E| + N)$, which for bounded-degree graphs further reduces to $\mathcal{O}(N)$ for a given snapshot at time $t$.
|
2301.04791/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2022-12-29T16:04:22.527Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36" version="20.6.0" etag="O0b_e78l_Zt9L7GIa5wn" type="device"><diagram id="iX_VRNnVG9yaLFmzyHjV" name="Reconstruction">7V1dc9M4FP01mYEHOpYdf+SRtJSdHcow2x0WeGGcWEk8OFbWdmnh168U2/m4EhAg0VXu9qWNZVt2jk6u7rnSlQbB5fLhZZWuFjci48XA97KHQXA18H0WhEz+UyVf2pJ4FLUF8yrPuou2Bbf5V94Vel3pXZ7xeu/CRoiiyVf7hVNRlnza7JWlVSXu9y+biWL/qat03j3R2xbcTtOCa5f9k2fNoi0NAm/n8j94Pl90jx5GSXfLMu2v7grqRZqJ+52HBS8GwWUlRNN+Wj5c8kKh1wPTPuL6G2c3b1bxsjnkhsmfr26vy1cvX/vX87fN/Kb+MLp5xoJhW8/ntLjrvnP3us2XHgT55iv1cfkwV618Uefzsr5oqrSsV6Jq0iYX5UWaV6siLfnHYBCMF82ykDcw+XEl8rLh1YvP8jXrruwzr5pcgvwqnfDijahzVYM8NxFNI5Y7Fzwv5KPkiUasZGnaHU25qlEW1J94M1Uoe/IgS+sFz7ondF9KVsMfvokX27SC5C8XS95UX+Ql3Q1h2LVxR92wp8j9lggs7soWOxzYkDftyDff1L1tHvmha6Gfaq3wx60l65G/Dnkwvl/kDb9dpVN15l423X7LpPWq/cnM8gcF3HiWF8WlKES1riiYJVM+nSqYm0p84jtnJkk4DL3jwBwFAObYALMB5ROCHJEHORqigxyTBzlk6CAn9EBmgMkeOsgjeiB7AGR0Jvf2ihLI0FyM0EFm9EAeOsdknxzIG+b2IJs8ZbsgB/RATgDIATrIB0jHcwM5BiCj+8lDeoovjADIITrI9BQfjF5EETrI9BSfBnKCDrJjii/LK1lFG92rxZ0C5SR2Gh94eipQ8zjQI3P9z40QyJrURrfTIT0VqAkUdI8jpKcCtcgcugoM6alALZ6Bz2R6KjCCAgXfJtNTgRG0yeguXEhPBUZQoKDHM0J6KnDT0TkTYw4dU4HHABkqPvyOj57ii6EYQQe573kpgQz9ZHTvIqKn+LTBVXwmE1R8QIyE6C5cRFDx+c6BTE/xafFkfJtMUPFBm4w+gh0RVHzQu0BXfNH/QPHhexcEFR+cs4weT47oKT6NyehjfP0LUAIZMhndhYsJKj44MoLPZIKKD5oL9DnLMUHFB8xFjJ7iENNTfJq5wGcyQcUHhp82oU88kAkqvsg5c0FQ8QFzEaOLkZig4oNMxgeZvuLDB7lXQ5RA9pwDmaDigyCjd3wJQcXHXHPhEnqKT5sSgB4gSggqPjgXDt9cEFR8cOosvrkgqPjgaDW+d0FQ8TmXb50QVHxwEjj6aHVCUPFBkNEnt/QvRAlk55IlRwQVH3Th0Du+kUmMREWjEBLyS+2iHf17J/oTz+r1UpfP5QUsWj2s4enPy7JZ2w7bsr/zJa/lidf8Xv79SyzTcveGaK7+vyinIuNV/wLyG7Xv0J7VGr9f1rGRrc2/CvVFxyteySetV1fclr/ZFv6IJ2t6dMt4quNKtMtDysORZ6DNjEdm2mTxaOIdiTYBkFfGlZZ8A2+i0/HGpK+weHPFXebNMxNxJizLZp6JOMyLgxE/DnGGcIKzA8QxacafJE6iEadvacnK4bv236FskOg2oK+Ay7TC1VyXeZap28cVl2+VTtZVqcZcrxW7hiwcD8IrVdddI+otLfZbuxQlB9Toi+QbX6fLvFANZ/oNHIUePgxAGiSCaYHY4enYYRK7P8mO0XfZMfs4CC9Xi/yRJL/Y+RgdQ5PTckKWmNT6UW3IXLFkni6X6SNPDu1r4CRBE0+GdnliCjgclSfv5b27XPGe9PblybunTx+5c2j6Cgt/2BH5lrlzQBzlF5eB32nhthn6lfnXJWCt+nYTAL/9L8+vNyLwLiL1UaE3VM1diu0y8ge0GBQxo1m81sSaLxpEwSjIjtSPHOBsJDabeOP9/I55GH7XPHyQ9+4YhEd7cChX4n17YByFMemW4HRkOSAada72YFesqtksSBZiGDtnIfzfthASgO9ZCGkb1EYrkhODeHw1iK+eSPl6+f7RVBxsKuB0MJPrYFWemDjzjQiU2xvSHKF1GAh4O7AHjc3h9CzlycwYHo6mCZ/MjhTGcW4HGpuTbyxhDFcxwB7otZrLicRj7NExq6mcljCG0TzsSWRWMzktYeza7jNWEzmRbAV2+pvVPE4cjPF5TM+vYK5tPWM1i9MSxq7NhLSaxGkJY9c2nrGaw2kJY9f2nbGaXWgJY9e2nbGakoWEMXa2kNWMrAMwPtGmM5r9wI4TWU3SQuoH0e0HPX9O09fY/aDVFC0kXYLtz1nN0MKJKaNrP6sJWpYwdm2/Gav5WZYwdk2XWE3PsoMxnD2F7rtZzc6yxGOoS7DjRCN62k+LKaNj7Jj2OwbGrsXijMli542xZo/R+zx6Ok/LPMHW0sa8k/PGGPrH6PNajDkbZ44xnDuEzmOCGgTyGBtjZswuOG+QtYFU7E6PHTIr/8xA1qKb2B4yM86CP3OQoWuB3e0xj6AOcW6XGY+eENEENX7HR0+JaEzGjr4xj96Qk8Zk7GkBzKOv9xxgMn3Bh783h0dQ8Tm3y0xfMyWQndtlhhFUfHCXGXwmE1R8cG8O7GQGxggqPrjLDD7IBBWfc7vMMPqKzwEmExQjzu0ywwiKEQgyPpMJihHmmgvX/5QIgazNB0APEPWYEgJZmwSHz2SCfrJzu8z4BP1k53aZ8Qn6ya7lVjOf4MiIa8k4zCcoRpyb3OITFCPObeXjExQjMGh/uo5PHlZCLWu5OfdSLWh4IzKurvgP</diagram></mxfile>
|
2301.04791/main_diagram/main_diagram.pdf
ADDED
|
Binary file (41.9 kB). View file
|
|
|
2301.04791/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,214 @@
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Wasserstein distance [@villani2008optimal; @peyre2019computational] has been widely recognized in the machine learning community as an effective tool. For example, Wasserstein distance is used to explore clusters inside data [@ho2017multilevel], to transfer knowledge between different domains [@courty2016optimal; @damodaran2018deepjdot], to learn generative models [@arjovsky2017wasserstein; @tolstikhin2018wasserstein], to extract features from graphs [@vincent2022template], to compare datasets [@alvarez2020geometric], and in many other applications. Despite being effective, Wasserstein distance is extremely expensive to compute. In particular, the computational complexity and memory complexity of Wasserstein distance in the discrete case are $\mathcal{O}(m^3 \log m)$ and $\mathcal{O}(m^2)$, respectively, where $m$ is the number of supports. The computational problem becomes more severe for applications that require computing the Wasserstein distance *multiple times* on different pairs of measures. Some examples can be named: deep generative modeling [@genevay2018learning], deep domain adaptation [@bhushan2018deepjdot], comparing datasets [@alvarez2020geometric], topic modeling [@huynh2020otlda], point-cloud reconstruction [@achlioptas2018learning], and so on.
|
| 4 |
+
|
| 5 |
+
By adding entropic regularization [@cuturi2013sinkhorn], an $\varepsilon$-approximation of Wasserstein distance can be obtained in $\mathcal{O}(m^2/\varepsilon^2)$. However, this approach cannot reduce the memory complexity of $\mathcal{O}(m^2)$ due to the storage of the cost matrix. A more efficient approach is based on the closed-form solution of Wasserstein distance in one dimension, which is known as the sliced Wasserstein distance [@bonneel2015sliced]. Sliced Wasserstein (SW) distance is defined as the expectation of the Wasserstein distance between random one-dimensional push-forward measures from the two original measures. Thanks to the closed-form solution, SW can be computed in $\mathcal{O}(m \log_2 m)$ while having a linear memory complexity $\mathcal{O}(m)$. Moreover, SW is also better than Wasserstein distance in high-dimensional statistical inference. Namely, the sample complexity (statistical estimation rate) of SW is $\mathcal{O}(n^{-1/2})$ compared to $\mathcal{O}(n^{-1/d})$ for Wasserstein distance, where $d$ is the number of dimensions and $n$ is the number of data samples. Due to these appealing properties, SW is utilized successfully in various applications, e.g., generative modeling [@deshpande2018generative; @nguyen2022revisiting; @nguyen2023hierarchical], domain adaptation [@lee2019sliced], Bayesian inference [@nadjahi2020approximate; @yi2021sliced], point-cloud representation learning [@Nguyen2021PointSetDistances; @naderializadeh2021pooling], and so on.
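For concreteness, a Monte Carlo sketch of SW between two empirical measures with the same number of supports is given below; the number of projections and the use of NumPy are illustrative choices, not tied to the paper's implementation.

```python
import numpy as np

def sliced_wasserstein(x, y, num_projections=128, p=2, rng=np.random.default_rng(0)):
    """Monte Carlo sketch of SW_p between empirical measures x, y of shape (m, d):
    project onto random directions on the unit sphere, then use the closed-form
    1D Wasserstein distance (sorting the projections)."""
    d = x.shape[1]
    theta = rng.normal(size=(num_projections, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)   # uniform on S^{d-1}
    x_proj = np.sort(x @ theta.T, axis=0)                   # (m, num_projections)
    y_proj = np.sort(y @ theta.T, axis=0)
    return (np.mean(np.abs(x_proj - y_proj) ** p)) ** (1.0 / p)
```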
|
| 6 |
+
|
| 7 |
+
The downside of SW is that it treats all projections the same due to the usage of a uniform distribution over projecting directions. This choice is inappropriate in practice since there exist projecting directions that cannot discriminate the two measures of interest [@kolouri2018sliced]. As a solution, max sliced Wasserstein distance (Max-SW) [@deshpande2019max] is introduced by searching for the best projecting direction that maximizes the projected Wasserstein distance. Max-SW needs a projected sub-gradient ascent algorithm to find the "max" slice. Therefore, in applications that need to evaluate Max-SW *multiple times* on *different pairs of measures*, the repeated optimization procedure is costly. For example, this paper focuses on point-cloud reconstruction applications where Max-SW needs to be computed between various pairs of empirical measures over a point-cloud and its reconstructed version.
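A rough sketch of Max-SW with projected gradient ascent is shown below; for simplicity the gradient over the projecting direction is approximated with finite differences rather than automatic differentiation, which is an illustrative shortcut rather than the optimization used in the cited works.

```python
import numpy as np

def max_sliced_wasserstein(x, y, iters=50, lr=0.1, rng=np.random.default_rng(0)):
    """Sketch of Max-SW: ascend on the squared 1D Wasserstein-2 distance of the
    projections over directions theta, projecting back onto the unit sphere."""
    d = x.shape[1]
    theta = rng.normal(size=d)
    theta /= np.linalg.norm(theta)

    def w1d_sq(t):                      # squared 1D Wasserstein-2 of the projections
        return np.mean((np.sort(x @ t) - np.sort(y @ t)) ** 2)

    eps = 1e-4
    for _ in range(iters):
        grad = np.zeros(d)
        for i in range(d):              # finite-difference stand-in for autodiff
            e = np.zeros(d); e[i] = eps
            grad[i] = (w1d_sq(theta + e) - w1d_sq(theta - e)) / (2 * eps)
        theta += lr * grad              # ascent step
        theta /= np.linalg.norm(theta)  # project back onto the sphere
    return np.sqrt(w1d_sq(theta)), theta
```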
|
| 8 |
+
|
| 9 |
+
To address the problem, amortized projection optimization is proposed in [@nguyen2022amortized]. As in other amortized optimization [@ruishu2017; @amos2022tutorial] (learning to learn), an amortized model is estimated to predict the best projecting direction given the two input empirical measures. The authors in [@nguyen2022amortized] propose three types of amortized models: a linear model, a generalized linear model, and a non-linear model. The linear model assumes that the "max" projecting direction is a linear combination of the supports of the two measures. The generalized linear model injects the linearity through a link function on the supports of the two measures, while the non-linear model uses multilayer perceptrons for more expressiveness.
|
| 10 |
+
|
| 11 |
+
Despite performing well in practice, the previous work has not explored the full potential of amortized optimization in the sliced Wasserstein setting. There are two issues in the current amortized optimization framework. Firstly, the sub-optimality of amortized optimization leads to losing the metricity of the projected distance obtained from the predicted projecting direction. In particular, the metricity of Max-SW is only obtained at the global optimum. Therefore, using an amortized model with sub-optimal solutions cannot achieve metricity for all pairs of measures. Losing the metricity property could hurt the performance of downstream applications. Secondly, the current amortized models are not permutation invariant to the supports of the two input measures and are not symmetric. The permutation-invariance and symmetry properties are vital since the "max" projecting direction is unchanged when permuting the supports of the two input empirical measures or exchanging the two input empirical measures. Inducing permutation-invariance and symmetry in the amortized model could help to learn a better amortized model and reduce the amortization gap.
|
| 12 |
+
|
| 13 |
+
In this paper, we focus on overcoming the two issues of the current amortized projection optimization framework. For metricity preservation, we propose the *amortized distributional projection optimization* framework, which predicts the best distribution over projecting directions. In particular, we perform amortized optimization for the distributional sliced Wasserstein (DSW) distance [@nguyen2021distributional] with a von Mises Fisher (vMF) slicing distribution [@jupp1979maximum] instead of Max-SW. Thanks to the smoothness of the vMF, the metricity can be preserved even without a zero amortization gap. For the permutation-invariance and symmetry properties, we propose to use the self-attention mechanism [@vaswani2017attention] to design the amortized model. Moreover, we utilize efficient self-attention approaches whose computational complexity scales linearly in the number of supports, including efficient attention [@shen2021efficient] and linear attention [@wang2020linformer].
|
| 14 |
+
|
| 15 |
+
**Contribution.** In summary, our contribution is two-fold:
|
| 16 |
+
|
| 17 |
+
1\. First, we introduce the *amortized distributional projection optimization* framework, which predicts the best location parameter of the von Mises-Fisher (vMF) distribution in the distributional sliced Wasserstein (DSW) distance. Due to the smoothness of the vMF, the metricity is guaranteed for all pairs of measures. Moreover, we enhance the amortized models by inducing inductive biases, namely permutation invariance and symmetry. To improve the efficiency, we leverage two linear-complexity attention mechanisms, efficient attention [@shen2021efficient] and linear attention [@wang2020linformer], to parameterize the amortized model. Combining the above two improvements, we obtain the *self-attention amortized distributional projection optimization* framework.
|
| 18 |
+
|
| 19 |
+
2\. Second, we adapt the new framework to the point-cloud reconstruction problem. In particular, we want to learn an autoencoder that can reconstruct (encode and decode) all point-clouds through their latent representations. The main idea is to treat a point-cloud as an empirical measure and use sliced Wasserstein distances as the reconstruction losses. Here, amortized optimization serves as a fast way to yield projecting directions that are informative enough for sliced Wasserstein distance to discriminate all pairs of original and reconstructed point-clouds. Empirically, we show that the self-attention amortized distributional projection optimization framework provides better reconstructed point-clouds on the ModelNet40 dataset [@wu20153d] than the amortized projection optimization framework and widely used distances. Moreover, on downstream tasks, the new framework also leads to higher classification accuracy on ModelNet40 and generates ShapeNet chairs with better quality.
|
| 20 |
+
|
| 21 |
+
**Organization.** The remainder of the paper is organized as follows. In Section 2, we provide backgrounds for point-cloud reconstruction and popular distances. In Section 3, we define the new amortized distributional projection optimization framework for the point-cloud reconstruction problem. Section 4 benchmarks the proposed method by extensive experiments on point-cloud reconstruction, transfer learning, and point-cloud generation. Finally, proofs of key results and extra materials are in the supplementary.
|
| 22 |
+
|
| 23 |
+
**Notation.** For any $d \geq 2$, we denote by $\mathcal{U}(\mathbb{S}^{d-1})$ the uniform measure over the unit hyper-sphere $\mathbb{S}^{d-1}:=\{\theta \in \mathbb{R}^{d}\mid ||\theta||_2^2 =1\}$. For $p\geq 1$, $\mathcal{P}_p(\mathbb{R}^d)$ is the set of all probability measures on $\mathbb{R}^d$ with finite $p$-th moments. For any two sequences $a_{n}$ and $b_{n}$, the notation $a_{n} = \mathcal{O}(b_{n})$ means that $a_{n} \leq C b_{n}$ for all $n \geq 1$, where $C$ is some universal constant. We denote by $\theta \sharp \mu$ the push-forward measure of $\mu$ through the function $f:\mathbb{R}^{d} \to \mathbb{R}$ defined by $f(x) = \theta^\top x$.
|
| 24 |
+
|
| 25 |
+
# Method
|
| 26 |
+
|
| 27 |
+
We first review the point-cloud reconstruction framework in Section [2.1](#subsec:pointcloud_reconstruction){reference-type="ref" reference="subsec:pointcloud_reconstruction"}. After that, we discuss famous choices of metrics between two point-clouds in Section [2.2](#subsec:metrics){reference-type="ref" reference="subsec:metrics"}. Finally, we present an adapted definition of the amortized projection optimization framework in the point-cloud reconstruction setting in Section [2.3](#subsec:amortized_projection){reference-type="ref" reference="subsec:amortized_projection"}.
|
| 28 |
+
|
| 29 |
+
<figure id="fig:reconstruction" data-latex-placement="t">
|
| 30 |
+
<div class="center">
|
| 31 |
+
<table>
|
| 32 |
+
<tbody>
|
| 33 |
+
<tr>
|
| 34 |
+
<td style="text-align: center;"><embed src="images/Reconstruction.pdf" style="width:45.0%" /></td>
|
| 35 |
+
</tr>
|
| 36 |
+
</tbody>
|
| 37 |
+
</table>
|
| 38 |
+
</div>
|
| 39 |
+
<figcaption> <span>The reconstruction of a point-cloud <span class="math inline"><em>X</em></span> (a plane). </span> </figcaption>
|
| 40 |
+
</figure>
|
| 41 |
+
|
| 42 |
+
We denote a point-cloud of $m$ points $x_1,\ldots,x_m \in \mathbb{R}^d$ ($d \geq 1$) as $X=(x_1,\ldots,x_m) \in \mathbb{R}^{dm}$, the vector obtained by concatenating all points in the point-cloud. We denote the set of all possible point-clouds as $\mathcal{X} \subset \mathbb{R}^{dm}$.
|
| 43 |
+
|
| 44 |
+
**Permutation invariant metric space.** Given a permutation, i.e., a one-to-one mapping $\sigma: [m] \to [m]$, we have $\sigma(X) \in \mathcal{X}$ for all $X\in\mathcal{X}$, where $\sigma(X)=(x_{\sigma(1)},\ldots,x_{\sigma(m)})$. Moreover, we need a metric $\mathcal{D}:\mathcal{X}\times \mathcal{X} \to \mathbb{R}^+$ such that $\mathcal{D}(X,\sigma(X))=0$ for all $X\in\mathcal{X}$. Here, $\mathcal{D}$ is a metric, namely, it satisfies non-negativity, symmetry, the triangle inequality, and the identity property. The pair $(\mathcal{X},\mathcal{D})$ forms a point-cloud metric space.
|
| 45 |
+
|
| 46 |
+
**Learning representation via reconstruction.** The raw representation of point-clouds is hard to work with in applications due to the complicated metric space. Therefore, a popular approach is to map point-clouds to points in a different space, e.g., a Euclidean space, in which it is easier to apply machine learning algorithms. In more detail, we want to estimate a function $f_\phi:\mathcal{X} \to \mathcal{Z}$ ($\phi \in \Phi$), where $\mathcal{Z}$ is a set that belongs to another metric space. Then, we can apply machine learning algorithms on $\mathcal{Z}$ instead of $\mathcal{X}$. The most well-known and effective way to estimate the function $f_\phi$ is through a reconstruction loss. Namely, we estimate $f_\phi$ jointly with a function $g_\gamma:\mathcal{Z} \to \mathcal{X}$ ($\gamma \in \Gamma$), given a point-cloud dataset $p(X)$ (a distribution over $\mathcal{X}$), by minimizing the objective: $$\begin{align}
|
| 47 |
+
\label{eq:reconstruction}
|
| 48 |
+
\min_{\phi \in \Phi,\gamma \in \Gamma }\mathbb{E}_{X \sim p(X)} \mathcal{D}(X,g_\gamma (f_\phi(X))).
|
| 49 |
+
\end{align}$$ The loss $\mathbb{E}_{X \sim p(X)} \mathcal{D}(X,g_\gamma (f_\phi(X)))$ is known as the reconstruction loss. If the reconstruction loss is 0, we have $g_\gamma = f_\phi^{-1}$ p-almost surely. Therefore, we can move from $\mathcal{X}$ to $\mathcal{Z}$ and back from $\mathcal{Z}$ to $\mathcal{X}$ without losing information through the functions $f_\phi$ (referred to as the encoder) and $g_\gamma$ (referred to as the decoder). We show an illustration of the framework [@achlioptas2018learning] in Figure [1](#fig:reconstruction){reference-type="ref" reference="fig:reconstruction"}. After learning how to reconstruct well, other point-cloud tasks can be done using the autoencoder (the pair $(f_\phi,g_\gamma)$), e.g., shape interpolation, shape editing, shape analogy, shape completion, point-cloud classification, and point-cloud generation [@achlioptas2018learning].
|
| 50 |
+
|
| 51 |
+
We now review some famous choices of the metric $\mathcal{D}$ which are Chamfer distance [@barrow1977parametric], Wasserstein distance [@villani2008optimal], sliced Wasserstein (SW) distance [@bonneel2015sliced], and max sliced Wasserstein (Max-SW) [@deshpande2019max] distance.
|
| 52 |
+
|
| 53 |
+
**Chamfer distance.** For any two point-clouds $X$ and $Y$, the Chamfer distance is defined as follows: $\text{CD}(X, Y) =$ $$\begin{align}
|
| 54 |
+
\label{eq:chamfer}
|
| 55 |
+
\frac{1}{|X|} \sum \limits_{x \in X} \min \limits_{y \in Y} \| x - y\|_2^{2}
|
| 56 |
+
+ \frac{1}{|Y|} \sum \limits_{y \in Y} \min \limits_{x \in X} \| x - y\|_2^{2},
|
| 57 |
+
\end{align}$$ where $|X|$ denotes the number of points in $X$.
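To make the definition concrete, here is a minimal NumPy sketch (not part of the original method; the function and variable names are ours) that evaluates the Chamfer distance above between two point-clouds.

```python
import numpy as np

def chamfer_distance(X, Y):
    """Chamfer distance: mean squared distance from each point to its nearest
    neighbor in the other point-cloud, summed over both directions."""
    # Pairwise squared Euclidean distances, shape (|X|, |Y|).
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return sq_dists.min(axis=1).mean() + sq_dists.min(axis=0).mean()

# Toy usage with two random point-clouds in R^3.
X = np.random.randn(128, 3)
Y = np.random.randn(128, 3)
print(chamfer_distance(X, Y))
```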
|
| 58 |
+
|
| 59 |
+
**Wasserstein distance.** Given two probability measures $\mu \in \mathcal{P}_p(\mathbb{R}^d)$ and $\nu \in \mathcal{P}_p(\mathbb{R}^d)$, the Wasserstein distance between $\mu$ and $\nu$ is defined as follows: $$\begin{align}
|
| 60 |
+
\label{eq:W}
|
| 61 |
+
\text{W}_p(\mu,\nu) = \left(\inf_{\pi \in \Pi(\mu,\nu)} \int_{\mathbb{R}^d \times \mathbb{R}^d} \| x - y\|_p^{p} d \pi(x,y) \right)^{\frac{1}{p}}
|
| 62 |
+
\end{align}$$ where $\Pi (\mu,\nu)$ is the set of all couplings whose marginals are $\mu$ and $\nu$, respectively. Since the Wasserstein distance is originally defined on the space of probability measures, we need to convert a point-cloud $X=(x_1,\ldots,x_m) \in \mathcal{X}$ to the corresponding empirical probability measure $P_X =\frac{1}{m}\sum_{i=1}^m \delta_{x_i} \in \mathcal{P}(\mathbb{R}^d)$. Therefore, we can use $\mathcal{D}(X,Y) = \text{W}_p(P_X,P_Y)$ for $X,Y \in \mathcal{X}$.
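For equal-size point-clouds with uniform weights, the optimal coupling is a permutation, so $\text{W}_p(P_X,P_Y)$ can be computed exactly as an assignment problem. Below is a small illustrative sketch using SciPy; names are ours and the ground cost follows the $\|x-y\|_p^p$ form above.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def wasserstein_p(X, Y, p=2):
    """Exact W_p between the empirical measures of two point-clouds with the same
    number of points m; with uniform weights the optimal coupling is a permutation,
    so the problem reduces to a minimum-cost assignment."""
    cost = (np.abs(X[:, None, :] - Y[None, :, :]) ** p).sum(axis=-1)  # ||x_i - y_j||_p^p
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].mean() ** (1.0 / p)
```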
|
| 63 |
+
|
| 64 |
+
**Sliced Wasserstein distance.** As discussed, the Wasserstein distance is expensive to compute, with time complexity $\mathcal{O}(m^3 \log m)$ and memory complexity $\mathcal{O}(m^2)$. Therefore, an alternative choice is the sliced Wasserstein (SW) distance, which between two probability measures $\mu \in \mathcal{P}_p(\mathbb{R}^d)$ and $\nu\in \mathcal{P}_p(\mathbb{R}^d)$ is defined as: $$\begin{align}
|
| 65 |
+
\label{eq:SW}
|
| 66 |
+
\text{SW}_p(\mu,\nu) = \left(\mathbb{E}_{ \theta \sim \mathcal{U}(\mathbb{S}^{d-1})} \text{W}_p^p (\theta \sharp \mu,\theta \sharp \nu)\right)^{\frac{1}{p}},
|
| 68 |
+
\end{align}$$ The benefit of SW is that $\text{W}_p (\theta \sharp \mu,\theta \sharp \nu)$ has a closed-form solution, which is $$\text{W}_p (\theta \sharp \mu,\theta \sharp \nu)= \left(\int_0^1 |F_{\theta\sharp\mu}^{-1}(z) - F_{\theta \sharp \nu}^{-1}(z)|^{p} dz\right)^{\frac{1}{p}},$$ where $F^{-1}$ denotes the inverse CDF. The expectation is often approximated by Monte Carlo sampling, namely, it is replaced by the average over $\theta_1,\ldots,\theta_L$ drawn i.i.d. from $\mathcal{U}(\mathbb{S}^{d-1})$. The computational complexity and memory complexity of SW then become $\mathcal{O}(Lm\log_2 m)$ and $\mathcal{O}(Lm)$, respectively.
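As a concrete illustration of the Monte Carlo approximation and the closed-form one-dimensional Wasserstein distance (obtained by sorting the projected supports of equal-size empirical measures), here is a short NumPy sketch; the function name and defaults are ours.

```python
import numpy as np

def sliced_wasserstein(X, Y, L=100, p=2, seed=0):
    """Monte Carlo estimate of SW_p between the empirical measures of two
    equal-size point-clouds X, Y of shape (m, d)."""
    rng = np.random.default_rng(seed)
    # Draw L projecting directions uniformly from the unit sphere S^{d-1}.
    thetas = rng.normal(size=(L, X.shape[1]))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)
    # Project to shape (m, L) and sort along the support dimension: the
    # one-dimensional W_p^p between sorted equal-size supports is a simple mean.
    X_proj = np.sort(X @ thetas.T, axis=0)
    Y_proj = np.sort(Y @ thetas.T, axis=0)
    w_pp = (np.abs(X_proj - Y_proj) ** p).mean(axis=0)   # W_p^p per direction
    return w_pp.mean() ** (1.0 / p)
```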
|
| 69 |
+
|
| 70 |
+
**Max sliced Wasserstein distance.** It is well-known that SW contains many less discriminative projections due to the uniform sampling. Therefore, the max sliced Wasserstein distance was proposed to use only the most discriminative projecting direction. The max sliced Wasserstein (Max-SW) distance [@deshpande2019max] between $\mu \in \mathcal{P}_p(\mathbb{R}^d)$ and $\nu\in \mathcal{P}_p(\mathbb{R}^d)$ is defined as follows: $$\begin{align}
|
| 71 |
+
\label{eq:MaxSW}
|
| 72 |
+
\text{Max-SW}_p(\mu,\nu)=\max_{\theta \in \mathbb{S}^{d - 1}} W_p(\theta\sharp \mu,\theta \sharp \nu),
|
| 73 |
+
\end{align}$$ Max-SW is often computed by a projected sub-gradient ascent algorithm. When the projected sub-gradient ascent algorithm runs for $T \geq 1$ iterations, the computational complexity of Max-SW is $\mathcal{O}(T m\log_2 m )$ and the memory complexity is $\mathcal{O}(m)$. Both SW and Max-SW have been applied successfully to point-cloud reconstruction [@Nguyen2021PointSetDistances].
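The projected (sub-)gradient ascent computation of Max-SW can be sketched as follows: a minimal PyTorch version for two equal-size point-clouds, where the step size and iteration count are illustrative and only a local optimum is guaranteed.

```python
import torch

def max_sw(X, Y, T=50, lr=1e-2, p=2):
    """Max-SW via projected gradient ascent over theta on the unit sphere."""
    theta = torch.randn(X.shape[1])
    theta = (theta / theta.norm()).requires_grad_(True)
    for _ in range(T):
        Xp, _ = torch.sort(X @ theta)
        Yp, _ = torch.sort(Y @ theta)
        w = (Xp - Yp).abs().pow(p).mean().pow(1.0 / p)    # one-dimensional W_p
        grad, = torch.autograd.grad(w, theta)
        with torch.no_grad():
            theta += lr * grad                            # ascent step
            theta /= theta.norm()                         # projection onto S^{d-1}
    with torch.no_grad():
        Xp, _ = torch.sort(X @ theta)
        Yp, _ = torch.sort(Y @ theta)
        return (Xp - Yp).abs().pow(p).mean().pow(1.0 / p)
```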
|
| 74 |
+
|
| 75 |
+
**Amortized Optimization.** We start with the definition of amortized optimization.
|
| 76 |
+
|
| 77 |
+
::: {#def:amodel .definition}
|
| 78 |
+
**Definition 1**. For each context variable $x$ in the context space $\mathcal{X}$, $\theta^\star(x)$ is the solution of the optimization problem $\theta^\star(x) = \argmin_{\theta \in \Theta} \mathcal{L}(\theta,x)$, where $\Theta$ is the solution space. A parametric function $f_\psi: \mathcal{X} \to \Theta$, where $\psi \in \Psi$, is called an amortized model if $$\begin{align}
|
| 79 |
+
\label{eq:famortized}
|
| 80 |
+
f_\psi (x) \approx \theta^\star (x), \quad \forall x \in \mathcal{X}.
|
| 81 |
+
\end{align}$$ The amortized model is trained by the amortized optimization objective which is defined as: $$\begin{align}
|
| 82 |
+
\label{eq:amortizedobjective}
|
| 83 |
+
\min_{\psi \in \Psi} \mathbb{E}_{x \sim p(x)} \mathcal{L}(f_\psi(x),x),
|
| 84 |
+
\end{align}$$ where $p(x)$ is a probability measure on $\mathcal{X}$ which measures the "importance" of optimization problems.
|
| 85 |
+
:::
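For intuition, here is a tiny, self-contained example of Definition 1 (purely illustrative, not from the paper): each context $x$ induces the problem $\mathcal{L}(\theta, x) = \|\theta - \sin(x)\|_2^2$ with minimizer $\theta^\star(x) = \sin(x)$, and a small MLP $f_\psi$ is trained on the amortized objective instead of solving each problem separately.

```python
import torch

# Amortized model f_psi: context x in R^4 -> predicted solution theta in R^4.
f_psi = torch.nn.Sequential(torch.nn.Linear(4, 64), torch.nn.ReLU(), torch.nn.Linear(64, 4))
opt = torch.optim.Adam(f_psi.parameters(), lr=1e-3)
for _ in range(2000):
    x = torch.randn(256, 4)                                    # contexts x ~ p(x)
    loss = ((f_psi(x) - torch.sin(x)) ** 2).sum(dim=1).mean()  # E_x[ L(f_psi(x), x) ]
    opt.zero_grad(); loss.backward(); opt.step()
```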
|
| 86 |
+
|
| 87 |
+
**Amortized Projection Optimization.** We now revisit the point-cloud reconstruction objective with $\mathcal{D}(X,Y)=\text{Max-SW}_p(P_X,P_Y)$: $$\begin{align}
|
| 88 |
+
\label{eq:reconstruction_max}
|
| 89 |
+
\min_{\phi \in \Phi,\gamma \in \Gamma }\mathbb{E} \left[\max_{\theta \in \mathbb{S}^{d-1}}\text{W}_p(\theta \sharp P_X,\theta \sharp P_{g_\gamma (f_\phi(X))})\right],
|
| 90 |
+
\end{align}$$ where the expectation is with respect to $X \sim p(X)$. For each point-cloud $X \in \mathcal{X}$, we need to compute a Max-SW distance with an iterative optimization procedure. Therefore, it is computationally expensive.
|
| 91 |
+
|
| 92 |
+
Authors in [@nguyen2022amortized] propose to use amortized optimization [@ruishu2017; @amos2022tutorial] to speed up this procedure. Instead of solving all optimization problems independently, an amortized model is trained to predict the optimal solutions of all problems. In greater detail, given a parametric function $a_\psi: \mathcal{X}\times \mathcal{X} \to \mathbb{S}^{d-1}$ ($\psi \in \Psi$), the amortized objective is: $$\begin{align}
|
| 93 |
+
\label{eq:amortized_reconstruction}
|
| 94 |
+
\min_{\phi \in \Phi,\gamma \in \Gamma}\max_{ \psi \in \Psi}\mathbb{E}\text{W}_p(\theta_{\psi,\gamma,\phi}\sharp P_X,\theta_{\psi,\gamma,\phi} \sharp P_{g_\gamma (f_\phi(X))}),
|
| 95 |
+
\end{align}$$ where the expectation is with respect to $X \sim p(X)$, and $\theta_{\psi,\gamma,\phi} = a_\psi(X,g_\gamma (f_\phi(X)))$. The above optimization is solved by an alternating stochastic (projected)-gradient descent-ascent algorithm. Therefore, it is faster to compute in each update iteration of $\phi$ and $\gamma$. It is worth noting that the previous work [@nguyen2022amortized] considers the generative modeling application, which is unstable and hard to understand. Here, we adapt the framework to the point-cloud reconstruction application, where the behavior of amortized optimization is easier to explore. We refer the reader to Algorithms [\[alg:trainingMaxSW\]](#alg:trainingMaxSW){reference-type="ref" reference="alg:trainingMaxSW"}-[\[alg:trainingamortizedMaxSW\]](#alg:trainingamortizedMaxSW){reference-type="ref" reference="alg:trainingamortizedMaxSW"} in Appendix [6.3](#subsec:training_algorithms){reference-type="ref" reference="subsec:training_algorithms"} for algorithms on training an autoencoder with Max-SW and amortized projection optimization.
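A schematic PyTorch sketch of one alternating update of the amortized objective above is given below. For clarity, it processes a single point-cloud $X$ of shape $(m, d)$; `f_phi`, `g_gamma`, and `a_psi` are placeholders for the encoder, decoder, and amortized model, and `a_psi` is assumed to output a unit vector, as in the amortized models defined below.

```python
import torch

def projected_w(X, Y, theta, p=2):
    # Closed-form one-dimensional W_p between the projected empirical measures.
    Xp, _ = torch.sort(X @ theta)
    Yp, _ = torch.sort(Y @ theta)
    return (Xp - Yp).abs().pow(p).mean().pow(1.0 / p)

def train_step(X, f_phi, g_gamma, a_psi, opt_ae, opt_amortized, p=2):
    """One alternating step: gradient ascent on psi, then descent on (phi, gamma)."""
    # 1) Ascent step for the amortized model a_psi (autoencoder held fixed).
    Y = g_gamma(f_phi(X)).detach()
    theta = a_psi(X, Y)                        # predicted "max" projecting direction
    opt_amortized.zero_grad()
    (-projected_w(X, Y, theta, p)).backward()  # maximize <=> minimize the negative
    opt_amortized.step()
    # 2) Descent step for the autoencoder (theta treated as fixed here).
    Y = g_gamma(f_phi(X))
    theta = a_psi(X, Y).detach()
    opt_ae.zero_grad()
    projected_w(X, Y, theta, p).backward()
    opt_ae.step()
```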
|
| 96 |
+
|
| 97 |
+
**Amortized models.** Authors in [@nguyen2022amortized] propose three types of amortized models that are based on the literature on linear models [@christensen2002plane]. In particular, the linear amortized model is defined as:
|
| 98 |
+
|
| 99 |
+
::: {#def:linear_model .definition}
|
| 100 |
+
**Definition 2**. Given $X,Y \in \sR^{dm}$, the *linear amortized model* is defined as: $$a_\psi (X,Y) := \frac{w_0+X'w_1 + Y'w_2}{||w_0+X'w_1 + Y'w_2 ||_2},$$ where $X'$ and $Y'$ are matrices of size $d\times m$ that are reshaped from the concatenated vectors $X$ and $Y$ of size $dm$, $\psi =(w_0,w_1,w_2)$ with $w_1,w_2 \in \sR^{ m}$, and $w_0 \in \sR^d$ .
|
| 101 |
+
:::
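Definition 2 can be written down directly; below is a minimal PyTorch sketch, assuming the points are concatenated one after another in $X$ and $Y$.

```python
import torch

class LinearAmortizedModel(torch.nn.Module):
    """Linear amortized model of Definition 2:
    a_psi(X, Y) = (w0 + X' w1 + Y' w2) / ||w0 + X' w1 + Y' w2||_2."""
    def __init__(self, d, m):
        super().__init__()
        self.d, self.m = d, m
        self.w0 = torch.nn.Parameter(torch.randn(d))
        self.w1 = torch.nn.Parameter(torch.randn(m))
        self.w2 = torch.nn.Parameter(torch.randn(m))

    def forward(self, X, Y):
        # Reshape the concatenated dm-vectors into d x m matrices X', Y'.
        Xp = X.reshape(self.m, self.d).t()
        Yp = Y.reshape(self.m, self.d).t()
        v = self.w0 + Xp @ self.w1 + Yp @ self.w2
        return v / v.norm(p=2)
```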
|
| 102 |
+
|
| 103 |
+
Similarly, the generalized linear amortized model and the non-linear amortized model are defined by injecting non-linearity into the linear model. We review the definitions of the generalized linear amortized model and non-linear amortized model in Definitions [4](#def:glinear_model){reference-type="ref" reference="def:glinear_model"}-[5](#def:nonlinear_model){reference-type="ref" reference="def:nonlinear_model"} in Appendix [6.1](#subsec:additional_amortized_models){reference-type="ref" reference="subsec:additional_amortized_models"}.
|
| 104 |
+
|
| 105 |
+
**Sub-optimality.** Despite being faster, amortized optimization often cannot recover the global optimum of optimization problems. Namely, we denote $$\theta^\star (X) = \text{argmax}_{\theta \in \mathbb{S}^{d-1}} \text{W}_p(\theta \sharp P_X,\theta \sharp P_{g_\gamma (f_\phi(X))})$$ and $\psi^\star=$ $$\begin{align*}
|
| 106 |
+
\argmax_{\psi \in \Psi} \mathbb{E}_{X \in p(X)} \left[\text{W}_p(\theta_{\psi,\gamma,\phi}\sharp P_X,\theta_{\psi,\gamma,\phi} \sharp P_{g_\gamma (f_\phi(X))})\right].
|
| 107 |
+
\end{align*}$$ Then, it is well-known that the amortization gap $\mathbb{E}_{X \sim p(X)}[c(\theta^\star (X), a_{\psi^\star}(X,g_\gamma (f_\phi(X))) )] > 0$ for a metric $c: \mathbb{S}^{d-1}\times \mathbb{S}^{d-1} \to \mathbb{R}^+$. A good amortized model is one that minimizes the amortization gap. However, in the amortized projection optimization setting, we cannot obtain $\theta^\star (X)$ since the projected gradient ascent algorithm can only yield a local optimum. Therefore, a careful investigation of the amortization gap is challenging.
|
| 108 |
+
|
| 109 |
+
In this section, we propose the self-attention amortized distributional projection optimization framework. First, we present amortized distributional projection optimization to maintain the metricity property in Section [3.1](#subsec:amortized_distributional){reference-type="ref" reference="subsec:amortized_distributional"}. We then introduce self-attention amortized models which are symmetric and permutation invariant in Section [3.2](#subsec:selfattention_models){reference-type="ref" reference="subsec:selfattention_models"}.
|
| 110 |
+
|
| 111 |
+
<figure id="fig:amortized" data-latex-placement="t">
|
| 112 |
+
<div class="center">
|
| 113 |
+
<table>
|
| 114 |
+
<tbody>
|
| 115 |
+
<tr>
|
| 116 |
+
<td style="text-align: center;"><embed src="images/Amortized.pdf" style="width:45.0%" /></td>
|
| 117 |
+
</tr>
|
| 118 |
+
</tbody>
|
| 119 |
+
</table>
|
| 120 |
+
</div>
|
| 121 |
+
<figcaption> <span>The difference between amortized projection optimization and amortized distributional projection optimization. </span> </figcaption>
|
| 122 |
+
</figure>
|
| 123 |
+
|
| 124 |
+
The current amortized projection optimization framework predicts the "max" projecting direction in Max-SW. However, the projected one-dimensional Wasserstein is only a metric on the space of probability measures at the global optimum of Max-SW. Therefore, the local optimum from the projected sub-gradient ascent algorithm [@nietert2022statistical] and the prediction from the amortized model only yield pseudo-metricity for the projected Wasserstein.
|
| 125 |
+
|
| 126 |
+
::: {#prop:pseudo_metricity .proposition}
|
| 127 |
+
**Proposition 1**. *Let the projected one-dimensional Wasserstein be $\text{PW}_p(\mu,\nu;\hat{\theta}) = \text{W}_p (\hat{\theta} \sharp \mu,\hat{\theta} \sharp \nu)$ for any $\mu,\nu \in \mathcal{P}_p(\mathbb{R}^d)$ ($p\geq 1, d\geq 1$) and $\hat{\theta} \in \mathbb{S}^{d-1}$ such that $\hat{\theta} \neq \argmax_{\theta \in \mathbb{S}^{d-1}}\text{W}_p(\theta \sharp \mu,\theta \sharp \nu)$. Then $\text{PW}_p(\mu,\nu;\hat{\theta})$ is a pseudo metric on $\mathcal{P}_p(\mathbb{R}^d)$: it satisfies symmetry, non-negativity, the triangle inequality, and $\mu=\nu$ implies $\text{PW}_p(\mu,\nu;\hat{\theta})=0$; however, $\text{PW}_p(\mu,\nu;\hat{\theta}) = 0$ does not imply $\mu=\nu$.*
|
| 128 |
+
:::
|
| 129 |
+
|
| 130 |
+
The proof for Proposition [1](#prop:pseudo_metricity){reference-type="ref" reference="prop:pseudo_metricity"} is given in Appendix [7.1](#subsec:proof:pseudo_metricity){reference-type="ref" reference="subsec:proof:pseudo_metricity"}. This result implies that if the reconstruction loss $\mathbb{E}_{X \sim p(X)} [\text{PW}_p(P_X,P_{g_\gamma(f_\phi(X))};\hat{\theta}(X))]=0$, it does not follow that $X=g_\gamma(f_\phi(X))$ for p-almost surely $X \in \mathcal{X}$. Therefore, a local maximum for $\max_{\theta \in \mathbb{S}^{d-1}}$ in Max-SW reconstruction (Equation [\[eq:reconstruction_max\]](#eq:reconstruction_max){reference-type="ref" reference="eq:reconstruction_max"}) and the global maximum for $\max_{\psi \in \Psi}$ in amortized Max-SW reconstruction (Equation [\[eq:amortized_reconstruction\]](#eq:amortized_reconstruction){reference-type="ref" reference="eq:amortized_reconstruction"} with a misspecified amortized model) cannot guarantee perfect reconstruction even when their objectives attain value $0$.
|
| 131 |
+
|
| 132 |
+
**Amortized Distributional Projection Optimization.** To overcome the issue, we propose to replace Max-SW in Equation [\[eq:reconstruction_max\]](#eq:reconstruction_max){reference-type="ref" reference="eq:reconstruction_max"} with the von Mises Fisher distributional sliced Wasserstein (v-DSW) distance [@nguyen2021improving]: $$\begin{align}
|
| 133 |
+
\label{eq:vmf_reconstruction}
|
| 134 |
+
& \min_{\phi \in \Phi,\gamma \in \Gamma} \mathbb{E}_{X\sim p(X)} \Big{[} \max_{\epsilon \in \mathbb{S}^{d-1}} \Big{(} \mathbb{E}_{\theta \sim \text{vMF}(\epsilon,\kappa)}
|
| 135 |
+
\nonumber \\
|
| 136 |
+
& \hspace{4 em} \quad \quad \text{W}_p^p(\theta \sharp P_X,\theta \sharp P_{g_\gamma (f_\phi(X))})\Big{)}^{\frac{1}{p}} \Big{]},
|
| 137 |
+
\end{align}$$ where $\text{vMF}(\epsilon,\kappa)$ is the von Mises Fisher distribution with the mean location parameter $\epsilon \in \mathbb{S}^{d-1}$ and the concentration parameter $\kappa >0$, and $\text{v-DSW}_p(\mu,\nu;\kappa) =\max_{\epsilon \in \mathbb{S}^{d-1}} \Big{(} \mathbb{E}_{\theta \sim \text{vMF}(\epsilon,\kappa)} \text{W}_p^p(\theta \sharp \mu,\theta \sharp \nu) \Big{)}^{\frac{1}{p}}$ is the von Mises Fisher distributional sliced Wasserstein distance. The optimization can be solved by a stochastic projected gradient ascent algorithm with the vMF reparameterization trick. In particular, $\theta_1,\ldots,\theta_L$ ($L\geq 1$) are sampled i.i.d. from $\text{vMF}(\epsilon,\kappa)$ via reparameterized acceptance-rejection sampling [@davidson2018hyperspherical] to approximate $\nabla_{\epsilon} \mathbb{E}_{\text{vMF}(\epsilon,\kappa)}[\text{W}_p^p(\theta \sharp \mu,\theta \sharp \nu)]$ via Monte Carlo integration. We refer the reader to Section [6.2](#subsec:vMF){reference-type="ref" reference="subsec:vMF"} for more detail about the vMF distribution, its sampling algorithm, its reparameterization trick, and the stochastic gradient estimators. We present a visualization of the difference between the new amortized distributional projection optimization framework and the conventional amortized projection optimization framework in Figure [2](#fig:amortized){reference-type="ref" reference="fig:amortized"}. The corresponding amortized objective is: $$\begin{align}
|
| 138 |
+
\label{eq:amortized_vmf_reconstruction}
|
| 139 |
+
& \min_{\phi \in \Phi,\gamma \in \Gamma}\max_{ \psi \in \Psi}\mathbb{E}_{X \sim p(X)}\Big{(}\mathbb{E}_{\theta \sim \text{vMF}(\epsilon_{\psi,\gamma,\phi},\kappa)}
|
| 140 |
+
\nonumber \\
|
| 141 |
+
& \hspace{6 em} \quad \quad \text{W}_p^p(\theta \sharp P_X,\theta \sharp P_{g_\gamma (f_\phi(X))})\Big{)}^{\frac{1}{p}},
|
| 142 |
+
\end{align}$$ where $\epsilon_{\psi,\gamma,\phi} = a_\psi(X,g_\gamma (f_\phi(X)))$. The optimization is solved by an alternating stochastic (projected)-gradient descent-ascent algorithm with the vMF reparameterization.
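A sketch of the inner Monte Carlo objective for one pair of point-clouds is shown below; `sample_vmf` is an assumed reparameterized vMF sampler (e.g., following the acceptance-rejection reparameterization of [@davidson2018hyperspherical]) and is not spelled out here. Ascent on $\psi$ and descent on $(\phi,\gamma)$ then alternate exactly as in the Max-SW sketch above.

```python
import torch

def amortized_vdsw_objective(X, Y, a_psi, sample_vmf, kappa=50.0, L=10, p=2):
    """Monte Carlo estimate of ( E_{theta ~ vMF(eps, kappa)} W_p^p )^{1/p},
    with eps predicted by the amortized model. X, Y: point-clouds of shape (m, d);
    sample_vmf(loc, kappa, L) is an assumed sampler returning an (L, d) tensor."""
    eps = a_psi(X, Y)                           # predicted vMF location on S^{d-1}
    thetas = sample_vmf(eps, kappa, L)          # (L, d) reparameterized directions
    Xp, _ = torch.sort(X @ thetas.t(), dim=0)   # (m, L) sorted projected supports
    Yp, _ = torch.sort(Y @ thetas.t(), dim=0)
    w_pp = (Xp - Yp).abs().pow(p).mean(dim=0)   # one-dimensional W_p^p per direction
    return w_pp.mean().pow(1.0 / p)
```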
|
| 143 |
+
|
| 144 |
+
::: {#theo:reconstruction .theorem}
|
| 145 |
+
**Theorem 1**. *For any $\epsilon \in \mathbb{S}^{d-1}$ and $0 \leq \kappa < \infty$, if $\mathbb{E}_{X \sim p(X)} \left(\mathbb{E}_{\theta \sim \text{vMF}(\epsilon,\kappa)} \text{W}_p^p(\theta \sharp P_X,\theta \sharp P_{g_\gamma (f_\phi(X))}) \right)^{\frac{1}{p}}= 0$, then $X=g_\gamma (f_\phi(X))$ for p-almost surely $X \in \mathcal{X}$.*
|
| 146 |
+
:::
|
| 147 |
+
|
| 148 |
+
The proof of Theorem [1](#theo:reconstruction){reference-type="ref" reference="theo:reconstruction"} is given in Appendix [7.2](#subsec:proof:reconstruction){reference-type="ref" reference="subsec:proof:reconstruction"}. The proof is based on proving the metricity of the *non-optimal* von Mises Fisher distributional sliced Wasserstein distance (v-DSW) with the smoothness condition of the vMF distribution. It is worth noting that the proof of metricity of von Mises Fisher distributional sliced Wasserstein distance is new since the original work [@nguyen2021improving] only shows the pseudo-metricity with the global optimality condition. Theorem [1](#theo:reconstruction){reference-type="ref" reference="theo:reconstruction"} indicates that a perfect reconstruction can be obtained with a local optimum for $\max_{\epsilon \in \mathbb{S}^{d-1}}$ in v-DSW reconstruction (Equation [\[eq:vmf_reconstruction\]](#eq:vmf_reconstruction){reference-type="ref" reference="eq:vmf_reconstruction"}) and a local optimum for $\max_{\psi \in \Psi}$ in amortized v-DSW reconstruction (Equation [\[eq:amortized_vmf_reconstruction\]](#eq:amortized_vmf_reconstruction){reference-type="ref" reference="eq:amortized_vmf_reconstruction"}).
|
| 149 |
+
|
| 150 |
+
**Comparison with SW and Max-SW:** When $\kappa \to 0$, the vMF distribution converges weakly to the uniform distribution over the unit hypersphere. Hence, we can get back the conventional sliced Wasserstein reconstruction in both Equation [\[eq:vmf_reconstruction\]](#eq:vmf_reconstruction){reference-type="ref" reference="eq:vmf_reconstruction"} and Equation [\[eq:amortized_vmf_reconstruction\]](#eq:amortized_vmf_reconstruction){reference-type="ref" reference="eq:amortized_vmf_reconstruction"}. When $\kappa \to \infty$, vMF distribution converges weakly to the Dirac delta at the location parameter. Therefore, we obtain Max-SW reconstruction and amortized Max-SW reconstruction in Equation [\[eq:vmf_reconstruction\]](#eq:vmf_reconstruction){reference-type="ref" reference="eq:vmf_reconstruction"} and Equation [\[eq:amortized_vmf_reconstruction\]](#eq:amortized_vmf_reconstruction){reference-type="ref" reference="eq:amortized_vmf_reconstruction"}, respectively. However, when $0 <\kappa<\infty$, v-DSW reconstruction and amortized v-DSW reconstruction can find a region of discriminative projecting directions while preserving the metricity for perfect reconstruction.
|
| 151 |
+
|
| 152 |
+
<figure id="fig:invariance" data-latex-placement="t">
|
| 153 |
+
<div class="center">
|
| 154 |
+
<table>
|
| 155 |
+
<tbody>
|
| 156 |
+
<tr>
|
| 157 |
+
<td style="text-align: center;"><embed src="images/Invariance.pdf" style="width:45.0%" /></td>
|
| 158 |
+
</tr>
|
| 159 |
+
</tbody>
|
| 160 |
+
</table>
|
| 161 |
+
</div>
|
| 162 |
+
<figcaption> <span>Visualization of an amortized model that is not symmetric and permutation invariant in two dimensions. </span> </figcaption>
|
| 163 |
+
</figure>
|
| 164 |
+
|
| 165 |
+
We now discuss the parameterization of the amortized model for amortized optimization.
|
| 166 |
+
|
| 167 |
+
**Permutation Invariance and Symmetry.** Let $X$ and $Y$ be two point-clouds; the optimal slicing distribution $\text{vMF}(\epsilon^\star,\kappa)$ of v-DSW between $P_X$ and $P_Y$ can be obtained by running Algorithm [\[alg:trainingvDSW\]](#alg:trainingvDSW){reference-type="ref" reference="alg:trainingvDSW"} in Appendix [6.3](#subsec:training_algorithms){reference-type="ref" reference="subsec:training_algorithms"}. Clearly, $\text{vMF}(\epsilon^\star,\kappa)$ is invariant to the permutation of the supports since $P_{\sigma(X)} = P_X$ and $P_{\sigma(Y)} = P_Y$ for a permutation function $\sigma$. Moreover, the optimal slicing distribution $\text{vMF}(\epsilon^\star,\kappa)$ is also unchanged when we exchange $P_X$ and $P_Y$ since v-DSW is symmetric. However, the current amortized models (see Definition [2](#def:linear_model){reference-type="ref" reference="def:linear_model"} and Definitions [4](#def:glinear_model){reference-type="ref" reference="def:glinear_model"}-[5](#def:nonlinear_model){reference-type="ref" reference="def:nonlinear_model"} in Appendix [6.1](#subsec:additional_amortized_models){reference-type="ref" reference="subsec:additional_amortized_models"}) are neither permutation invariant nor symmetric, namely, $a_\psi (X,Y) \neq a_\psi (X,\sigma(Y))$ and $a_\psi (X,Y)\neq a_\psi (Y,X)$. Therefore, the current amortized models could be strongly misspecified. We show a visualization of an amortized model that is not symmetric and permutation invariant in Figure [3](#fig:invariance){reference-type="ref" reference="fig:invariance"}. To address the issue, we propose amortized models that are symmetric and permutation invariant based on the self-attention mechanism.
|
| 168 |
+
|
| 169 |
+
**Self-Attention Mechanism.** Attention is well-known for its effectiveness in learning long-range dependencies when data are sequences such as text [@devlin-etal-2019-bert; @liu2019roberta; @brown2020language] or speech [@li2019improving; @wang2020transformer]. This mechanism was then successfully generalized to other data types including image [@carion2020end; @dosovitskiy2020image], video [@sun2019videobert], graph [@dwivedi2021generalization], point-cloud [@zhao2021point; @guo2021pct], to name a few. We now revisit the attention mechanism [@vaswani2017attention]. Given $Q, K \in \sR^{m \times d_k}, V \in \sR^{m \times d_v}$, the *scaled dot-product attention* operator is defined as: $$\begin{align}
|
| 170 |
+
\label{eq:scaled_dot_product_attention}
|
| 171 |
+
\mathrm{Att}(Q, K, V) = \text{softmax}_{\rm{row}} \left[ \frac{QK^T}{\sqrt{d_k}} \right] V
|
| 172 |
+
\end{align}$$ where $\text{softmax}_{\rm{row}}$ denotes the row-wise softmax function. In the self-attention mechanism, the query matrix $Q$, the key matrix $K$, and the value matrix $V$ are usually computed by projecting the input sequence $X$ into different subspaces. Thus, the self-attention mechanism is given as follows. Given $X \in \sR^{m \times d}$, the *self-attention* operator is: $$\begin{align}
|
| 173 |
+
\label{eq:self_attention}
|
| 174 |
+
\gA_{\zeta}(X) = \mathrm{Att}(XW_q, X W_k, X W_v)
|
| 175 |
+
\end{align}$$ where $W_q, W_k \in \sR^{d \times d_k}, W_v \in \sR^{d \times d_v}$ and $\zeta = (W_q, W_k, W_v)$. The self-attention operator is infamous for its quadratic memory and computational costs. In particular, given an input sequence of length $m$, both the time and space complexity are $\gO(m^2)$. Since we focus on the sliced Wasserstein setting where the computational complexity should be at most $\mathcal{O}(m\log m)$, the conventional self-attention is not appropriate. Several works [@li2020linear; @katharopoulos2020transformers; @wang2020linformer; @shen2021efficient] have been proposed to reduce the overall complexity from $\gO(m^2)$ to $\gO(m)$. In this paper, we utilize two linear complexity variants of attention which are efficient attention [@shen2021efficient] and linear attention [@wang2020linformer]. Given $X \in \sR^{m \times d}$, the *efficient self-attention* is defined as: $$\begin{align}
|
| 176 |
+
\label{eq:efficient_self_attention}
|
| 177 |
+
&\gE\gA_{\zeta}(X) =\nonumber \\ & \text{softmax}_{\rm{row}}(X W_q) \left[ \text{softmax}_{\rm{col}}(X W_k)^T (X W_v) \right]
|
| 178 |
+
\end{align}$$ where $W_q, W_k \in \sR^{d \times d_k}, W_v \in \sR^{d \times d_v}$, $\zeta = (W_q, W_k, W_v)$, and $\text{softmax}_{\rm{col}}$ denotes applying the softmax function column-wise. The *linear self-attention* is: $$\begin{align}
|
| 179 |
+
\label{eq:linear_self_attention}
|
| 180 |
+
\gL\gA_{\zeta}(X) = \mathrm{Att}(X W_q, W_{k1} X W_{k2}, W_{v1} X W_{v2})
|
| 181 |
+
\end{align}$$ where $W_q, W_{k2} \in \sR^{d \times d_k}, W_{v2} \in \sR^{d \times d_v}$, $W_{k1}, W_{v1} \in \sR^{k \times m}$, and $\zeta = (W_q, W_{k1}, W_{k2}, W_{v1}, W_{v2})$. The projected dimension $k$ is chosen such that $m \gg k$ to reduce the memory and computational consumption significantly.
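To illustrate the two linear-complexity operators above, here is a compact PyTorch sketch for a single (non-batched) input $X$ of shape $(m, d)$; weight shapes follow the definitions, and variable names are ours.

```python
import torch
import torch.nn.functional as F

def efficient_attention(X, Wq, Wk, Wv):
    """Efficient attention: softmax queries row-wise, keys column-wise, and
    contract keys with values first, so the cost is linear in m."""
    Q = F.softmax(X @ Wq, dim=-1)            # (m, d_k), row-wise softmax
    K = F.softmax(X @ Wk, dim=0)             # (m, d_k), column-wise softmax
    return Q @ (K.t() @ (X @ Wv))            # (m, d_v)

def linear_attention(X, Wq, Wk1, Wk2, Wv1, Wv2, d_k):
    """Linear attention: project keys/values along the point dimension m -> k."""
    Q = X @ Wq                               # (m, d_k)
    K = Wk1 @ (X @ Wk2)                      # (k, d_k)
    V = Wv1 @ (X @ Wv2)                      # (k, d_v)
    return F.softmax(Q @ K.t() / d_k ** 0.5, dim=-1) @ V   # (m, d_v)
```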
|
| 182 |
+
|
| 183 |
+
**Self-Attention Amortized Models:** Based on the self-attention mechanism, we introduce the self-attention amortized model which is permutation invariant and symmetric. Formally, the *self-attention amortized model* is defined as:
|
| 184 |
+
|
| 185 |
+
::: {#def:efficient_attention_model .definition}
|
| 186 |
+
**Definition 3**. Given $X,Y \in \sR^{dm}$, the *self-attention amortized model* is defined as: $$\begin{align}
|
| 187 |
+
a_\psi (X,Y)=\frac{\gA_{\zeta}(X'^\top)^\top \vone_{m} + \gA_{\zeta}(Y'^\top)^\top \vone_{m}}{||\gA_{\zeta}(X'^\top)^\top \vone_{m} + \gA_{\zeta}(Y'^\top)^\top \vone_{m}||_2},
|
| 188 |
+
\end{align}$$ where $X'$ and $Y'$ are matrices of size $d\times m$ that are reshaped from the concatenated vectors $X$ and $Y$ of size $dm$, $\vone_{m}$ is the $m$-dimensional vector whose all entries are $1$ and $\psi =(\zeta)$.
|
| 189 |
+
:::
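A minimal PyTorch sketch of Definition 3 is given below, using the scaled dot-product attention; $d_v$ is set to $d$ so that the pooled output lies in $\mathbb{R}^d$ and can be normalized onto $\mathbb{S}^{d-1}$. The attention operator can be swapped for the efficient or linear variants above.

```python
import torch

class SelfAttentionAmortizedModel(torch.nn.Module):
    """Self-attention amortized model of Definition 3: shared attention weights,
    sum-pooling over points, and L2 normalization of the pooled vector."""
    def __init__(self, d, d_k):
        super().__init__()
        self.Wq = torch.nn.Parameter(torch.randn(d, d_k))
        self.Wk = torch.nn.Parameter(torch.randn(d, d_k))
        self.Wv = torch.nn.Parameter(torch.randn(d, d))
        self.d_k = d_k

    def pool(self, P):                        # P: (m, d), i.e., X'^T with points as rows
        Q, K, V = P @ self.Wq, P @ self.Wk, P @ self.Wv
        A = torch.softmax(Q @ K.t() / self.d_k ** 0.5, dim=-1) @ V
        return A.sum(dim=0)                   # A(P)^T 1_m: pooling over the m points

    def forward(self, X, Y):                  # X, Y given as (m, d) matrices
        v = self.pool(X) + self.pool(Y)       # symmetric in X and Y by construction
        return v / v.norm(p=2)
```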
|
| 190 |
+
|
| 191 |
+
By replacing the conventional self-attention with the linear self-attention and the efficient self-attention, we obtain the *linear self-attention amortized model* and the *efficient self-attention amortized model*.
|
| 192 |
+
|
| 193 |
+
::: {#prop:invariance .proposition}
|
| 194 |
+
**Proposition 2**. *Self-attention amortized models are symmetric and permutation invariant.*
|
| 195 |
+
:::
|
| 196 |
+
|
| 197 |
+
The proof of Proposition [2](#prop:invariance){reference-type="ref" reference="prop:invariance"} is given in Appendix [7.3](#subsec:proof:invariance){reference-type="ref" reference="subsec:proof:invariance"}. The symmetry follows directly from the definition of the self-attention amortized models. The permutation invariance is proved by showing that the self-attention operators combined with average pooling are permutation invariant.
|
| 198 |
+
|
| 199 |
+
**Comparison with Set Transformer.** The authors in [@lee2019set] also proposed a method to guarantee the permutation invariance of set representations. There are two main differences between our work and theirs. Firstly, Set Transformer introduced a new attention mechanism and a new Transformer architecture, while we only present an approach to apply *any* attention mechanism while preserving the permutation invariance property of amortized models. Secondly, Set Transformer maintains the permutation invariance property by using a learnable multi-head attention as the aggregation scheme. We instead still rely on average pooling, a conventional permutation invariant aggregation scheme, to accumulate features learned by self-attention operations. Nevertheless, our work is orthogonal to Set Transformer; in other words, it is possible to apply techniques from Set Transformer to our attention-based amortized models. We leave this investigation for future work.
|
| 200 |
+
|
| 201 |
+
[]{#table:short_reconstruction_result label="table:short_reconstruction_result"}
|
| 202 |
+
|
| 203 |
+
<figure id="fig:reconstructed_point_clouds_short" data-latex-placement="!t">
|
| 204 |
+
<div class="center">
|
| 205 |
+
<table>
|
| 206 |
+
<tbody>
|
| 207 |
+
<tr>
|
| 208 |
+
<td style="text-align: center;"><img src="images/reconstruction_short_shapenetcore55.jpg" style="width:100.0%" alt="image" /></td>
|
| 209 |
+
</tr>
|
| 210 |
+
</tbody>
|
| 211 |
+
</table>
|
| 212 |
+
</div>
|
| 213 |
+
<figcaption> <span>Qualitative results of reconstructing point-clouds in the ShapeNet Core-55 dataset. From top to bottom, the point-clouds are input, SW, Max-SW (T = 50), v-DSW (T = 50), and <span class="math inline">$\gL\gA$</span>v-DSW respectively.</span> </figcaption>
|
| 214 |
+
</figure>
|
2301.13573/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2022-09-28T10:12:15.421Z" agent="5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.150 Safari/537.36" etag="Oio9TXvKYThotiteKFSq" version="20.3.2" type="device"><diagram id="lqbREd_3p3CWPbLZDvFF" name="Page-1">7V3blpu4Ev0arzXz0CwkISEeu53LzGSSkznJ5DIvWbSNbSa28cF0+vL1R7LBgCQMGIHpmO6HNjItbO2tqlJVqTRC49XD69DdLN4GU285gub0YYRejCDEkFL2h7c8xi3Y3jfMQ3+6bwJpwwf/yYsbTTNuvvOn3jZ3ZxQEy8jf5BsnwXrtTaJcmxuGwX3+tlmwzD924849qeHDxF3GrTht/exPo8W+lWIzbf/N8+eL5MmHD75yk5vjjrcLdxrc75t296CXIzQOgyDav1o9jL0lH71kYPYdvSp49/BxQ28dVfkH25k5D4vv/x2Dsfv29ddXT+gmuHLibn64y7v4G8efNnpMhsCbshGJL4MwWgTzYO0uX6atN2Fwt556/Dkmu0rv+TMINqwRsMZ/vSh6jPF176KANS2i1TJ+dxaso1fuyl9ymoyDu9D3QvYh3nn38ZvxfwIYX4+DZRCyhqk3c++W7PvfeOuEPvye/Zfgn7xwtMABA8ZeL1h5UfjIbgm9pRv5P/L/58Ysmh/uO/zr+8BnPUIzpvwVtQ3b2f9XzHkbGJSa6Y/QZ+SGcy+Ku8niV94zQMCwSb67LRu9iSd1x15kvmbatCNJHcKYPydhvAc/+sI/j4Hjq6/xp+OvXzxkLx7ji6okY4OzA+XIsFq0HTYChA0o0BEZJj2RgoruAIKGg530h3bLR/hz8vF8AgxZTM7QDKBEpk9GmtknUqnkMWemFXUGWmmWRNTUJ4aEvrojy1c/RGuInj6/xbdfVs6W0iv47HSiWUiWLYMgymrFvI5cs3HKKEl++TV++O4iVZO7q7yeLFGLypHFsVDYM+PIjRQpmd2AxjmS1GUEeV6EqCc9TgHS1g1PgWSwLEHKAINYlaSMLmGAf07oT7ePT6ELsDriC3LwiXwp76vG6uw6DN3HzG0bfsO2IT2/w2/gz8WHV97n4P538J+P78evZ1eQlvMzZR+n0v3Cj7wPG3cH2H3obgSm+ctlQp4RRB6YYs/eqZIw+O5l3nGIjVxyeCfxpiDWsvFCn30xL+SP8dfzmD/z0J36XsrNdbD2jtHqhxdG3sNRvhzkBMnLCYoNHPdyn7p5UKLZFxkXTyJhmigNJTRY5Yshu7nIZ2gOI/K/uyB542q7m6bX7AY2LA+7EUreZ6/m/K9hGElf7LPtu9u/I6HPhjDKQ5yHMkYhi3vc5C79+ZpdThgwTNygGw6IP3GX1/EbK3863ck3FafyMk8HyBgKIKeTOwMyUWCMNGD8/s2T/cfdh/u/3cUfXz/9fTX+iy6vYCnGM3eShzgvvSVkx+7d1uXP/Ri66+0sCFf83kuD2iLUsPPi12ELDNWUTm/MIp44erWbgna78pZJWzq1VPKWwltEZHkLiuVtG4KVK0EM5KVZBhJLMQGBhgl41LJoIGRBkZAdQda59bT/U3UOss/sb7ZFMyUDvLvd7AMcM/+Bk0ViguvR2UTFBDKh3u0sN+92cz0JSOx07dLdbuPX2+9eNFnEFxosSS28MgXjCgMDOUd5BRW8gq3xqoJd1YHdX3G91vk6rAiuErtY2bEBM+5HkRYWMKCdeU4196SuxR9QuQ33cmC7cdcapMs/knTZdzxIl0bSBct0VZiKnUoUpPIqDsuBBssBes7lgBJjq4LnuIl1OJt5ZKKctVPbuTXNrq1DRGwjL7AJMw+hHBw/l3lo9UONVxWLpKr3TrfnvtQhbzkdWRo8jihQin1JoDARatsavGuLEmQCRBz2N/cUzJ5iZuwZp1NDwyo2NDQtY0Z4vHCjkX3jjuwXw5KmBaMDYZG5KM+psy9qsEo9cSZECSF6aSzoZUGxjNVCAiSSwOLpYhLuQIG7I4UX9CFfIS+wL2ECHTAQJKwjITUAklDAKs8hMVpzHmJYJubruotfrifB9BI9xAQh0UMM2AxC8lTD0OjSQUxVGD9/k3OXSnLNM7BH5ZG70oBwkt9Ranfa2u3OZhNYlQmi1U7bDrZZC2qZAFEtA2KphEWn9hjRrZX1W0o6Bt+m4uDbpsGWYce8BAApgCBtyWxSQWbnJbIWIKSpdksxd13qkN6i54hOvJ3nSAugtggoxaWAqlIvDppZP6IV8rYGRFNEJUdeOaKqGdomohWSMAdEU0Qlb0T/5mitXIoBUcm10DtE7VrxjwFRE/Ye0Vom6sUjSpz+IzrYurUQpf1HFEmI7hICeun7e07OfBuL2DulC1e7y3wDW5X9uEta/Dayb8xMpG/gQmMnhrATqdyFofQ7g9akgGrFe+ACGLigkQvSarl/bFCtlg9sgAMbdGoJaa9C/+igWmof6PBuoIPGUINkMPaPDa1nG7W1aUKBYiHiJ4WragfFtFBGik4RaFBHTnkrCVSl3eiPaqv8AEPqkE7BgR0pRkn7kDtE5eVlW7mJT+3lJl6K5HB4WPuosulachStSwbJ0aLkMC2DZPPw4dmliLwgWfnriwD/SIEeLTJAypgv91IpZUBbXipaZRPGepokl034ePt72ZnJOUubM+TIw6YBEEXSm3LfxSiz6yKzCUP7vovKhZCyCY8KcJO2ptsziBRWBiYygIJttbdnEFPsGrOeYTZ/XtBRLW/KUBaNGzRXu5rLRgYja93oSouKS1FS99Nfn65fsqZxMPVug+D7RRChZS1mkzTvPtk5gZkNo5j+GS6kmYWdxFscVbxFwL6FhE15DbH7qQEeFrBjfex/5JXLDPNfTahK5UNsckr5EMYEANvCVLW8LcT04uPhFhZ9FKWIqjfZJNtx9AM6pH42qpXAKyopSm4cBxQgg9ptATrkCdYC1MrPUGieMkPbnKAVijEPeBbX3nGUhS5UUhYY4mpMX4Ekc8j1rIWiuCPVOWVWghanJTDllc6A6DFERdv2FMWJWlScwGweurEKIjfsu3O+cMeyuS8/5K44/Ovb7WZ3m/nmeW1Kfk5uE4sZ3SCv5KFSKTi2kdQwy+9oNJI04Cak89+++Xd9DV8vF+7v5peX7z6tlh+viiuiFpZMm+wHidMtnN/+wosVsGeb6d9flRSsS2X7eBASmpxUePxtEUS/5EKSPEeGvX7xa0lssloNt37QvRVL04IGyp7bQCRCIg1hRiXtiiVdU9qZAo1wWSybD3O49aIda3ikJRaNMTvMjzzKjcfbu1W94n8/EXGIQBzEBdJR4gCVfUuNZC3RhE/KE2NUfBp8/3klJiFe6eSZ1H4SE+UK9uS35/BXIq8sKSygfabSr+c6P0qu1Uuzhq5wwkKtgmxEXgJlar+KIOuL+KmhV7kONdV73Q41XrWKD7ncG
sDwECbooKSHkkFQJTwGvaEVeNEfRoSUN5kCHWsQWCsCcUIZ2MSNUew6qVkGtiEitoAI0+xQXoy2VflVfQLnszuooRkG0OwhBqrEr2exz+CyVSuwZIPvTIczqHnVj9M8+2KhS8dnZuyg2jY568zIhSWgYJQLHbdskStL1g4nMPRfhDgyJ5Hs1elUbOAin87HwTbXBTsUPTrYyNVuP7ttrqzg2cQu7H8Kn2bD0rarGpaa0vbUOF5aUlBTeSym7SlRbC9VTw1irdNLBxBFpeo4hpwT3V52nhrDS8vmaoghFbPzqk7EFuchubRcrqYYivNQady2l5GnBvHS0rcagiieHuBUnYiakvDUIMIBxFogisZpRY2oKe1OjeGwzmx9nSni3rt1JlEFAkVf0uX4KKUjALhDiCgQq+2wpEIOAcw/xoKGpQgOduS+VJahHxIKeumylItoMIpSK8Md+QCpTv2Xygr4g17RyQHpYAtJscjWRdeKpeXAdt+SCwCQA9sOPhqM7DTITZoX0/vZznVuOAeFY53JYS51cKyzOi1QuY/mHMZcdNKxv6cZgflj3AoGJh7ybB2bgju7MiyxcI6v0GXbGaT1tkX24QjwhuJZqtpDxFNvzyuf1dsah7OVn7dtfsajlQtYVpT4OxjnrRnnu6OVz1QrqoAFzTMeB2Mwl30gxq0ZfPDc5iAYiiG3nuRPu90axi7DgE+31BhkQ7h4G0w9fsf/AQ==</diagram></mxfile>
|
2301.13573/main_diagram/main_diagram.pdf
ADDED
|
Binary file (36.5 kB). View file
|
|
|
2301.13573/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,90 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Reinforcement Learning (RL) has been incredibly effective in a variety of online scenarios such as games and continuous control environments [@li2017deep]. However, these methods generally suffer from sample inefficiency, where millions of interactions with an environment are required. In addition, efficient exploration is needed to avoid local minima [@curiosity; @edl]. Because of these limitations, there is interest in methods that can learn diverse and useful primitives without supervision, enabling better exploration and re-usability of learned skills [@diyain; @disdain; @edl]. However, these online skill discovery methods still require interactions with an environment, where access may be limited.
|
| 4 |
+
|
| 5 |
+
This requirement has sparked interest in Offline RL, where a dataset of trajectories is provided. Some of these datasets [@fu2020d4rl] are composed of large and diverse trajectories of varying performance, making it non-trivial to make proper use of them; simply applying behavioral cloning (BC) leads to sub-optimal performance. Recently, approaches such as the Decision Transformer (DT) [@chen2021decisiontransformer] and the Trajectory Transformer (TT) [@tt] utilize Transformer architectures [@attention] to achieve high performance on Offline RL benchmarks. @gdt showed that these methods are effectively doing hindsight information matching (HIM), where the policies are trained to estimate a trajectory that matches given target statistics of future information. That work also generalizes DT as an information-statistic conditioned policy, the Generalized Decision Transformer (GDT). This results in policies with different capabilities, such as supervised learning and State Marginal Matching (SMM) [@smm], simply by varying the information statistics.
|
| 6 |
+
|
| 7 |
+
<figure id="fig:architecture" data-latex-placement="ht!">
|
| 8 |
+
<img src="images/copy_vqvae.drawio.png" style="width:100.0%" />
|
| 9 |
+
<figcaption>Skill Decision Transformer. States are encoded and clustered via VQ-VAE codebook embeddings. A Causal Transformer, similar to the original DT architecture, takes in a sequence of states, a latent skill distribution, represented as the normalized summed future counts of VQ-VAE encoding indices (details can be found in the "generate_histogram" function in <a href="#sssec:evaluating" data-reference-type="ref" data-reference="sssec:evaluating">8.5</a>), and the corresponding skill encoding of the state at timestep <span class="math inline"><em>t</em></span>. The skill histogram captures "future" skill behavior, while the skill embedding represents current skill behavior as timestep <span class="math inline"><em>t</em></span>.</figcaption>
|
| 10 |
+
</figure>
|
| 11 |
+
|
| 12 |
+
In the work presented here, we take inspiration from the previously mentioned skill discovery methods and introduce *Skill Decision Transformers* (Skill DT), a special case of GDT, where we wish to condition action predictions on skill embeddings and also *future* skill distributions. We show that Skill DT is not only able to discovery a number of discrete behaviors, but it is also able to effectively match target trajectory distributions. Furthermore, we empirically show that through pure unsupervised skill discovery, Skill DT is actually able to discover high performing behaviors that match or achieve higher performance on D4RL benchmarks [@fu2020d4rl] compared to other state-of-the-art offline RL approaches.
|
| 13 |
+
|
| 14 |
+
Our method is completely unsupervised and predicts actions, conditioned by previous states, skills, and distributions of future skills. Empirically, we show that Skill DT can not only perform SMM on target trajectories, but can also match or achieve higher performance on D4RL benchmarks [@fu2020d4rl] compared to other state-of-the-art offline RL approaches.
|
| 15 |
+
|
| 16 |
+
# Method
|
| 17 |
+
|
| 18 |
+
In this work, we consider learning in environments modelled as Markov decision processes (MDPs), which can be described by the variables $(S, A, P, R)$, where $S$ represents the state space, $A$ represents the action space, and $P(s_{t+1} | s_{t}, a_{t})$ represents the state transition dynamics of the environment.
|
| 19 |
+
|
| 20 |
+
The Decision Transformer (DT) [@chen2021decisiontransformer] represents RL as a sequence modelling problem and uses a GPT architecture [@gpt] to predict actions autoregressively. Specifically, DT takes in a sequence of RTGs, states, and actions, where $R_t = \sum_{t}^{T}r_t$, and trajectory $\tau = (R_0, s_0, a_0, ..., R_{|\tau|}, s_{|\tau|}, a_{|\tau|})$. DT uses $K$ previous tokens to predict $a_t$ with a deterministic policy which is optimized by a mean squared error loss between target and predicted actions. For evaluation, a target return $\hat{R}_{target}$ is provided and DT attempts to achieve the targeted return in the actual environment. @gdt introduced a generalized version of DT, Generalized Decision Transformer (GDT). GDT provides a simple interface for representing a variety of different objectives, configurable by different information statistics (for consistency, we represent variations of GDT with $\pi^{gdt}$):
|
| 21 |
+
|
| 22 |
+
::: center
|
| 23 |
+
$\tau_{t}$ = $s_{t}, a_{t}, r_{t}, ..., s_{T}, a_{T}, r_{T}$, $I^{\phi}$ = information statistics function
|
| 24 |
+
:::
|
| 25 |
+
|
| 26 |
+
Generalized Decision Transformer (GDT):
|
| 27 |
+
|
| 28 |
+
::: center
|
| 29 |
+
$\pi^{gdt}(a_{t} | I^{\phi}(\tau_0), s_{0}, a_{0} ..., I^{\phi}(\tau_t), s_{t-1}, a_{t-1})$
|
| 30 |
+
:::
|
| 31 |
+
|
| 32 |
+
Decision Transformer (DT):
|
| 33 |
+
|
| 34 |
+
::: center
|
| 35 |
+
$\pi^{gdt}_{dt}(a_{t} | I_{dt}^{\phi}(\tau_0), s_{0}, a_{0}, ..., I_{dt}^{\phi}(\tau_t), s_{t-1}, a_{t-1})$, where $I_{dt}^{\phi}(\tau_t) = \sum_{t}^{T}\gamma * r_{t}$, $\gamma$ = discount factor
|
| 36 |
+
:::
|
| 37 |
+
|
| 38 |
+
Categorical Decision Transformer (CDT):
|
| 39 |
+
|
| 40 |
+
::: center
|
| 41 |
+
$\pi^{gdt}_{cdt}(a_{t} | I_{cdt}^{\phi}(\tau_0), s_{0}, a_{0}, ..., I_{cdt}^{\phi}(\tau_t), s_{t}, a_{t})$, where $I_{cdt}^{\phi}(\tau_t) = histogram(s_{t}, ..., s_{T})$
|
| 42 |
+
:::
|
| 43 |
+
|
| 44 |
+
CDT is the most similar to Skill DT -- CDT captures future trajectory information using future state distributions, represented as histograms for each state dimension, essentially binning and counting the bin ids for each state dimension. Skill DT instead utilizes learned skill embeddings to generate future skill distributions, represented as histograms of **full** embeddings. In addition, Skill DT also makes use of the representation learnt by the skill embedding by also using it in tandem with the skill distributions.
|
| 45 |
+
|
| 46 |
+
Our Skill DT architecture is very similar to the original Decision Transformer presented in @chen2021decisiontransformer. While the classic DT uses summed future returns to condition trajectories, we instead make use of learned skill embeddings and future *skill distributions*, represented as a histogram of skill embedding indices, similar to the way the Categorical Decision Transformer (CDT) [@gdt] utilizes future state counts. One notable difference between Skill DT and the original Decision Transformer [@chen2021decisiontransformer] and the GDT [@gdt] variant is that we omit actions from the conditioning sequence. This is because we are interested in SMM through skills, where we want to extract as much information as possible from states.
|
| 47 |
+
|
| 48 |
+
Formally, Skill DT represents a policy: $$\pi(a_{t} |Z_{t-K}, z_{t-K}, s_{t-K}, ... Z_{t-1}, z_{t-1}, s_{t-1}),$$ where $K$ is the context length, and $\theta$ are the learnable parameters of the model. States are encoded as skill embeddings $\hat{z}_t$, which are then quantized using a learned codebook of embeddings $z = argmin_{n}||\hat{z} - z_n||^{2}_{2}$. The future skill distributions are represented as the normalized histogram of summed future one hot encoded skill indices: $Z_t = \sum_{t}^{T}one\_hot(z_{t})$. Connecting this to GDT, our policy can be viewed as:
|
| 49 |
+
|
| 50 |
+
::: center
|
| 51 |
+
$\pi^{gdt}_{skill}(a_{t} | I_{skill}^{\phi}(\tau_0), s_{0}, ..., I_{skill}^{\phi}(\tau_t), s_{t})$, where $I_{skill}^{\phi}(\tau_t) = (histogram(z_{t}, ..., z_{T}), z_{t}$).
|
| 52 |
+
:::
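As an illustration of the future skill distribution $Z_t$, the following is a small PyTorch sketch (names are ours) that turns a trajectory's quantized skill indices into normalized suffix histograms.

```python
import torch

def future_skill_histograms(skill_indices, num_skills):
    """For each timestep t, sum the one-hot encodings of skill indices from t to T
    and normalize, giving Z_t proportional to sum_{t'=t}^{T} one_hot(z_{t'})."""
    one_hot = torch.nn.functional.one_hot(skill_indices, num_skills).float()  # (T, K)
    future_counts = one_hot.flip(0).cumsum(0).flip(0)       # suffix sums over time
    return future_counts / future_counts.sum(dim=-1, keepdim=True)
```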
|
| 53 |
+
|
| 54 |
+
Hindsight experience replay (HER) is a method that has been effective in improving the sample-efficiency of goal-oriented agents [@her; @hpg]. The core concept revolves around *goal relabelling*, where trajectory goals are replaced by achieved goals rather than intended goals. This concept of re-labelling information has been utilized in a number of works [@DBLP:journals/corr/abs-1912-06088; @odt; @gogopeo] to iteratively learn and condition predictions on target statistics. The Bi-Directional Decision Transformer (BDT) [@gdt] utilizes an anti-causal transformer to encode trajectory information and passes it into a causal transformer action predictor. At every training iteration, BDT re-labels trajectory information with the anti-causal transformer. Similarly, Skill DT re-labels future skill distributions at every training iteration. Because the skill encoder is updated continually and skill representations change during training, the re-labelling of skill distributions is required to ensure stability in action predictions.
|
| 55 |
+
|
| 56 |
+
**VQ-VAE Skill Encoder**. Many previous works have represented discrete skills as categorical variables, sampled from a categorical distribution prior [@disdain; @diyain]. VQ-VAEs [@vqvae] have shown impressive capabilities with discrete variational inference in the space of computer vision [@visionvq; @taming], planning [@vqvaeplanning], and online skill discovery [@edl]. Because of this, we use a VQ-VAE to quantize encoded states into a set of continuous skill embeddings. We encode states into vectors $\hat{z}$ and quantize them to the nearest skill embeddings $z$. To ensure stability, we minimize the regularization term:
|
| 57 |
+
|
| 58 |
+
$$\begin{equation}
|
| 59 |
+
\label{eq:vqloss}
|
| 60 |
+
VQLOSS(z, \hat{z}) = MSE(z, \hat{z})
|
| 61 |
+
\end{equation}$$
|
| 62 |
+
|
| 63 |
+
where $\hat{z}$ is the output of the MLP encoder and $z$ is the nearest embedding in the VQ-VAE codebook.
|
| 64 |
+
|
| 65 |
+
Optimizing this loss minimizes the distance of our skill encodings to their corresponding nearest VQ-VAE embeddings. This is analogous to clustering, where we are trying to minimize the distance between datapoints and their cluster centers. In practice, we optimize this loss using an exponential moving average, as detailed in @robustvqvae.
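A minimal sketch of the quantization step and the regularization term above is shown below; the straight-through estimator and the omitted EMA codebook update are standard VQ-VAE choices rather than specifics of this paper, and the function name is ours.

```python
import torch

def quantize(z_hat, codebook):
    """Nearest-neighbour quantization of encoder outputs z_hat (B, D) against a
    codebook (K, D), returning straight-through embeddings, indices, and VQLOSS."""
    dists = torch.cdist(z_hat, codebook) ** 2          # (B, K) squared distances
    idx = dists.argmin(dim=-1)                         # nearest code index per encoding
    z = codebook[idx]                                  # quantized skill embeddings
    vq_loss = torch.nn.functional.mse_loss(z_hat, z.detach())
    z_st = z_hat + (z - z_hat).detach()                # straight-through gradient
    return z_st, idx, vq_loss
```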
|
| 66 |
+
|
| 67 |
+
**Causal Transformer**. The Causal Transformer portion of Skill DT shares a similar architecture to that of the original DT [@chen2021decisiontransformer], utilizing a GPT [@gpt] model. It takes in input the last $K$ states $s_{t-K:t}$, skill encodings $z_{t-K:t}$, and future skill embedding distributions $Z_{t-K:t}$. As mentioned above, the future skill embedding distributions are calculated by generating a histogram of skill indices from timestep $t:T$, and normalizing them so that they add up to 1. For states and skill embedding distributions, we use learned linear layers to create token embeddings. To capture temporal information, we also learn a timestep embedding that is added to each token. Note that we don't tokenize our skill embeddings because we want to ensure that we don't lose important skill embedding information. It's important to note that even though we don't add timestep embeddings to the skill embeddings, they still capture temporal behavior because the attention mechanism [@attention] of the causal transformer attends the embeddings to temporally conditioned states and skill embedding distributions. The VQ-VAE and Causal Transformer components are shown visually in Fig. [1](#fig:architecture){reference-type="ref" reference="fig:architecture"}.
|
| 68 |
+
|
| 69 |
+
<figure id="fig:training_procedure" data-latex-placement="ht!">
|
| 70 |
+
<img src="images/skill_dt_training.drawio.png" style="width:80.0%" />
|
| 71 |
+
<figcaption>Training procedure for Skill Decision Transformer. Sub-trajectories of states of length <span class="math inline"><em>k</em></span> are sampled from the dataset and encoded into latents and discretized. All three variables are passed into the causal transformer to output actions. The VQ-VAE parameters and Causal Transformer parameters are backpropagated directly using an MSE loss and VQ-VAE regularization loss, shown in Equation <a href="#eq:vqloss" data-reference-type="ref" data-reference="eq:vqloss">[eq:vqloss]</a>.</figcaption>
|
| 72 |
+
</figure>
|
| 73 |
+
|
| 74 |
+
Training Skill DT is very similar to how other variants of GDT are trained (CDT, BDT, DT, etc.). First, before every training iteration we re-label skill distributions for every trajectory using our VQ-VAE encoder. Afterwards, we sample minibatches of sequence length $K$, where timesteps are sampled uniformly. Specifically, at every training iteration, we sample $\tau = (s_{t}, ... s_{t+K}, a_{t}, ... a_{t+K})$, where $t$ is sampled uniformly for each trajectory in the batch. The sampled states, $(s_{t}, ... s_{t+K})$, are encoded into skill embeddings using the VQ-VAE encoder. We then pass in the states, encoded skills, and skill distributions into the causal transformer to output actions. Like the original DT [@chen2021decisiontransformer], we also did not find it useful to predict states or skill distributions, but it could be useful for actively predicting skill distributions without having to actually provide states to encode. This is a topic we hope to explore more in the future. The VQ-VAE encoder and causal transformer are updated by backpropagation through an MSE loss between target actions and predicted actions and the VQ-VAE regularization loss referenced in Equation [\[eq:vqloss\]](#eq:vqloss){reference-type="ref" reference="eq:vqloss"}. The simplified training procedure is shown in Algorithm [\[alg:skilldt\]](#alg:skilldt){reference-type="ref" reference="alg:skilldt"}.
|
| 75 |
+
|
| 76 |
+
:::: algorithm
|
| 77 |
+
::: algorithmic
|
| 78 |
+
[]{#alg:skilldt label="alg:skilldt"}
|
| 79 |
+
|
| 80 |
+
**Initialize** offline dataset $D$, Causal Transformer $f_\theta$, VQ-VAE Encoder $e_{\phi}$, context length $K$, num updates per iteration $J$
|
| 81 |
+
|
| 82 |
+
Sample timesteps uniformly: $t \in 1, ... max\_len$
|
| 83 |
+
|
| 84 |
+
Label dataset trajectories with skill distributions $Z_{\tau_t} = \sum_{t}^{T}one\_hot(z_{t})$ for all $t,..|\tau|$
|
| 85 |
+
|
| 86 |
+
Sample batch of trajectory states: $\tau = (s_{t}, ... s_{t+K}, a_{t}, ... a_{t+K})$
|
| 87 |
+
|
| 88 |
+
$\hat{z}_{\tau_{t:t+K}} = (e_{\phi}(s_{t}), ... e_{\phi}(s_{t+K}))$ Encode skills $z_{\tau_{t:t+K}} = quantize(\hat{z}_{\tau_{t:t+K}})$ Quantize skills with VQ-VAE $\hat{a}_{\tau_{t:t+K}}$ = $f_{\theta}(Z_{\tau_t}, z_{\tau_{t}}, s_{t}, ..., Z_{\tau_{t+K}}, z_{\tau_{t+K}}, s_{t+K})$ $L_{\theta,\phi} = \frac{1}{K}\sum_{t}^{t+K}(a_{t} - \hat{a}_t)^2 + VQLOSS_{\phi}(z_{\tau_{t:t+K}}, \hat{z}_{\tau_{t:t+K}})$ backprop $L_{\theta,\phi}$ w.r.t $\theta,\phi$
|
| 89 |
+
:::
|
| 90 |
+
::::
|
2303.04488/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2023-02-20T22:00:28.919Z" agent="5.0 (X11; CrOS x86_64 14541.0.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/109.0.0.0 Safari/537.36" etag="I8Y_6Zo2-MKQ5zdVmZve" version="20.8.20" type="google"><diagram id="jPQnGWKBsmfGr5VVc08V" name="Strona-1">7Vxbd9o4EP41OWf3AY5k+abHhGbbc9J2syU9bR8FFuDWWKxsEthfv3J8l5VgiGWczfKQYFmW7ZlPM9+MRlygyXr3npPN6hPzaHBhAG93gd5dGAaGQPxNGvZpg21lDUvue2kTLBum/j80a8y7bX2PRrWOMWNB7G/qjXMWhnQe19oI5+yh3m3BgvpdN2SZ3RGUDdM5CWij2zffi1dpq2s4ZfsH6i9X+Z2hjdMza5J3zgaOVsRjD5UmdH2BJpyxOP223k1okMgul0t63R9PnC0ejNMwbnPB/DL86+7zDGy/ff0Iru7vwpvR3chOR7knwTZ74exh430uAc62oUeTQcAFunpY+TGdbsg8OfsgVC7aVvE6EEdQfI1izn4VkkouaD5m9uT3lMd0V2nKHvs9ZWsa873okp11nEyEGYagmR0/lBox7axtVdGG6WaNJEPBshi7FJT4ksnqCLmhrgXXgZggkORkYqMhJ4hchZx0iclpCIV6Ynplh4zHK7ZkIQmuy9arutjKPh8Z22TC+knjeJ/ZCrKNWV2UQoJ8/z25fmzlhz+q597tssHTo/0hpEZsy+f0mdfMRBoTvqTxM/1gpo9ECM9qlNOAxP593QZ1rh1XgWE7EC9wtWBCBFW92X9vWX5iFD1K/lJ0gOZm9yi5/Lz4tkz+X94TPyAzcbUBbjld+5Ew4tng4lnT8dOuDYgIQ7lJvnokJlHMOO1l9jg4gUt1+lhG08xg3Jw9rqVJP/gss2fnx98r33+UE0kclVMnOXj5zIGw6xmRXXrL/EcI57qHplNXrmzz0smbXSYprniO03WZ85nzKBOMDauqT83aPGgIDXdQhjB/bh2W8FYwvIXoMI1JTNuawL6pAzbrzMGyYJM5GCqGpYs55K5Sh0ruOAmjBeNrykW3KzL/NWPhCbqBh3Wz8INgwgLGH69FiwW15/OCJ1fOeA6eAdBg0Kgz51ZnhtBq6hco9Gtr0y/Sp9/p9cfryd2T+uzoLiWzyYec8cqk/ymiUZ+FvaDKnVM1qmauZVr6UOXWUYUsBaqUVgMm/kgTsEwFsGTPG3qXSW5AHM0DEkX+vC7fuvk9NZ4onW8vvtdo6XvzmXfQ91a0aCmUmLe9kJkVCaECRM5Ywkb67g1qdngo1BhKN8uzeiMSoGl1rtcz6nl+uBwszTDqCjLcZoTVsxsaYB7HcZEUiRYhZk1ORp90zBminLAsJ6zCk9mnnDTmVAq28RqnvoPNhrJaclB9ysKdU4XC5ZsOrjj9ERiDPB5XO/7k4JZyX7wZ5S8nA86gfLyL4BgYCBcfedaOhScoPtLwbb2/a0vOxWqX4RHqJftKt03SIXrmZUz1fUokpiN2Sixyx6nDrkxY5IeKIGbqr/2AcD/eazEqUhAzI3PXQ6ogxkCmaXndmKEGSJAix9unDTJUqaduwpUiPqmkcLUsf+Ss93DaDwzKLEEIJZfknBh5OBKsYMvs8rG2BzvnsD2GNoiC4yDac0TddllvYLBuOCgsgbEtqiGQ0oYG0OVSpXnYC6xVGciO2F97eJ46FV4Aa/uVwhq6Y1z5PBEiH00brZ5oIzoHxlXJ0I5o45/co1xgXxWOykv+5Znf0uT8731QynPlxWWSCa12JFNblstQpSU78uBDzonnFuywqTMHZeowkGySXMfW1rhh1LUD7zpjbrRIwP4noYlbQjMPUQYCTQhkroaNU9drILDqYyHUGEvzgo3RIq/dJwkcmyaugXYMADoE3OSow9whaovMtkU85wl7kEzkWjNCVM9PohcH852jVuMqw9srjpFXJc9fHGOoFiZeW3FMvk7dby2MZ1HXM1Ugco0Zsu2+QKSshVGubmmshcnZ35sjV60zd3hQLqxZUHVyLUxjqN5rYZC+lY1Bgw+15U9m29r3npi9kSuoA2Yvl9sX+/n6Ap/xP0PqzrlBgOvqPD9FQhrrh79cf7n8fHPuql4KBZdxVMrGtoOINibTUPYwqMwbLevNDdlBb4KGVdaLBRik5M7pZEYxWP90xmrirc8NXCdlk44CdmLJsgeB5nOgPYxFoyUWO9/VJ+e3n0BYZ6AYYFksBI5Zd9bYbtrvXreBI435jBOXIVMf38sy5NkceVEiPpR1SFOVj+iKldNdvCWBEgecbjiNhCRJlbyV58V/sk6UGM6i5N/jPotYmGihgSM3r5+2WHDMZmtxh8xK2zVjXzX1YgzsHOIx3S4b5Euoh51D29rkDMYjMEaKH7LQGglLy6bFJrbjV7ikgj1LHqmj6hIIpNUP6IIDTyYvlzhHXpC/irKApbw8fwO2WERUixM2VYmmdLZGGxK+yKpM/eWaibuXsz8d8tUWQDecwtkroE1VquYNRHZmazY9rCQ1hJaUH7BPzhJKIzXqXLqL6ejDu0+fRlvylewm8c0PfrP8EI005pAauzUd8YjyhnENm7QalkABzudCB7du40HTOHQVOojD8sfPUo2WvyCHrv8F</diagram></mxfile>
|
2303.04488/main_diagram/main_diagram.pdf
ADDED
|
Binary file (16.1 kB). View file
|
|
|
2303.04488/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,14 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Interactive proof assistants such as Isabelle [@paulson2000isabelle] and Lean [@moura2015lean] are software tools designed to assist the development of formal proofs. They provide an expressive language for formalizing mathematical statements and proofs while verifying them formally. In Isabelle, theorems are proved sequentially: an initial *proof state* is obtained after the theorem statement is defined, and the proof state changes when the user provides a valid *proof step* (see Appendix [\[app:visualization_of_isa_env\]](#app:visualization_of_isa_env){reference-type="ref" reference="app:visualization_of_isa_env"} for an example theorem). *Proof states* contain information about the already established facts and the remaining goals to prove. *Proof steps* consist of tactics and (optionally) premises. Tactics are powerful theorem-proving decision procedures and can complete some proofs in one step when provided with relevant premises [@alemi2016deepmath]. However, finding these premises is difficult: one needs to select a handful of relevant facts from the current proof context. A typical proof context can contain tens of thousands of them.
|
| 4 |
+
|
| 5 |
+
Sledgehammer [@meng2008translating; @Paulson2012ThreeYO; @smt_solvers_blanchette] is a powerful automated reasoning tool for Isabelle. Sledgehammer belongs to a broader class of tools known as "hammers", which integrate Automated Theorem Provers (ATPs) into interactive proof assistants. The goal of these tools is to support the process of finding and applying proof methods. Sledgehammer has become an indispensable tool for Isabelle practitioners [@Paulson2012ThreeYO]. It allows for closing low-level gaps between subsequent high-level steps of a proof without the need to memorize entire lemma libraries or perform a manual search.
|
| 6 |
+
|
| 7 |
+
Sledgehammer is designed to select relevant facts heuristically, translate them and the conjecture to first-order logic and try to solve the conjecture using ATPs. Examples of these ATPs are E [@48], SPASS [@56], Vampire [@44], CVC5 [@cvc5], and Z3 [@33]. If successful, these external provers generate complete proofs, but the proofs are not trusted by the Isabelle system. Instead, the facts used in the external proofs are extracted and used to reconstruct the proof using native Isabelle methods. This process is known as proof reconstruction (see Figure [1](#fig:secSH){reference-type="ref" reference="fig:secSH"}). This means that, in essence, *Sledgehammer is a premise selection tool*.
|
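To make the premise-selection view concrete, here is a minimal, illustrative Python sketch that ranks candidate facts by symbol overlap with the conjecture and keeps the top $k$; the fact names, the scoring rule, and the `select_premises` helper are hypothetical simplifications, not Sledgehammer's actual relevance filter.

```python
# Toy premise selection: rank facts by symbol overlap with the conjecture.
# Purely illustrative; Sledgehammer's real relevance filtering is far more sophisticated.
import re

def symbols(statement: str) -> set[str]:
    """Extract identifier-like tokens from a statement."""
    return set(re.findall(r"[A-Za-z_][A-Za-z0-9_.']*", statement))

def select_premises(conjecture: str, facts: dict[str, str], k: int = 300) -> list[str]:
    """Return the names of the k facts sharing the most symbols with the conjecture."""
    goal_syms = symbols(conjecture)
    scored = [(len(goal_syms & symbols(stmt)), name) for name, stmt in facts.items()]
    scored.sort(reverse=True)
    return [name for score, name in scored[:k] if score > 0]

facts = {
    "rev_rev_ident": "rev (rev xs) = xs",
    "length_append": "length (xs @ ys) = length xs + length ys",
    "add_commute": "a + b = b + a",
}
print(select_premises("length (rev xs) = length xs", facts, k=2))
```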
| 8 |
+
|
| 9 |
+
<figure id="fig:secSH" data-latex-placement="!ht">
|
| 10 |
+
<div class="center">
|
| 11 |
+
<embed src="images/sh-2.pdf" />
|
| 12 |
+
</div>
|
| 13 |
+
<figcaption>A call to Sledgehammer triggers the following sequence of steps: first, all available facts are filtered based on the symbolic similarity to the conjecture, then the conjecture, together with the selected facts (usually a few hundred of them), is translated to logics used by the external provers (E, SPASS etc.). Then they are fed into each ATP separately, and for each successful external proof, Isabelle tries to reconstruct it using its native methods. </figcaption>
|
| 14 |
+
</figure>
|
2305.01738/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2022-05-16T00:05:55.845Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.64 Safari/537.36" version="18.0.0" etag="NNJJvcLHQoWtpLd0LGF6" type="google"><diagram id="g1zmVxoN86Xf-yFs9pDf">7V1dj9o4FP01SNOHojhfJI8DM7N96ErdrbTbPnrAA9mGGAXTGfrr10nsfNlAGGwHKEhtieM4yTnn2vf62nTgTJZvf6RwtfgTz1A8sK3Z28B5GNg2sENA/8lKtkWJH1pFwTyNZqxSVfA1+oVYIa+2iWZo3ahIMI5JtGoWTnGSoClplME0xa/Nai84bt51BefsjlZV8HUKYyRU+zeakUVRGni12p9QNF/wOwOLnVlCXpk1sV7AGX6t3ct5HDiTFGNSfFu+TVCcgcdxKRp62nG2fLAUJaTLBXZxwU8Yb9i7seciW/6yP1FKIvrun+Ezir/gdUQinNBTy2g2y+qMeYX7OJo3TizIMqbHgH6l77nK2iNUEOgXzm4/XqE0WiKC0nr5l6pw/LqICPq6gtPsyldaI2uoEIM1tB16BNMpUweVhjN+iaPVP+yOcetxpxSRvFXIHrMsSDGBrNLHMGtmTVL8A01wjLNHS3CSvcxLFMe8aGA7Yf6h5QxAigF620kCKKmlNoEwfcF0S6uwC1zXGwaj4qotV7o9HHlF0WslMZ/paFFTl2MPWUXIZD0v71AxT78w8uVCcATe0Yxqnh3ilCzwHCcwfqxKKXCbZIayBjLUqjqfMV4xFv5DhGwZRXBDcFMV6C0i32rfv+fMeuzo4Y21nB9s+UFC3+xbVTE7/F4/V12WH1XXze4z08/eZoWSouQpygB5KEiHKWFP6hdn2ZFbspxB0jRfvEmnrMhlHRFM54g0rKuDElIUUw3+bLZ+Cp+uaNje5G5N//ogMN3kUWZ1Nc5U6J3346XYw6GodXskap33x6cg4/Wi9Ep9rD85Rn3cTirb+F6zGrmd7FCsSi2yS7/giLZYsRvaQ2Bb5Qc0uW5TWJgQa6M+UrWa9ayjmi3MUGg2F0f5rp304h+wF+Nq6I3ZIynozCzY12ygjdjRjdg+iQW+NmYDrcyqcx8MMHscBZ2ZtcMhAGH1McVseGP2Xb3mUcT20Bdzj7Dlr9I/683ydKe1S0Q3yT9qPFxv5HGPtvRxacgqOrmuJKBzFTi5ABwAzLihVHFbI2rLQNkfuO2wsHqcBazOBJ1qdd5eqwvboXhnu8vbbSoGBNqsTZzlOVN1XJE46JmhZ9XOgndKJRCl4lht4SkUizg1dKZiGV2TWryWWi5ELK4glrOdR9Q8H2hOSa7bheSukvHppTXhWa2G7cqPOSAfCi/c1qqtsgrr3a9Bb7zrVpUgi0bfLU/25I18B5Ws+9cdfbAJ/FAcCRqmniBpSk7IIuzMgqRoHf2Cz3lTmbAYDLRdbzzwHrK2qKBZXmOnszo43Sn1QSvHAKQuaSBxSVXMuwJfgn0LZ54moi9FIhj/jaYEJvMcxgMe/jMmBC/ZQcqePM8DoRf+leTdiVX0OCRK5pOcvKzn+Fh0F0IqqB4jqGPCc522wQYjyRQ4KFM7dTIUZHu4Em5cyLgAwCwZwY2MPWS4ZskID5Nxy4jrzogDIMmIO2Yz4nzMu7myBmc1gR0Oa9PK4aglAn1pPv6sZ5kxD1qeMY8tDKTLJf3f2efLd0jagHx9IVpqNtE1CAN+b3agd7rnstg8joTO5LYa8u2uIfV7+BSj2RufevkEI18noeICpt81B9oF+M4kBqFJEvWuKrpkEiW94Xs51NuxinMoaxRny81P8hZNp6ptV0hVS7xLWZ5aiXcpm/v4XSZk7XaIKwFe12ys3WGeY06Fu9r5nmxzBQNyUNsr0TnKD5vecs1cawg4tohAbYbupEXvklUnCo2XWurjvT/2/a5yKUgR8apPfEn0ABTowZGEvydB0eqySiA0QcQk5MlWkmvDzNaLmeOE4dOTDLPyzEmYlZNtDSt0xE5IG4CObvsrgWpB+5R/FANoAjHJvg61iFlWcG9ZipDhT2cCGVlqV6ExlriYQaxcBmIOwA752XMYDHtApkO29ALGxpFrcmzskNS8vLHRNQhgBwf9gsZG81branfudXb8Br0wV7Pr38/IadBUXdVxgN5xwKS0tDv4SpExqRnVjvxZjI+ByfDbVe3xn8UIaXQGw9Xu8+vs4s1CpTkI6GeUNBoTuFcZEwDLKIZXGRYYlaF30YGB0SHWu8rQwKzFeqqjg35mj8zqTnXc0A9mhoXWIaTgq86nmzTejlM4/ZFtfDyEXgV1jmUcrT7x7zghsjFC49pw+qKtBQu+GNDaMoyV/HpUh6BDd97c43kzPnpKd0hIEAgU5My9M0sTFIT0kjP3LiQvsAsi83Oy3oVEAAcQMzhP5F2Iv38AMYNzjv6ZufcHkJEMXtqQ6d+bb/kLL/lHEZKS3WHakBR9+jsobhO6ilWKXKDckuWL9DQtU/TFQODu+UqBdsJeke71lzusYTgyt+Uxa6q2rXbfb8M0tkGOOlN64pp46qm3F2W/d3MK7Xv3/5xHs9mjf8vjlF9WFWOpKV4+RwkkOI1gVhVO833IlId8EDo/u9ezDx7wdfF7uwIg2/yspC8QY7wXygROqan/tpy4I6cLJ4r6Z3pY/S8AhW1V/5eC8/g/</diagram></mxfile>
|
2305.01738/main_diagram/main_diagram.pdf
ADDED
|
Binary file (11.2 kB). View file
|
|
|
2305.01738/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,114 @@
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
In many real-world decision-making problems, the action space exhibits an inherent combinatorial structure. For example, in healthcare, an action may correspond to a combination of drugs and treatments. When applying reinforcement learning (RL) to these tasks, past work [\[1](#page-10-0)[–4\]](#page-10-1) typically considers each combination a distinct action, resulting in an exponentially large action space [\(Figure 1a](#page-1-0)). This is inefficient as it fails to leverage any potential independence among dimensions of the action space.
|
| 4 |
+
|
| 5 |
+
This type of factorization structure in action space could be incorporated when designing the architecture of function approximators for RL [\(Figure 1b](#page-1-0)). Similar ideas have been used in the past, primarily to improve online exploration [\[5,](#page-10-2) [6\]](#page-10-3), or to handle multiple agents [\[7](#page-10-4)[–11\]](#page-11-0) or multiple rewards [\[12\]](#page-11-1). However, the applicability of this approach has not been systematically studied, especially in offline settings and when the MDP presents no additional structure (e.g., when the state space cannot be explicitly factorized).
|
| 6 |
+
|
| 7 |
+
In this work, we develop an approach for offline RL with factored action spaces by learning linearly decomposable Q-functions. First, we study the theoretical properties of this approach, investigating the sufficient and necessary conditions for it to lead to an unbiased estimate of the Q-function (i.e., zero approximation error). Even when the linear decomposition is biased, we note that our approach
|
| 8 |
+
|
| 9 |
+
<span id="page-1-0"></span>
|
| 10 |
+
|
| 11 |
+
Figure 1: Illustration of Q-network architectures, which take the state s as input and output Q(s,a) for a selected action. In this example, the action space $\mathcal{A}$ consists of D=3 binary sub-action spaces $\{\Box, \blacksquare\}$ , $\{\Box, \blacksquare\}$ and $\{\Box, \blacksquare\}$ . (a) Learning with the combinatorial action space requires $2^3=8$ output heads (exponential in D), one for each combination of sub-actions. (b) Incorporating the linear Q decomposition for the factored action space requires $2 \times 3 = 6$ output heads (linear in D).
|
| 12 |
+
|
| 13 |
+
leads to a reduction of variance, which in turn leads to an improvement in sample efficiency. Lastly, we show that when sub-actions exhibit certain structures (e.g., when two sub-actions "reinforce" their independent effects), the linear approximation, though biased, can still lead to the optimal policy. We test our approach in offline RL domains using a simulator [13] and a real clinical dataset [2], where domain knowledge about the relationship among actions suggests our proposed factorization approach is applicable. Empirically, our approach outperforms a non-factored baseline when the sample size is limited, even when the theoretical assumptions (around the validity of a linear decomposition) are not perfectly satisfied. Qualitatively, in the real-data experiment, our approach learns policies that better capture the effect of less frequently observed treatment combinations.
|
| 14 |
+
|
| 15 |
+
Our work provides both theoretical insights and empirical evidence for RL practitioners to consider this simple linear decomposition for value-based RL approaches. Our contribution complements many popular offline RL methods focused on distribution shift (e.g., BCQ [14]) and goes beyond pessimism-only methods by leveraging domain knowledge. Since it is compatible with any algorithm that has a Q-function component, we expect our approach to lead to gains for offline RL problems with combinatorial action spaces where data are limited and where domain knowledge can be used to check the validity of the theoretical assumptions.
|
| 16 |
+
|
| 17 |
+
We consider Markov decision processes (MDPs) defined by a tuple $\mathcal{M}=(\mathcal{S},\mathcal{A},p,r,\mu_0,\gamma)$ , where $\mathcal{S}$ and $\mathcal{A}$ are the state and action spaces, p(s'|s,a) and r(s,a) are the transition and instantaneous reward functions, $\mu_0(s)$ is the initial state distribution, and $\gamma\in[0,1]$ is the discount factor. A probabilistic policy $\pi(a|s)$ specifies a mapping from each state to a probability distribution over actions. For a deterministic policy, $\pi(s)$ refers to the action with $\pi(a|s)=1$ . The state-value function is defined as $V^\pi(s)=\mathbb{E}_\pi\mathbb{E}_\mathcal{M}\left[\sum_{t=1}^\infty \gamma^{t-1}r_t \mid s_1=s\right]$ . The action-value function, $Q^\pi(s,a)$ , is defined by further restricting the action taken from the starting state. The goal of RL is to find a policy $\pi^*=\arg\max_\pi\mathbb{E}_{s\sim\mu_0}[V^\pi(s)]$ (or an approximation) that has the maximum expected performance.
|
| 18 |
+
|
| 19 |
+
While the standard MDP definition abstracts away the underlying structure within the action space $\mathcal{A}$ , in this paper, we explicitly express a factored action space as a Cartesian product of D sub-action spaces, $\mathcal{A} = \bigotimes_{d=1}^D \mathcal{A}_d = \mathcal{A}_1 \times \cdots \times \mathcal{A}_D$ . We use $\mathbf{a} \in \mathcal{A}$ to denote each action, which can be written as a vector of sub-actions $\mathbf{a} = [a_1, \dots, a_D]$ , with each $a_d \in \mathcal{A}_d$ . In general, a sub-action space can be discrete or continuous, and the cardinalities of discrete sub-action spaces are not required to be the same. For clarity of analysis and illustration, we consider discrete sub-action spaces in this paper.
|
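As a toy illustration of this notation (the sub-action names below are hypothetical), the joint action space can be enumerated as a Cartesian product, which makes the exponential-versus-linear contrast explicit:

```python
# Minimal illustration: the joint action space is the Cartesian product of D sub-action spaces.
from itertools import product

sub_action_spaces = [
    ["no_fluid", "low_fluid", "high_fluid"],  # A_1 (hypothetical)
    ["no_vaso", "vaso"],                      # A_2 (hypothetical)
    ["no_vent", "vent"],                      # A_3 (hypothetical)
]

joint_actions = list(product(*sub_action_spaces))
print(len(joint_actions))                           # 3 * 2 * 2 = 12 joint actions
print(sum(len(A_d) for A_d in sub_action_spaces))   # only 3 + 2 + 2 = 7 sub-actions
```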
| 20 |
+
|
| 21 |
+
The traditional factored MDP literature almost exclusively considers state space factorization [15]. In contrast, here we capitalize on action space factorization to parameterize value functions. Specifically, our approach considers a linear decomposition of the Q function, as illustrated in Figure 1b:
|
| 22 |
+
|
| 23 |
+
<span id="page-1-1"></span>
|
| 24 |
+
$$Q^{\pi}(s, \mathbf{a}) = \sum_{d=1}^{D} q_d(s, a_d).$$
|
| 25 |
+
(1)
|
| 26 |
+
|
| 27 |
+
Each component $q_d(s, a_d)$ in the summation is allowed to condition on the full state space s and only one sub-action $a_d$. While similar forms of decomposition have been used in past work, there are key differences in how the summation components are parameterized. In the multi-agent RL literature, each component $q_d(s_d, a_d)$ can only condition on the corresponding state space of the d-th agent [e.g., 8, 9]. The decomposition in Eqn. (1) also differs from a related form of decomposition considered by Juozapaitis et al. [12] where each component $q_d(s, \boldsymbol{a})$ can condition on the full action $\boldsymbol{a}$. To the best of our knowledge, we are the first to consider this specific form of Q-function decomposition backed by both theoretical rigor and empirical evidence; in addition, we are the first to apply this idea to offline RL. We discuss other related work in Section 5.
|
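A minimal sketch of how Eqn. (1) could be realized as a Q-network, assuming a PyTorch implementation with a shared trunk and one linear head per sub-action space; the layer sizes and the `FactoredQNetwork` class below are illustrative choices, not the exact architecture used in the experiments.

```python
import torch
import torch.nn as nn

class FactoredQNetwork(nn.Module):
    """Q(s, a) = sum_d q_d(s, a_d), with one output head per sub-action space."""

    def __init__(self, state_dim: int, sub_action_sizes: list[int], hidden: int = 64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        # One linear head per sub-action space; head d has |A_d| outputs.
        self.heads = nn.ModuleList(nn.Linear(hidden, n) for n in sub_action_sizes)

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        """state: (B, state_dim); action: (B, D) integer sub-action indices."""
        h = self.trunk(state)
        q = 0.0
        for d, head in enumerate(self.heads):
            q_d = head(h)                              # (B, |A_d|)
            q = q + q_d.gather(1, action[:, d:d + 1])  # pick q_d(s, a_d)
        return q.squeeze(1)                            # (B,)

# Example: D = 3 binary sub-action spaces -> 2 + 2 + 2 = 6 output units, as in Figure 1b.
net = FactoredQNetwork(state_dim=10, sub_action_sizes=[2, 2, 2])
s = torch.randn(4, 10)
a = torch.randint(0, 2, (4, 3))
print(net(s, a).shape)  # torch.Size([4])
```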
| 30 |
+
|
| 31 |
+
In this section, we study the theoretical properties of the linear Q-function decomposition induced by factored action spaces. We first present sufficient and necessary conditions for our approach to yield unbiased estimates, and then analyze settings in which our approach can reduce variance without sacrificing policy performance when the conditions are violated. Finally, we discuss how domain knowledge may be used to check the validity of these conditions, providing examples in healthcare.
|
| 32 |
+
|
| 33 |
+
If we consider the total return of D MDPs running in parallel, where each MDP is defined by their respective state space $\mathcal{S}_d$ and action space $\mathcal{A}_d$ , then the desired linear decomposition holds for the MDP defined by the joint state space $\bigotimes_{d=1}^D \mathcal{S}_d$ and joint action space $\bigotimes_{d=1}^D \mathcal{A}_d$ (formally discussed in Appendix B.1). However, this relies on an explicit, known state space factorization, limiting its applicability. In contrast, we now present a generalization that forgoes the explicit factorization of the state space by making use of state abstractions. Intuitively, the MDP should have some implicit factorization, such that it is homomorphic to D parallel MDPs. It is, however, not a requirement that this factorization is known, as long as it exists.
|
| 34 |
+
|
| 35 |
+
<span id="page-2-0"></span>**Theorem 1.** Given an MDP defined by S, A, p, r and a policy $\pi: S \to \Delta(A)$ , where $A = \bigotimes_{d=1}^{D} A_d$ is a factored action space with D sub-action spaces, if there exists D unique corresponding state abstractions $\phi = [\phi_1, \cdots, \phi_D]$ where $\phi_d: S \to \mathcal{Z}_d$ , $z_d = \phi_d(s)$ , $z_d' = \phi_d(s')$ , such that for all s, a, s' the following holds:
|
| 36 |
+
|
| 37 |
+
<span id="page-2-3"></span><span id="page-2-1"></span>
|
| 38 |
+
$$\sum_{\tilde{s} \in \phi^{-1}(\phi(s'))} p(\tilde{s}|s, \mathbf{a}) = \prod_{d=1}^{D} p_d(z'_d|z_d, a_d)$$
|
| 39 |
+
(2)
|
| 40 |
+
|
| 41 |
+
<span id="page-2-2"></span>
|
| 42 |
+
$$r(s, \mathbf{a}) = \sum_{d=1}^{D} r_d(z_d, a_d)$$
|
| 43 |
+
(3)

$$\pi(\mathbf{a}|s) = \prod_{d=1}^{D} \pi_d(a_d|z_d)$$

(4)
|
| 44 |
+
|
| 45 |
+
for some $p_d: \mathcal{Z}_d \times \mathcal{A}_d \to \Delta(\mathcal{Z}_d)$ , $r_d: \mathcal{Z}_d \times \mathcal{A}_d \to \mathbb{R}$ , and $\pi_d: \mathcal{Z}_d \to \Delta(\mathcal{A}_d)$ , then the Q-function of policy $\pi$ can be expressed as $Q^{\pi}(s, \mathbf{a}) = \sum_{d=1}^{D} q_d(s, a_d)$ .
|
| 46 |
+
|
| 47 |
+
In Appendix B.2, we present an induction-based proof of Theorem 1. Since every assumption is used in key steps of the proof, we conjecture that the sufficient conditions cannot be relaxed in general. Consequently, if the sufficient conditions are satisfied, then using Eqn. (1) to parameterize the Q-function leads to zero approximation error and results in an unbiased estimator. Note that this does not require knowledge of $\phi$ . To highlight the significance of Theorem 1, we present the following example, in which the state space cannot be explicitly factored, yet the linear decomposition exists (additional examples probing the sufficient conditions can be found in Appendix C).
|
| 48 |
+
|
| 49 |
+
<span id="page-2-4"></span>**Example 1** (Two-dimensional chains with abstractions). The factored action space shown in Figure 2a, $\mathcal{A} = \mathcal{A}_x \times \mathcal{A}_y$ , is the composition of two binary sub-action spaces: $\mathcal{A}_x = \{\leftarrow, \rightarrow\}$ leading the agent to move left or right, and $\mathcal{A}_y = \{\downarrow, \uparrow\}$ leading the agent to move down or up. Thus, $\mathcal{A}$ consists of four actions, where each action $\mathbf{a} = [a_x, a_y]$ leads the agent to move *diagonally*.
|
| 50 |
+
|
| 51 |
+
Consider the MDP in Figure 2b with action space $\mathcal{A}$ . The state space $\mathcal{S} = \{s_{0,0}, s_{0,1}, \tilde{s}_{0,1}, s_{1,0}, s_{1,1}\}$ contains 5 different states; subscripts indicate the abstract state vector under $\phi = [\phi_x, \phi_y]$ (e.g., $s_{0,1}$ and $\tilde{s}_{0,1}$ are two different raw states but are identical under the abstraction, $\phi(s_{0,1}) = \phi(\tilde{s}_{0,1}) = [z_{0,?}, z_{?,1}]$ ). There does not exist an explicit state space factorization such that $\mathcal{S} = \mathcal{S}_x \times \mathcal{S}_y$ . One can check that Eqns. (2) and (3) are satisfied by comparing the raw transitions and rewards against the abstracted version (e.g., action $\nearrow$ from $s_{0,0}$ moves both $\rightarrow$ (under $\phi_x$ ) and $\uparrow$ (under $\phi_y$ ) to $s_{1,1}$ and receives the sum of the two rewards, 1+1=2). For Eqn. (4) to hold, the policy must take the same action from $s_{0,1}$ and $\tilde{s}_{0,1}$ . In Figure 2c, we show the linear decomposition of the Q-function for one such policy where Theorem 1 applies, under which the evolution of the MDP can be seen as two chain MDPs running in parallel (also in Figure 2b).
|
| 52 |
+
|
| 53 |
+
<span id="page-3-0"></span>
|
| 54 |
+
|
| 55 |
+
Figure 2: (a) The composition of sub-action spaces $\mathcal{A}_x$ and $\mathcal{A}_y$ results in $\mathcal{A} = \mathcal{A}_x \times \mathcal{A}_y$ depicted by outgoing arrows exiting the corners of each state (denoted by $\square$ ). The corner from which the action exits encodes the direction. (b) An MDP with 5 states and 4 actions of the factored action space $\mathcal{A}$ . For example, action $\nearrow = [\rightarrow, \uparrow]$ from $s_{0,0}$ moves the agent both right $(\rightarrow)$ and up $(\uparrow)$ , to $s_{1,1}$ . Under abstractions $\phi = [\phi_x, \phi_y]$ , this MDP can be mapped to two abstract MDPs (with action spaces $\mathcal{A}_x$ and $\mathcal{A}_y$ , respectively). The abstract state spaces are $\mathcal{Z}_x = \{z_{0,?}, z_{1,?}\}$ and $\mathcal{Z}_y = \{z_{?,0}, z_{?,1}\}$ , respectively, where ? indicates the coordinate ignored by the abstraction. $s_{1,1}$ is an absorbing state whose outgoing transition arrows are not shown. Taking action $\nwarrow = [\leftarrow, \uparrow]$ from $s_{0,0}$ leads to $s_{0,1}$ with probability p and to $\tilde{s}_{0,1}$ with probability (1-p) (denoted in green). Actions taken by a deterministic policy $\pi$ are denoted by **bold blue** arrows. $\pi$ takes the same action $\searrow = [\rightarrow, \downarrow]$ from $s_{0,1}$ and $\tilde{s}_{0,1}$ . Nonzero rewards are denoted in red. (c) Linear decomposition of $Q^{\pi}$ for $s_{0,0}$ with respect to the factored action space $(\gamma = 0.9)$ . Similar decompositions for other states also exist (omitted for space).
|
| 56 |
+
|
| 57 |
+
In Appendix B.5, we derive a necessary condition for the linear parameterization to be unbiased. Unfortunately, the condition is not verifiable unless the exact MDP parameters are known; this highlights the non-trivial nature of the problem. One may naturally question whether the sufficient conditions (which are arguably more verifiable in practice) must hold (i.e., are necessary) for the linear parameterization to be unbiased. Perhaps surprisingly, *none* of the conditions are necessary. We state the following propositions and provide justifications through a set of counterexamples below and in Appendix C.
|
| 58 |
+
|
| 59 |
+
**Proposition 2.** There exists an MDP $\mathcal{M}$ and a policy $\pi$ for which $Q_{\mathcal{M}}^{\pi}$ decomposes as Eqn. (1) but the transition function p of $\mathcal{M}$ does not satisfy Eqn. (2).
|
| 60 |
+
|
| 61 |
+
**Proposition 3.** There exists an MDP $\mathcal{M}$ and a policy $\pi$ for which $Q_{\mathcal{M}}^{\pi}$ decomposes as Eqn. (1) but the reward function r of $\mathcal{M}$ does not satisfy Eqn. (3).
|
| 62 |
+
|
| 63 |
+
**Proposition 4.** There exists an MDP $\mathcal{M}$ and a policy $\pi$ for which $Q_{\mathcal{M}}^{\pi}$ decomposes as Eqn. (1) but the policy $\pi$ does not satisfy Eqn. (4).
|
| 64 |
+
|
| 65 |
+
**Example 2** (Modified two-dimensional chains). In Figure 3, all conditions in Theorem 1 are violated, yet for each state, there exists a linear decomposition of Q-values (see Appendix C).
|
| 66 |
+
|
| 67 |
+
<span id="page-3-1"></span>
|
| 68 |
+
|
| 69 |
+
Figure 3: This MDP is similar to Example 1 (except it does not have state $\tilde{s}_{0,1}$ ) and we consider the same abstractions $\phi = [\phi_{\mathbf{x}}, \phi_{\mathbf{y}}]$ . The Q-function and decomposition are exactly the same as in the previous example. However, none of the conditions in Theorem 1 are satisfied. (a) The transition function does not satisfy Eqn. (2) because action $\nearrow = [\rightarrow, \uparrow]$ from $s_{0,1}$ does not move right $(\rightarrow \text{ under } \phi_{\mathbf{x}})$ to $s_{1,1}$ and instead moves back to state $s_{0,1}$ . (b) The reward function does not satisfy Eqn. (3) as the reward of $(1-\gamma)$ for action $\nearrow = [\rightarrow, \uparrow]$ from $s_{0,1}$ is not the sum of +1 ( $\rightarrow \text{ from } z_{0,?}$ under $\phi_{\mathbf{x}}$ ) and 0 ( $\uparrow \text{ from } z_{?,1}$ under $\phi_{\mathbf{y}}$ ). (c) The policy does not satisfy Eqn. (4) as it takes different sub-actions from $z_{0,?}$ under $\phi_{\mathbf{x}}$ ( $\nwarrow$ from $s_{0,0}$ specifies $\leftarrow$ , whereas $\nearrow$ from $s_{0,1}$ specifies $\rightarrow$ ).
|
| 70 |
+
|
| 71 |
+
Therefore, while Theorem 1 imposes a rather stringent set of assumptions on the MDP structure (transitions, rewards) and the policy, violations of these conditions do not preclude the linear parameterization of the Q-function from being an unbiased estimator.
|
| 72 |
+
|
| 73 |
+
When the sufficient conditions do not hold perfectly, using the linear parameterization in Eqn. (1) to fit the Q-function may incur nonzero approximation error (bias). This can affect the performance of the learned policy; in Appendix B.3, we derive error bounds based on the extent of bias relative to the sufficient conditions in Theorem 1. Despite this bias, our approach always leads to a reduction in the variance of the estimator. This gives us an opportunity to achieve a better bias-variance trade-off, especially given limited historical data in the offline setting. In addition, as we will demonstrate, biased Q-values do not always result in suboptimal policy performance, and we identify the characteristics of problems where this occurs under our proposed linear decomposition.
|
| 74 |
+
|
| 75 |
+
While the amount of bias incurred depends on the problem structure, the benefit of variance reduction is immediate. Intuitively, to learn the Q-function of a tabular MDP with state space $\mathcal{S}$ and action space $\mathcal{A} = \bigotimes_{d=1}^D \mathcal{A}_d$ , the linear parameterization reduces the number of free parameters from $|\mathcal{S}||\mathcal{A}| = |\mathcal{S}|(\prod_{d=1}^D |\mathcal{A}_d|)$ to $|\mathcal{S}|(\sum_{d=1}^D |\mathcal{A}_d| - D + 1)$ (see Appendix B.4). This reduces the hypothesis class from exponential in D to linear in D. To analyze variance reduction, we compare the bounds on Rademacher complexity [16–18] of the Q-function approximator using the factored action space with that of the full combinatorial action space (formally discussed in Appendix B.6).
|
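A quick worked check of these parameter counts, using hypothetical sizes ($|\mathcal{S}| = 100$ states and five binary sub-action spaces):

```python
import math

# Hypothetical example: |S| = 100 states, D = 5 binary sub-action spaces.
S = 100
sub_action_sizes = [2, 2, 2, 2, 2]
D = len(sub_action_sizes)

combinatorial = S * math.prod(sub_action_sizes)   # |S| * prod_d |A_d|
factored = S * (sum(sub_action_sizes) - D + 1)    # |S| * (sum_d |A_d| - D + 1)

print(combinatorial)  # 100 * 32 = 3200 free parameters
print(factored)       # 100 * 6  = 600 free parameters
```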
| 76 |
+
|
| 77 |
+
<span id="page-4-0"></span>**Proposition 5.** Using the linear Q-function decomposition for the factored action space in Eqn. (1) has a smaller lower bound on the empirical Rademacher complexity compared to learning the Q-function in the combinatorial action space.
|
| 78 |
+
|
| 79 |
+
Proposition 5 shows that our linear Q-function parameterization leads to a smaller function space, which implies a lower-variance estimator. Hence, our factored-action approach can make more efficient use of limited samples, leading to an interesting bias-variance trade-off that is especially beneficial for offline settings with limited data.
|
| 80 |
+
|
| 81 |
+
Even in the presence of bias, an inaccurate Q-function may still correctly identify the value-maximizing action (Proposition 6). While this statement is generally true, in this section, we identify *when* this occurs *specifically given* our linear decomposition based on factored action spaces. To focus the analysis on the most interesting aspects unique to our approach, we consider a bandit setting; extensions to the sequential RL setting are possible by applying induction similar to the proof for the main theorems (Appendices B.1 and B.2).
|
| 82 |
+
|
| 83 |
+
<span id="page-4-1"></span>**Proposition 6.** There exists an MDP with the optimal $Q^*$ and its approximation $\hat{Q}$ parameterized in the form of Eqn. (1), such that $\hat{Q} \neq Q^*$ and yet $\arg \max_{\mathbf{a}} \hat{Q}(\mathbf{a}) = \arg \max_{\mathbf{a}} Q^*(\mathbf{a})$ .
|
| 84 |
+
|
| 85 |
+
Justification. Consider a 1-step bandit problem with a single state and the same action space as before, $\mathcal{A} = \mathcal{A}_x \times \mathcal{A}_y$ . Taking an action $\mathbf{a} = [a_x, a_y]$ leads the agent to move diagonally and terminate immediately. Since there are no transitions, the Q-values of any policy are simply the immediate reward from each action, $Q(\mathbf{a}) = r(\mathbf{a})$ . We assume the reward function is defined as in Figure 4a (Appendix B.7 describes a procedure to standardize an arbitrary reward function).
|
| 86 |
+
|
| 87 |
+
<span id="page-4-2"></span>(a) (b)
|
| 88 |
+
$$\begin{array}{ccccccccccccccccccccccccccccccccccc$$
|
| 89 |
+
|
| 90 |
+
True Value Function Linear Approximation
|
| 91 |
+
|
| 92 |
+
$$\begin{bmatrix} Q^*(\swarrow) \\ Q^*(\nwarrow) \\ Q^*(\searrow) \\ Q^*(\nearrow) \end{bmatrix} = \begin{bmatrix} 0 \\ \alpha \\ 1 \\ 1 + \alpha + \beta \end{bmatrix} \qquad \begin{bmatrix} \hat{Q}(\swarrow) \\ \hat{Q}(\nwarrow) \\ \hat{Q}(\searrow) \\ \hat{Q}(\nearrow) \end{bmatrix} = \begin{bmatrix} -\frac{1}{4}\beta \\ \alpha + \frac{1}{4}\beta \\ 1 + \frac{1}{4}\beta \\ 1 + \alpha + \frac{3}{4}\beta \end{bmatrix}$$
|
| 93 |
+
|
| 94 |
+
Figure 4: (a) A two-dimensional bandit problem with action space $\mathcal{A}$. Rewards are denoted for each arm. (b) Learning using the linear Q decomposition approach corresponds to a system of linear equations that relates the reward of each arm. The parameter $r_{\text{Interact}}$ is dropped in our linear approximation, leading to omitted-variable bias. (c) Solving the system results in an approximate value function $\hat{Q}$, which does not equal the true value function $Q^*$ unless $\beta=0$.
|
| 95 |
+
|
| 96 |
+
Applying our approach amounts to solving for the parameters $r_{\text{Left}}$ , $r_{\text{Right}}$ , $r_{\text{Down}}$ , $r_{\text{Up}}$ of the linear system in Figure 4b, while dropping the interaction term $r_{\text{Interact}}$ , resulting in a form of omitted-variable bias [19]. Solving the system gives the approximate value function where the interaction term $\beta$ appears in the approximation $\hat{Q}$ for all arms (Figure 4c, details in Appendix B.8).
|
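The following numpy sketch reproduces this omitted-variable-bias argument, assuming a least-squares solution of the over-determined system: it fits the additive model for a given $(\alpha, \beta)$ and compares the greedy action under $\hat{Q}$ with that under $Q^*$; the particular $\alpha, \beta$ values are illustrative.

```python
import numpy as np

def fit_additive_q(alpha: float, beta: float):
    """Least-squares fit of Q_hat(a_x, a_y) = q_x(a_x) + q_y(a_y) to the bandit rewards."""
    # Arms ordered as (left,down), (left,up), (right,down), (right,up).
    r = np.array([0.0, alpha, 1.0, 1.0 + alpha + beta])
    # Design matrix over parameters [r_left, r_right, r_down, r_up]; interaction dropped.
    X = np.array([[1, 0, 1, 0],
                  [1, 0, 0, 1],
                  [0, 1, 1, 0],
                  [0, 1, 0, 1]], dtype=float)
    theta, *_ = np.linalg.lstsq(X, r, rcond=None)
    return X @ theta, r

alpha, beta = 1.0, 2.0          # positive interaction: Q_hat is biased but the argmax agrees
q_hat, q_star = fit_additive_q(alpha, beta)
print(np.round(q_hat, 3), np.round(q_star, 3))
print(int(np.argmax(q_hat)) == int(np.argmax(q_star)))  # True: optimal action preserved
```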
| 97 |
+
|
| 98 |
+
Note that $\hat{Q}=Q^*$ only when $\beta=0$ , i.e., there is no interaction between the two sub-actions. We first consider the family of problems with $\alpha=1$ and $\beta\in[-4,4]$ . In Figure 5a, we measure the value approximation error $\mathrm{RMSE}(Q^*,\hat{Q})$ , as well as the suboptimality $V^{\pi^*}-V^{\hat{\pi}}=\max_{\boldsymbol{a}}Q^*(\boldsymbol{a})-Q^*(\arg\max_{\boldsymbol{a}}\hat{Q}(\boldsymbol{a}))$ of the greedy policy defined by $\hat{Q}$ as compared to $\pi^*$ . As expected, when $\beta=0$ , $\hat{Q}$ is unbiased and has zero approximation error. When $\beta\neq0$ , $\hat{Q}$ is biased and RMSE >0; however, for $\beta\geq-1$ , $\hat{Q}$ corresponds to a policy that correctly identifies the optimal action.
|
| 99 |
+
|
| 100 |
+
We further investigate this phenomenon considering both $\alpha, \beta \in [-4, 4]$ (to show all regions with interesting trends), measuring RMSE and suboptimality in the same way as above. As shown in Figure 5b, the approximation error is zero only when $\beta = 0$, regardless of $\alpha$. However, in Figure 5c, for a wide range of $\alpha$ and $\beta$ settings, suboptimality is zero; this suggests that in those regions, even in the presence of bias (non-zero approximation error), our approach leads to an approximate value function that correctly identifies the optimal action. The irregular contour outlines multiple regions where this happens; one key region is when the two sub-actions affect the reward in the same direction (i.e., $\alpha \geq 0$) and their interaction effects also affect the reward in the same direction (i.e., $\beta \geq 0$).
|
| 101 |
+
|
| 102 |
+
<span id="page-5-0"></span>
|
| 103 |
+
|
| 104 |
+
Figure 5: (a) The approximation error and policy suboptimality of our approach for the bandit problem in Figure 4a, for different settings of $\beta$ when $\alpha=1$ . The Q-value approximation is unbiased only when $\beta=0$ , but the corresponding approximate policy is optimal for a wider range of $\beta\geq -1$ . (b-c) The approximation error and policy suboptimality of our approach for the bandit problem in Figure 4a, for different settings of $\alpha$ and $\beta$ . The Q-value approximation is unbiased only when $\beta=0$ , but the corresponding approximate policy is optimal for a wide range of $\alpha$ and $\beta$ values. The highlighted region of zero suboptimality corresponds to $\alpha\geq 0$ and $\beta\geq 0$ .
|
| 105 |
+
|
| 106 |
+
Based on our theoretical analysis, strong assumptions (Section 3.1) on the problem structure (though not necessary, Section 3.2) are the only known way to guarantee the unbiasedness of our proposed linear approximation. It is thus crucial to understand the applicability (and inapplicability) of our approach in real-world scenarios. Exploring to what extent these assumptions hold in practice is especially important for safety-critical domains such as healthcare where incorrect actions (treatments) can have devastating consequences. Fortunately, RL tasks for healthcare are often equipped with significant domain knowledge, which serves as a better guide to inform the algorithm design than heuristics-driven reasoning alone [20, 5, 9].
|
| 107 |
+
|
| 108 |
+
Oftentimes, when clinicians treat conditions using multiple medications at the same time (giving rise to the factored action space), it is because each medication has a different "mechanism of action," resulting in negligible or limited interactions. For example, several classes of medications are used in the management of chronic heart failure, and each has unique and incremental benefits on patient outcomes [21]. Problems such as this satisfy the sufficient conditions in Section 3.1 in spite of a non-factorized state space. Moreover, any small interactions would have a bounded effect on RL policy performance (according to Appendix B.3).
|
| 109 |
+
|
| 110 |
+
Similarly, in the management of sepsis (which we consider in Section 4.2), fluids and vasopressors affect blood pressure to correct hypotension via different mechanisms [22]. Fluid infusion increases "preload" by increasing the blood return to the heart to make sure the heart has enough blood to pump out [23]. In contrast, common vasopressors (e.g., norepinephrine) increase "inotropy" by stimulating the heart muscle and increase peripheral vascular resistance to maintain perfusion to organs [24, 25]. Therefore, while the two treatments may appear to operate on the same part of the state space (e.g., they both increase blood pressure), in general they are not expected to interfere with each other. Recently, there has also been evidence suggesting that their combination can better correct hypotension [26], which places this problem approximately in the regime discussed in Section 3.3.2.
|
| 113 |
+
|
| 114 |
+
In offline settings with limited historical data, the benefits of a reduction in variance can outweigh any potential small bias incurred in the scenarios above and lead to overall performance improvement (Section 3.3.1). However, our approach is not suitable if the interaction is counter to the effect of the sub-actions (e.g., two drugs that raise blood pressure individually, but when combined lead to a decrease). In such scenarios, the resulting bias will likely lead to suboptimal performance (Section 3.3.2). Nevertheless, many drug-drug interactions are known and predictable [27–30]. In such cases, one can either explicitly encode the interaction terms or resort back to a combinatorial action space (Appendix B.9). While we focus on healthcare, there are other domains in which significant domain knowledge regarding the interactions among sub-actions is available, e.g., cooperative multiagent games in finance where there is a higher payoff if agents cooperate (positive interaction effects) or intelligent tutoring systems that teach basic arithmetic operations as well as fractions (which are distinct but related skills). For these problems, this knowledge can and should be leveraged.
|
2305.10855/main_diagram/main_diagram.drawio
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2305.10855/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,64 @@
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
The field of image generation has seen tremendous progress with the advent of diffusion models [@balaji2022ediffi; @feng2022ernie; @gal2022image; @gu2022vector; @ho2020denoising; @rombach2022high; @ruiz2022dreambooth; @saharia2022photorealistic; @sohl2015deep; @yang2022diffusion] and the availability of large-scale image-text paired datasets [@gu2022wukong; @schuhmann2022laion; @schuhmann2021laion]. However, existing diffusion models still face challenges in generating visually pleasing text on images, and there is currently no specialized large-scale dataset for this purpose. The ability of AI models to generate accurate and coherent text on images is crucial, given the widespread use of text images in various forms (*e.g.*, posters, book covers, memes, etc.) and the difficulty of creating high-quality text images, which typically requires professional skills and considerable time from designers.
|
| 4 |
+
|
| 5 |
+
Traditional solutions to creating text images involve using image processing tools like `Photoshop` to add text onto images directly. However, these often result in unnatural artifacts due to the background's complex texture or lighting variations. Recent efforts have used diffusion models to overcome the limitations of traditional methods and enhance text rendering quality. For instance, Imagen [@saharia2022photorealistic], eDiff-I [@balaji2022ediffi], and DeepFloyd [@deepfloyd] observe that diffusion models generate text better with T5 series text encoders [@raffel2020exploring] than with the CLIP text encoder [@radford2021learning]. Liu et al. employ character-aware text encoders to improve text rendering [@liu2022character]. Despite some success, these models only focus on text encoders and lack control over the generation process. A concurrent work, GlyphDraw [@ma2023glyphdraw], improves the controllability of models by conditioning on the location and structures of Chinese characters. However, GlyphDraw does not support generating multiple text bounding boxes, making it unsuitable for many text images such as posters and book covers.
|
| 6 |
+
|
| 7 |
+
In this paper, we propose **TextDiffuser**, a flexible and controllable framework based on diffusion models. The framework consists of two stages. In the first stage, we use a Layout Transformer to locate the coordinates of each keyword in text prompts and obtain character-level segmentation masks. In the second stage, we fine-tune the latent diffusion model by leveraging the generated segmentation masks as conditions for the diffusion process and text prompts. We introduce a character-aware loss in the latent space to further improve the quality of generated text regions. Figure [1](#fig:introduction){reference-type="ref" reference="fig:introduction"} illustrates the application of TextDiffuser in generating accurate and coherent text images using text prompts alone or text template images. Additionally, TextDiffuser is capable of performing text inpainting[^2] to reconstruct incomplete images with text. To train our model, we use OCR tools and design filtering strategies to obtain 10 million high-quality i[**ma**]{.underline}ge-text pai[**r**]{.underline}s w[**i**]{.underline}th [**O**]{.underline}CR annotations (dubbed as **MARIO-10M**), each with recognition, detection, and character-level segmentation annotations. Extensive experiments and user studies demonstrate the superiority of the proposed TextDiffuser over existing methods on the constructed benchmark **MARIO-Eval**. The code, model and dataset will be publicly available to promote future research.
|
| 8 |
+
|
| 9 |
+
# Method
|
| 10 |
+
|
| 11 |
+
As illustrated in Figure [2](#fig:architecture){reference-type="ref" reference="fig:architecture"}, TextDiffuser consists of two stages: Layout Generation and Image Generation. We will detail the two stages and introduce the inference process next.
|
| 12 |
+
|
| 13 |
+
In this stage, the objective is to utilize bounding boxes to determine the layout of keywords (enclosed with quotes specified by user prompts). Inspired by Layout Transformer [@gupta2020layout], we utilize the Transformer architecture to obtain the layout of keywords. Formally, we denote the tokenized prompt as $\mathcal{P} = (p_{0}, p_{1}, ..., p_{L-1})$, where $L$ means the maximum length of tokens. Following LDM [@rombach2022high], we use CLIP [@radford2021learning] and two linear layers to encode the sequence as $\text{CLIP}(\mathcal{P}) \in \mathbb{R}^{L \times d}$, where $d$ is the dimension of latent space. To distinguish the keywords against others, we design a keyword embedding $\text{Key}(\mathcal{P}) \in \mathbb{R}^{L \times d}$ with two entries (*i.e.*, keywords and non-keywords). Furthermore, we encode the width of keywords with an embedding layer $\text{Width}(\mathcal{P}) \in \mathbb{R}^{L \times d}$. Together with the learnable positional embedding $\text{Pos}(\mathcal{P}) \in \mathbb{R}^{L \times d}$ introduced in [@vaswani2017attention], we construct the whole embedding as follows: $$\begin{equation}
|
| 14 |
+
\begin{aligned}
|
| 15 |
+
\text{Embedding}(\mathcal{P}) = \text{CLIP}(\mathcal{P}) + \text{Pos}(\mathcal{P}) + \text{Key}(\mathcal{P}) + \text{Width}(\mathcal{P}).
|
| 16 |
+
\end{aligned}
|
| 17 |
+
\label{eq:combine}
|
| 18 |
+
\end{equation}$$
|
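A minimal sketch of this embedding construction, assuming a PyTorch implementation; the dimensions, the number of width bins, and the linear projection standing in for the CLIP encoder are illustrative assumptions rather than the exact configuration.

```python
import torch
import torch.nn as nn

L, d = 77, 512  # illustrative max token length and latent dimension

class LayoutEmbedding(nn.Module):
    """Sum of prompt, positional, keyword, and width embeddings (cf. Eq. eq:combine)."""

    def __init__(self, clip_dim: int = 768, width_bins: int = 32):
        super().__init__()
        self.proj = nn.Linear(clip_dim, d)        # stands in for CLIP + linear layers
        self.pos = nn.Embedding(L, d)             # learnable positional embedding
        self.key = nn.Embedding(2, d)             # keyword vs. non-keyword
        self.width = nn.Embedding(width_bins, d)  # discretized keyword width (assumed binning)

    def forward(self, clip_feats, is_keyword, width_bin):
        positions = torch.arange(L, device=clip_feats.device)
        return (self.proj(clip_feats)
                + self.pos(positions)
                + self.key(is_keyword)
                + self.width(width_bin))

emb = LayoutEmbedding()
x = emb(torch.randn(2, L, 768),
        torch.randint(0, 2, (2, L)),
        torch.randint(0, 32, (2, L)))
print(x.shape)  # torch.Size([2, 77, 512])
```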
| 19 |
+
|
| 20 |
+
The embedding is further processed with a Transformer-based $l$-layer encoder $\Phi_{E}$ and decoder $\Phi_{D}$ to get the bounding boxes $\mathbf{B} \in \mathbb{R}^{K \times 4}$ of the $K$ keywords autoregressively: $$\begin{equation}
|
| 21 |
+
\begin{aligned}
|
| 22 |
+
\mathbf{B} = \Phi_{D}(\Phi_{E}(\text{Embedding}(\mathcal{P}))) = (\mathbf{b}_{0}, \mathbf{b}_{1}, ..., \mathbf{b}_{K-1}).
|
| 23 |
+
\end{aligned}
|
| 24 |
+
\label{eq:transform}
|
| 25 |
+
\end{equation}$$
|
| 26 |
+
|
| 27 |
+
Specifically, we use positional embedding as the query for the Transformer decoder $\Phi_{D}$, ensuring that the $n$-th query corresponds to the $n$-th keyword in the prompt. The model is optimized with the $l_1$ loss, also denoted as $|\mathbf{B}_{GT}-\mathbf{B}|$ where $\mathbf{B}_{GT}$ is the ground truth. Further, we can utilize Python packages like `Pillow` to render the texts and meanwhile obtain the character-level segmentation mask $\mathbf{C}$ with $|\mathcal{A}|$ channels, where $|\mathcal{A}|$ denotes the size of the alphabet $\mathcal{A}$. In this way, we obtain the layouts of keywords, and the image generation process is introduced next.
|
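Below is a rough sketch of how such rendering could be done with `Pillow` to produce a character-level mask with $|\mathcal{A}|$ channels; the font, alphabet, and character placement within the box are placeholder choices, not the exact rendering procedure used to build the dataset.

```python
# Illustrative sketch: render a keyword into its predicted box with Pillow and
# build a character-level segmentation mask (one channel per alphabet symbol).
import numpy as np
from PIL import Image, ImageDraw, ImageFont

ALPHABET = "abcdefghijklmnopqrstuvwxyz"
H = W = 512
font = ImageFont.load_default()

def render_char_mask(word: str, box: tuple[int, int, int, int]) -> np.ndarray:
    """Return a (|A|, H, W) binary mask; characters are laid out evenly in the box."""
    mask = np.zeros((len(ALPHABET), H, W), dtype=np.uint8)
    x0, y0, x1, y1 = box
    step = (x1 - x0) / max(len(word), 1)
    for i, ch in enumerate(word.lower()):
        if ch not in ALPHABET:
            continue
        canvas = Image.new("L", (W, H), 0)
        ImageDraw.Draw(canvas).text((x0 + i * step, y0), ch, fill=255, font=font)
        mask[ALPHABET.index(ch)] |= (np.array(canvas) > 0)
    return mask

m = render_char_mask("hello", box=(100, 200, 300, 240))
print(m.shape, m.sum() > 0)  # (26, 512, 512) True
```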
| 28 |
+
|
| 29 |
+
In this stage, we aim to generate the image guided by the segmentation masks $\mathbf{C}$ produced in the first stage. We use VAE [@kingma2013auto] to encode the original image with shape $H \times W$ into 4-D latent space features $\mathbf{F} \in \mathbb{R}^{4 \times H^{\prime} \times W^{\prime}}$. Then we sample a time step $T \sim \text{Uniform}(0,T_{\text{max}})$ and sample a Gaussian noise $\boldsymbol{\epsilon} \in \mathbb{R}^{4 \times H^{\prime} \times W^{\prime}}$ to corrupt the original feature, yielding $\hat{\textbf{F}} = \sqrt{\bar{\alpha}_T} \mathbf{F}+\sqrt{1-\bar{\alpha}_T} \boldsymbol{\epsilon}$ where $\bar{\alpha_{T}}$ is the coefficient of the diffusion process introduced in [@ho2020denoising]. Also, we downsample the character-level segmentation mask $\textbf{C}$ with three convolution layers, yielding 8-D $\hat{\textbf{C}} \in \mathbb{R}^{8 \times H^{\prime} \times W^{\prime}}$. We also introduce two additional features, called 1-D feature mask $\hat{\textbf{M}} \in \mathbb{R}^{1 \times H^{\prime} \times W^{\prime}}$ and 4-D masked feature $\hat{\textbf{F}}_{M} \in \mathbb{R}^{4 \times H^{\prime} \times W^{\prime}}$. In the process of *whole-image generation*, $\hat{\textbf{M}}$ is set to cover all regions of the feature and $\hat{\textbf{F}}_{M}$ is the feature of a fully masked image. In the process of *part-image generation* (also called text inpainting), the feature mask $\hat{\textbf{M}}$ represents the region where the user wants to generate, while the masked feature $\hat{\textbf{F}}_{M}$ indicates the region that the user wants to preserve. To simultaneously train two branches, we use a masking strategy where a sample is fully masked with a probability of $\sigma$ and partially masked with a probability of $1-\sigma$. We concatenate $\hat{\textbf{F}},\hat{\textbf{C}},\hat{\textbf{M}},\hat{\textbf{F}}_{M}$ in the feature channel as a 17-D input and use denoising loss between the sampled noise $\boldsymbol{\epsilon}$ and the predicted noise $\boldsymbol{\epsilon}_{\theta}$: $$\begin{equation}
|
| 30 |
+
\begin{aligned}
|
| 31 |
+
l_{denoising} = || \boldsymbol{\epsilon} - \boldsymbol{\epsilon}_{\theta}( \hat{\textbf{F}},\hat{\textbf{C}},\hat{\textbf{M}},\hat{\textbf{F}}_{M},\mathcal{P},T) ||^{2}_{2}.
|
| 32 |
+
\end{aligned}
|
| 33 |
+
\label{eq:denoise}
|
| 34 |
+
\end{equation}$$
|
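A small sketch of how the 17-channel input described above could be assembled; the shapes, noise schedule, and variable names here are illustrative, and the actual U-Net and schedule follow the latent diffusion setup.

```python
import torch

B, Hp, Wp = 2, 64, 64          # latent spatial size H' x W' (illustrative)
T_max = 1000

F = torch.randn(B, 4, Hp, Wp)          # VAE latent of the image
C_hat = torch.randn(B, 8, Hp, Wp)      # downsampled character-level segmentation mask
M_hat = torch.ones(B, 1, Hp, Wp)       # feature mask (all ones = whole-image generation)
F_M = torch.zeros(B, 4, Hp, Wp)        # masked feature (fully masked image)

# Forward diffusion: corrupt the latent at a random timestep (schedule is an assumption).
alphas_bar = torch.cumprod(1.0 - torch.linspace(1e-4, 0.02, T_max), dim=0)
t = torch.randint(0, T_max, (B,))
a_bar = alphas_bar[t].view(B, 1, 1, 1)
eps = torch.randn_like(F)
F_noisy = a_bar.sqrt() * F + (1.0 - a_bar).sqrt() * eps

unet_input = torch.cat([F_noisy, C_hat, M_hat, F_M], dim=1)
print(unet_input.shape)  # torch.Size([2, 17, 64, 64])
# The denoising loss is then || eps - eps_theta(unet_input, prompt, t) ||^2.
```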
| 35 |
+
|
| 36 |
+
Furthermore, we propose a character-aware loss to help the model focus more on text regions. In detail, we pre-train a U-Net [@ronneberger2015u] that can map latent features to character-level segmentation masks. During training, we fix its parameters and only use it to provide guidance by using a cross-entropy loss $l_{char}$ with weight $\lambda_{char}$ (See more details in Appendix A). Overall, the model is optimized with $$\begin{equation}
|
| 37 |
+
\begin{aligned}
|
| 38 |
+
l = l_{denoising} + \lambda_{char} * l_{char}.
|
| 39 |
+
\end{aligned}
|
| 40 |
+
\label{eq:overall}
|
| 41 |
+
\end{equation}$$
|
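A minimal sketch of the combined objective, assuming a frozen segmentation network that maps latents to per-pixel character-class logits; `character_aware_objective`, the dummy segmenter, and the value of $\lambda_{char}$ are illustrative stand-ins rather than the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def character_aware_objective(eps, eps_pred, latent_pred, char_target,
                              segmenter: nn.Module, lambda_char: float = 0.01):
    """l = l_denoising + lambda_char * l_char; lambda_char here is an illustrative value."""
    l_denoising = F.mse_loss(eps_pred, eps)
    for p in segmenter.parameters():        # the pre-trained segmenter stays frozen
        p.requires_grad_(False)
    char_logits = segmenter(latent_pred)    # (B, |A|, H', W') per-pixel character logits
    l_char = F.cross_entropy(char_logits, char_target)  # char_target: (B, H', W') labels
    return l_denoising + lambda_char * l_char

# Example with a dummy 1x1-conv "segmenter" standing in for the pre-trained U-Net.
seg = nn.Conv2d(4, 26, kernel_size=1)
loss = character_aware_objective(
    eps=torch.randn(2, 4, 64, 64), eps_pred=torch.randn(2, 4, 64, 64),
    latent_pred=torch.randn(2, 4, 64, 64),
    char_target=torch.randint(0, 26, (2, 64, 64)), segmenter=seg)
print(loss.item())
```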
| 42 |
+
|
| 43 |
+
Finally, the output features are fed into the VAE decoder to obtain the images.
|
| 44 |
+
|
| 45 |
+
TextDiffuser provides a high degree of controllability and flexibility during inference in the following ways: (1) Generate images from user prompts. Notably, the user can modify the generated layout or edit the text to meet their personalized requirements; (2) The user can directly start from the second stage by providing a template image (*e.g.*, a scene image, handwritten image, or printed image), and a segmentation model is pre-trained to obtain the character-level segmentation masks (Appendix B); (3) Users can modify the text regions of a given image using text inpainting. Moreover, this operation can be performed multiple times. These experimental results will be presented in the next section.
|
| 46 |
+
|
| 47 |
+
As there is no large-scale dataset designed explicitly for text rendering, to mitigate this issue, we collect 10 million i[**ma**]{.underline}ge-text pai[**r**]{.underline}s w[**i**]{.underline}th [**O**]{.underline}CR annotations to construct the **MARIO-10M Dataset**. We further collect the **MARIO-Eval Benchmark** from the subset of the MARIO-10M test set and other existing sources to serve as a comprehensive tool for evaluating text rendering quality.
|
| 48 |
+
|
| 49 |
+
The **MARIO-10M** is a collection of about 10 million high-quality and diverse image-text pairs from various data sources such as natural images, posters, and book covers. Figure [3](#fig:dataset){reference-type="ref" reference="fig:dataset"} illustrates some examples from the dataset. We design automatic schemes and strict filtering rules to construct annotations and clean noisy data (more details in Appendix D and Appendix E). The dataset contains comprehensive OCR annotations for each image, including text detection, recognition, and character-level segmentation annotations. Specifically, we use DB [@liao2022real] for detection, PARSeq [@bautista2022scene] for recognition, and manually train a U-Net [@ronneberger2015u] for segmentation. We analyze the performance of OCR tools in Appendix F. The total size of MARIO-10M is 10,061,720, from which we randomly chose 10,000,000 samples as the training set and 61,720 as the testing set. MARIO-10M is collected from three data sources:
|
| 50 |
+
|
| 51 |
+
**MARIO-LAION** derives from the large-scale dataset LAION-400M [@schuhmann2021laion]. After filtering, we obtained 9,194,613 high-quality text images with corresponding captions. This dataset comprises a broad range of text images, including advertisements, notes, posters, covers, memes, logos, etc.
|
| 52 |
+
|
| 53 |
+
**MARIO-TMDB** derives from [[The Movie Database (TMDB)]{style="color: black"}](https://www.themoviedb.org/), which is a community-built database for movies and TV shows with high-quality posters. We filter 343,423 English posters using the [[TMDB API]{style="color: black"}](https://www.themoviedb.org/documentation/api) out of 759,859 collected samples. Since each image has no off-the-shelf captions, we use prompt templates to construct the captions according to movie titles.
|
| 54 |
+
|
| 55 |
+
**MARIO-OpenLibrary** derives from Open Library, which is an open, editable library catalog that creates a web page for each published book. We first collect 6,352,989 original-size Open Library covers in [[bulk]{style="color: black"}](https://openlibrary.org/dev/docs/api/covers). Then, we obtain 523,684 higher-quality images after filtering. Like MARIO-TMDB, we manually construct captions using titles due to the lack of off-the-shelf captions.
|
| 56 |
+
|
| 57 |
+
<figure id="fig:dataset" data-latex-placement="t">
|
| 58 |
+
<embed src="images_pdf/dataset.pdf" style="width:100.0%" />
|
| 59 |
+
<figcaption>Illustrations of three subsets of MARIO-10M. See more details in Appendix C.</figcaption>
|
| 60 |
+
</figure>
|
| 61 |
+
|
| 62 |
+
The **MARIO-Eval benchmark** serves as a comprehensive tool for evaluating text rendering quality collected from the subset of the MARIO-10M test set and other sources. It comprises 5,414 prompts in total, including 21 prompts from DrawBenchText [@saharia2022photorealistic], 175 prompts from DrawTextCreative [@liu2022character], 218 prompts from ChineseDrawText [@ma2023glyphdraw] and 5,000 image-text pairs from a subset of the MARIO-10M test set. The 5,000 image-text pairs are divided into three sets of 4,000, 500, and 500 pairs, and are named LAIONEval4000, TMDBEval500, and OpenLibraryEval500 based on their respective data sources. We offer examples in Appendix G to provide a clearer understanding of MARIO-Eval.
|
| 63 |
+
|
| 64 |
+
**Evaluation Criteria:** We evaluate text rendering quality with MARIO-Eval from four aspects: (1) **Fréchet Inception Distance (FID)** [@NIPS2017_8a1d6947] compares the distribution of generated images with the distribution of real images. (2) **CLIPScore** calculates the cosine similarity between the image and text representations from CLIP [@huang2021unifying; @radford2021learning; @hessel2021clipscore]. (3) **OCR Evaluation** utilizes existing OCR tools to detect and recognize text regions in the generated images. Accuracy, Precision, Recall, and F-measure are metrics to evaluate whether keywords appear in the generated images. (4) **Human Evaluation** is conducted by inviting human evaluators to rate the text rendering quality of generated images using questionnaires. More explanations are shown in Appendix H.
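As a rough illustration of the OCR-based keyword metrics (the exact matching rules used in the benchmark may differ), the following sketch computes precision, recall, F-measure, and an image-level accuracy from the words detected by an OCR tool:

```python
def ocr_keyword_metrics(keywords: list[str], detected: list[str]) -> dict[str, float]:
    """Keyword-level precision/recall/F-measure and image-level accuracy (illustrative definitions)."""
    kw = {w.lower() for w in keywords}
    det = {w.lower() for w in detected}
    tp = len(kw & det)
    precision = tp / len(det) if det else 0.0
    recall = tp / len(kw) if kw else 0.0
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    accuracy = float(kw <= det)   # 1.0 only if every keyword was rendered and recognized
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f_measure": f}

print(ocr_keyword_metrics(["hello", "world"], ["Hello", "word"]))
```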
|