FigAgent / 2003.06658 /paper_text /intro_method.md

Introduction

As a crucial characteristic of human cognition, systematic generalization reflects people's ability to learn infinite combinations of finite concepts [@chomsky1957syntactic; @montague1970universal]. However, weak systematic compositionality has long been considered a primary obstacle to the expression of language and thought in connectionist networks [@fodor1988connectionism; @hadley1994systematicity; @marcus1998rethinking; @fodor2002compositionality; @frank2009connectionist; @brakel2009strong; @marcus2018algebraic]. Whether models can generalize systematically remains an active research topic. Recent work shows that modern neural networks have not mastered the language-based generalization challenges posed by multiple explicitly constructed datasets [@lake2017generalization; @bastings-etal-2018-jump; @keysers2019measuring; @hupkes2020compositionality; @kim2020cogs]. These studies conclude that models lack such cognitive capacity, which calls for more systematic study. Apart from proposing benchmarks, existing research mainly focuses on novel architectural designs [@ChenLYSZ20] or meta-learning [@NEURIPS2019_f4d0e2e7; @conklin-etal-2021-meta] to enable systematic generalization.

In this work, however, we question whether neural networks are indeed deficient, or whether conventional learning protocols simply fail to exploit their full potential [@csordas2021devil]. Inspired by meaningful learning from the field of educational psychology [@mayer2002rote], we revisit systematic generalization and explore semantic linking. Specifically, we propose augmenting prior knowledge to build relation links between new concepts and existing ones through either inductive learning or deductive learning, as humans do in meaningful verbal learning [@ausubel1963psychology]. To elaborate, inductive learning is a bottom-up approach from the more specific to the more general. By introducing new concepts that share the same context with existing ones in specific samples, we hope the model can capture the underlying semantic connections and thus generalize to novel compositions of new concepts. On the contrary, deductive learning is a top-down approach from the more general to the more specific. By providing a rule-like concept dictionary without specific context information, we hope the model can use the general cross-lingual supervised signals as anchor points to launch the semantic linking. We mainly focus on three semantic relationships: lexical variant, co-hyponym, and synonym.

Starting from SCAN, our experiments confirm that, with semantic linking, even canonical neural networks can significantly improve their systematic generalization capability. Moreover, this finding holds consistently across two more semantic parsing datasets. As an ablation study, we further examine one-shot compositional generalization and find that both prior knowledge and semantic linking play essential parts. Lastly, we extend from toy sets to real data and explain how semantic linking, as a data augmentation technique, benefits models' performance on real problems such as machine translation and semantic parsing.

Overall, our contributions are as follows: ($1$) We formally introduce semantic linking for systematic generalization through the analysis of inductive and deductive learning from a meaningful learning perspective. ($2$) We observe that modern neural networks can achieve systematic generalization with semantic linking. ($3$) We show that both prior knowledge and semantic linking play a key role in systematic generalization, which is in line with meaningful learning theory. ($4$) We extend from SCAN to real data and demonstrate that many recent data augmentation methods belong to either inductive learning or deductive learning.

Figure 1: An illustration of the semantic linking injection pipeline in SCAN. Models are expected to generalize to new compositions of variants after augmenting the prior knowledge through either inductive learning or deductive learning.

Learning new concepts by relating them to existing ones is defined as a process of meaningful learning in educational psychology [@ausubel1963psychology; @mayer2002rote]. Meaningful learning encourages learners to understand new information by continuously building on concepts they already understand [@okebukola1988cognitive]. Following the same idea, we intend to examine models' systematic compositionality by exploring semantic linking, an augmentation that establishes semantic relations between primitives $\sP$ (old concepts) and their variants $\sV := \{ \sV_{\evp} \mid \evp \in \sP \}$ (new concepts). To spoon-feed semantic knowledge to models for semantic linking, we propose to augment the training data by either inductive learning or deductive learning [@hammerly1975deduction; @shaffer1989comparison; @thornbury1999teach]. In this section, we discuss the definition of semantic linking and take "jump" from SCAN as an example primitive to illustrate the learning scheme in Figure 1.

We aim to achieve systematic generalization by exposing semantic links such as lexical variants, co-hyponyms, and synonyms. Lexical Variant refers to an alternative expression form of the same concept. Co-hyponym is a linguistic term that designates a semantic relation between members of a group belonging to the same broader class, where each member is a hyponym and the class is a hypernym [@lyons1995linguistic]. Synonym stands for a word, morpheme, or phrase that shares exactly or nearly the same semantics with another one. We provide an example and a detailed description in the Appendix.

Inductive learning is a bottom-up approach from the more specific to the more general. For example, fitting a machine learning model is a process of induction, where the model itself is the hypothesis that best fits the observed training data [@Mitchell97]. In grammar teaching, inductive learning is a rule-discovery approach that starts with the presentation of specific examples from which a general rule can be inferred [@thornbury1999teach]. Inspired by that, we propose to augment data inductively by introducing variants that share the same context with their primitives in specific samples. The assumption is that models can observe the interchange of primitives and their variants surrounded by the same context and, in the process, arrive at a general hypothesis that there is a semantic link between primitives and their variants [@harris1954distributional]. Formally, we describe inductive learning as follows. For a sequence-to-sequence task $\mathcal{T} : \mX \rightarrow \mY$, we have a source sequence $\vx \in \mX$ and its target sequence $\vy \in \mY$. We prepare the prompt set $\mZ := \{ \vz = f_{prompt}(\vx) \mid \vx \in \mX \}$, where $f_{prompt}(\cdot)$ replaces the primitive in $\vx$ with a slot mark $[z_{\evp}]$ as in Figure 1.[^2] Then, we generate $\mX^{IL} := \{ \vx^{IL} = f_{fill}(\vz, \evv) \mid \vz \in \mZ, \evv \in \sV \}$ by filling $[z_{\evp}]$ with variants in $\sV_{\evp}$. Nothing changes on the target side, so we obtain $\mY^{IL}$ by copying $\vy$ as $\vy^{IL}$ for each $\vx^{IL}$ correspondingly. Finally, we train models on $\big( \big[\begin{smallmatrix} \mX \\ \mX^{IL} \end{smallmatrix}\big] , \big[\begin{smallmatrix} \mY \\ \mY^{IL} \end{smallmatrix}\big] \big)$ to operate semantic linking inductively.
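The inductive augmentation above can be sketched in a few lines of Python. This is an illustrative sketch mirroring $f_{prompt}$ and $f_{fill}$ on tokenized sequences, with assumed helper names and slot token; it is not the paper's actual implementation.

```python
SLOT = "[z_p]"  # assumed slot-mark token standing in for the primitive

def f_prompt(x, primitive):
    """Replace the primitive in a source sequence with the slot mark."""
    return [SLOT if tok == primitive else tok for tok in x]

def f_fill(z, variant):
    """Fill the slot mark with a variant token."""
    return [variant if tok == SLOT else tok for tok in z]

def augment_inductive(pairs, primitive, variants):
    """For each (x, y) containing the primitive, emit one (x_IL, y)
    per variant; the target y is copied unchanged."""
    augmented = []
    for x, y in pairs:
        if primitive not in x:
            continue
        z = f_prompt(x, primitive)
        for v in variants:
            augmented.append((f_fill(z, v), y))
    return augmented

pairs = [(["jump", "twice"], ["JUMP", "JUMP"])]
aug = augment_inductive(pairs, "jump", ["jump_0", "jump_1"])
# aug == [(["jump_0", "twice"], ["JUMP", "JUMP"]),
#         (["jump_1", "twice"], ["JUMP", "JUMP"])]
```

Training then proceeds on the union of the original pairs and `aug`, so the model sees primitives and variants interchanged within identical contexts.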

Deductive Learning, in contrast to inductive learning, is a top-down approach from the more general to the more specific. As a rule-driven approach, teaching in a deductive manner often begins with presenting a general rule, followed by specific examples in which the rule is applied [@thornbury1999teach]. To align with this definition, we augment data deductively by attaching a bilingual dictionary that maps primitives and their variants to the same expressions in the target domain. This additional dictionary, hence, mixes the original training task with word translation [@mikolov2013exploiting]. Without any specific context, we hope the model can use the general cross-lingual supervised signals as anchor points to launch the semantic linking. Formally, we describe deductive learning as follows. We first treat $\sP$ directly as the source dataset $\mX^{DL}_{\sP}$ and then prepare the corresponding target dataset $\mY^{DL}_{\sP}$ by either decomposing samples from $\mY$ manually or feeding $\mX^{DL}_{\sP}$ to a trained external model. Similarly, we consider $\sV$ as another source dataset $\mX^{DL}_{\sV}$ and prepare its target dataset $\mY^{DL}_{\sV}$ by copying the corresponding $\vy^{DL}_{\sP}$ as $\vy^{DL}_{\sV}$ for all $\vx^{DL}_{\sV}$ that are variants of each $\vx^{DL}_{\sP}$. In the end, we get $\mX^{DL}$ as $\big[\begin{smallmatrix} \mX^{DL}_{\sP} \\ \mX^{DL}_{\sV} \end{smallmatrix}\big]$ and $\mY^{DL}$ as $\big[\begin{smallmatrix} \mY^{DL}_{\sP} \\ \mY^{DL}_{\sV} \end{smallmatrix}\big]$. The mapping from $\mX^{DL}$ to $\mY^{DL}$ is a dictionary that translates primitives and their variants to the same targets without any specific context information. We name $(\vx^{DL}, \vy^{DL})$ a concept rule, $(\vx^{DL}_{\sP}, \vy^{DL}_{\sP})$ a primitive rule, and $(\vx^{DL}_{\sV}, \vy^{DL}_{\sV})$ a variant rule, since they are rule-like without contexts.
We train models on $\big( \big[\begin{smallmatrix} \mX \\ \mX^{DL} \end{smallmatrix}\big] , \big[\begin{smallmatrix} \mY \\ \mY^{DL} \end{smallmatrix}\big] \big)$ to operate semantic linking deductively.
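The deductive construction of $\mX^{DL}$ and $\mY^{DL}$ can likewise be sketched as follows. This is a minimal illustration with assumed names (`build_deductive_rules`, the `translate` callback standing in for manual decomposition or an external model), not the authors' code.

```python
def build_deductive_rules(primitives, variants_of, translate):
    """Build context-free concept rules: each primitive and all of its
    variants are mapped to the same target sequence, forming a
    word-translation dictionary mixed into the training data."""
    X_DL, Y_DL = [], []
    for p in primitives:
        y_p = translate(p)            # e.g. from manual decomposition
        X_DL.append([p])              # primitive rule: p -> y_p
        Y_DL.append(y_p)
        for v in variants_of[p]:
            X_DL.append([v])          # variant rule copies the same y_p
            Y_DL.append(y_p)
    return X_DL, Y_DL

rules_x, rules_y = build_deductive_rules(
    ["jump"], {"jump": ["jump_0", "jump_1"]}, lambda p: ["JUMP"])
# rules_x == [["jump"], ["jump_0"], ["jump_1"]]
# rules_y == [["JUMP"], ["JUMP"], ["JUMP"]]
```

Concatenating these rules with the original $(\mX, \mY)$ yields the deductive training set; the shared targets act as the cross-lingual anchor points described above.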

Although previous studies argue that neural networks fail to match humans in systematic generalization [@lake2017generalization; @keysers2019measuring], we revisit such algebraic compositionality conditioned on semantic linking to see whether the conclusion changes. The following sections specify the process and outcomes of our experiments. We first use SCAN as the initial testbed to observe the presence of systematic generalization with the assistance of semantic relations. Then, we verify neural networks' potential to achieve systematic generalization activated by semantic linking on SCAN, as well as on two real-world semantic parsing tasks. The subsequent ablation studies further examine models' compositional capability.

There is evidence suggesting that SCAN alone may be far from sufficient to fully capture this kind of generalization, since even a simple model can behave as if it possessed comparable skills [@bastings-etal-2018-jump; @keysers2019measuring]. Thus, starting from SCAN, we introduce GEO and ADV, generated respectively from the real semantic parsing datasets Geography and Advising.[^3] Modifications to the datasets are specified in each experiment, with the goal of examining machines' systematic generalization across various conditions.

SCAN is one of the benchmarks for investigating neural networks' compositional generalization [@lake2017generalization]. It includes 20,910 pairs of English commands and their instructed action sequences.[^4] We define $\sP^{SCAN} := \{ \textit{"jump"}, \textit{"look"}, \textit{"run"}, \textit{"walk"} \}$ to be in line with previous works. We focus on lexical variants and create $\sV^{SCAN}$ by adding a suffix consisting of an underscore and a unique number to each primitive. We control $|\sV^{SCAN}|$ by setting the upper limit of this number. An example variant of "jump" is "jump_0", and both mean the same action "JUMP".
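The construction of $\sV^{SCAN}$ is mechanical enough to state as a one-line sketch (illustrative; the function name is ours):

```python
def make_scan_variants(primitives, k):
    """Create k lexical variants 'p_0' ... 'p_{k-1}' for each primitive;
    the upper limit k controls |V^SCAN|."""
    return {p: [f"{p}_{i}" for i in range(k)] for p in primitives}

variants = make_scan_variants(["jump", "look", "run", "walk"], 2)
# variants["jump"] == ["jump_0", "jump_1"]
```

Every variant inherits the action of its primitive, e.g. both "jump" and "jump_0" map to "JUMP".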

Geography is a common semantic parsing dataset [@data-geography-original; @data-atis-geography-scholar]. It is also known as geo880, since it contains 880 examples of natural-language queries about US geography paired with corresponding query expressions. It was later formatted into SQL with variables in the target sequences [@data-sql-advising]. GEO is the dataset generated from Geography, where we regard 4 of the 9 annotated variables as hypernyms and keep them as they are in the SQL sequences. The other variables are restored with entities from the source sequence accordingly. As a result, the overall data size is 618 after processing, and we can exploit the "is-a" hypernymy relations between entities and variables for semantic linking. To be specific, we define $\sP^{GEO} := \{ \textit{"new york city"}, \textit{"mississippi river"}, \textit{"dc"}, \textit{"dover"} \}$, with $\sV^{GEO}$ consisting of entities that are co-hyponyms sharing the same variable group with primitives.[^5] An example variant of "new york city" is "houston city", and both are in the same variable group "CITY_NAME".

Advising, our second semantic parsing dataset, includes 4,570 natural-language questions about course information paired with queries in SQL [@data-sql-advising]. Similar to GEO, ADV is generated on the basis of Advising with 4 of 26 variables as hypernyms. Precisely, we define $\sP^{ADV} := \{ \textit{"a history of american film"}, \textit{"aaron magid"}, \textit{"aaptis"}, \textit{"100"} \}$ and $\sV^{ADV}$ as co-hyponyms of primitives sharing the same variables. For instance, "advanced at ai techniques" is a co-hyponym of "a history of american film" sharing the same variable "TOPIC".

What follows is an account of the network configurations and experimental settings. Unless otherwise specified, they are shared across all experiments.

Models. After testing a range of adapted versions, we employ three dominant model candidates with an encoder-decoder framework [@sutskever2014sequence], namely RNN, CNN, and TFM. For RNN, we reproduce bi-directional recurrent networks [@schuster1997bidirectional] with long short-term memory units [@hochreiter1997long] and an attention mechanism [@bahdanau2014neural]. For CNN, we follow the convolutional seq2seq architecture presented by @gehring2017convolutional, and for TFM, the attention-based structure proposed by @NIPS2017_7181. More details are provided in the Appendix.

Training. We apply a mini-batch strategy, sampling 128 sequence pairs per training step. We use the Adam optimizer [@DBLP:journals/corr/KingmaB14] with $\ell_2$ gradient clipping at $5.0$ [@10.5555/3042817.3043083] and a learning rate of $10^{-4}$ to minimize a cross-entropy loss. We fix the maximum number of training epochs at 320 for CNN and 640 for RNN and TFM. Instead of early stopping [@prechelt1998early], we prefer a fixed training regime long enough for models to fully converge in practice, since our focus is on observing systematic generalization rather than exploring superior structures. To prevent uncontrolled interference, we train all models from scratch instead of fine-tuning pretrained ones [@devlin-etal-2019-bert].
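The $\ell_2$ gradient clipping used above rescales the gradient vector whenever its global norm exceeds the threshold of $5.0$. A dependency-free sketch of that single step (not the training code itself, which would typically call a deep-learning framework's built-in clipping utility):

```python
import math

def clip_grad_l2(grads, max_norm=5.0):
    """Rescale a flat list of gradient values so that their global
    L2 norm is at most max_norm; gradients below the threshold
    pass through unchanged."""
    norm = math.sqrt(sum(g * g for g in grads))
    if norm > max_norm:
        scale = max_norm / norm
        grads = [g * scale for g in grads]
    return grads

clip_grad_l2([3.0, 4.0])   # norm 5.0, unchanged: [3.0, 4.0]
clip_grad_l2([6.0, 8.0])   # norm 10.0, rescaled: [3.0, 4.0]
```

The rescaling preserves the gradient direction while bounding the update magnitude, which stabilizes training for the recurrent models in particular.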

Evaluation. Token accuracy and sequence accuracy serve as the two primary metrics in the following experiments. The former is a soft metric that allows partial errors within a sequence, while the latter is strict and does not. Reported results are means over five runs, along with standard deviations.
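The two metrics can be made concrete with a short sketch (illustrative only; function names are ours, and we assume predictions and references are token lists of equal length for token accuracy):

```python
def token_accuracy(pred, gold):
    """Soft metric: fraction of positions where the predicted token
    matches the reference token."""
    matches = sum(p == g for p, g in zip(pred, gold))
    return matches / max(len(gold), 1)

def sequence_accuracy(pred, gold):
    """Strict metric: 1.0 only if the entire sequence matches exactly."""
    return float(pred == gold)

pred = ["JUMP", "JUMP", "WALK"]
gold = ["JUMP", "JUMP", "JUMP"]
# token_accuracy(pred, gold)    == 2/3  (one token wrong)
# sequence_accuracy(pred, gold) == 0.0  (any error fails the sequence)
```

The gap between the two scores indicates how often models are "almost right", which sequence accuracy deliberately penalizes.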