
Adaptive Structure Induction for Aspect-based Sentiment Analysis with Spectral Perspective

Hao Niu, Yun Xiong*, Xiaosu Wang, Wenjing Yu, Yao Zhang, Zhonglei Guo

Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University {hniu18, yunx, xswang19, yaozhang, guozl18}@fudan.edu.cn {wjyu21}@m.fudan.edu.cn

Abstract

Recently, incorporating structure information (e.g., a dependency syntactic tree) has been shown to enhance the performance of aspect-based sentiment analysis (ABSA). However, such structure information is obtained from off-the-shelf parsers and is often sub-optimal and cumbersome to use. Automatically learning adaptive structures is thus conducive to solving this problem. In this work, we concentrate on structure induction from pre-trained language models (PLMs) and cast structure induction in a spectral perspective to explore the impact of the scale information in language representations on structure induction ability. Concretely, the main architecture of our model is composed of a commonly used PLM (e.g., RoBERTa) and a simple yet effective graph structure learning (GSL) module (graph learner + GNNs). Subsequently, we plug in Frequency Filters with different bands after the PLM to produce filtered language representations and feed them into the GSL module to induce latent structures. We conduct extensive experiments on three public benchmarks for ABSA. The results and further analyses demonstrate that introducing this spectral approach shortens the Aspects-sentiment Distance (AsD) and benefits structure induction. Even with such a simple framework, performance on the three datasets reaches SOTA (state-of-the-art) or near-SOTA levels. Additionally, our exploration has the potential to generalize to other tasks and to bring inspiration to similar domains.

1 Introduction

Aspect-based sentiment analysis (ABSA) is designed to perform fine-grained sentiment analysis for different aspects of a given sentence (Vo and Zhang, 2015; Dong et al., 2014). Specifically, one or more aspects are present in a sentence, and these aspects may express different sentiment polarities. The purpose of the task is to detect the sentiment polarities (i.e., POSITIVE, NEGATIVE, NEUTRAL) of all given aspects. Given the sentence "The decor is not a special at all but their amazing food makes up for it" and the corresponding aspects "decor" and "food", the sentiment polarity towards "decor" is NEGATIVE, whereas the sentiment for "food" is POSITIVE.

Early works on ABSA (Vo and Zhang, 2015; Kiritchenko et al., 2014; Schouten and Frasincar, 2016) mainly relied on manually designed syntactic features, which is cumbersome and often ineffective. Subsequently, various neural network-based models (Kiritchenko et al., 2014; Vo and Zhang, 2015; Chen et al., 2017; Zhang et al., 2019b; Wang et al., 2020; Trusca et al., 2020) were proposed to deal with ABSA and dispense with hand-crafted feature design. In these studies, syntactic structures proved effective, helping to connect aspects to the corresponding opinion words and thereby enhancing the effectiveness of the ABSA task (Zhang et al., 2019b; Tian et al., 2021; Veyseh et al., 2020; Huang and Carley, 2019; Sun et al., 2019; Wang et al., 2020). Additionally, some research (Chen et al., 2020a; Dai et al., 2021; Zhou et al., 2021; Chen et al., 2022; Brauwers and Frasincar, 2023) suggests there should exist task-specific induced latent structures, because the dependency syntactic structures (hereafter referred to as external structures for convenience) generated by off-the-shelf dependency parsers are static and sub-optimal for ABSA: the syntactic structure is not specially designed to capture the interactions between aspects and opinion words.

Consequently, we classify structure-based ABSA models into three categories by summarizing prior research: (1) external structure, (2) semi-induced structure, and (3) full-induced structure. Works based on external structures use dependency syntactic structures generated by dependency parsers, or modified versions thereof, to provide structural support for ABSA (Zhang et al., 2019b; Sun et al., 2019; Wang et al., 2020). Studies based on semi-induced structures leverage both external and induced structures, merging them to offer structural support for ABSA (Chen et al., 2020a). The first two categories require the introduction of external structures, which increases the complexity of preprocessing, while the third category eliminates this burden entirely.

Our research is based on full-induced structures. Works in this field intend to totally eliminate the reliance on external structures by employing pre-trained language models (PLMs) to induce task-specific latent structures (Dai et al., 2021; Zhou et al., 2021; Chen et al., 2022). These efforts, however, first create a tree-based structure, then convert it into a graph structure and feed it to Graph Neural Networks (GNNs) to capture structural information. Our research follows this line of thought but works directly from the graph perspective, utilizing PLMs to induce a graph structure for GNNs. In addition, studies (Tamkin et al., 2020) have shown that contextual representations contain information about context tokens as well as a wide range of linguistic phenomena, including constituent labels, relationships between entities, dependencies, coreference, etc. That is, there are various scales of information (spanning from the (sub)word itself to its containing phrase, clause, sentence, paragraph, etc.) in contextual representations. This representational characteristic has rarely been explored in previous studies. Therefore, our research investigates, from a spectral perspective, the influence of manipulating the informational scales of contextual representations on structure induction.

Specifically, we employ graph structure learning (GSL) based on metric learning (Zhu et al., 2021) to induce latent structures from PLMs. We investigate three commonly used metric functions (Attention-based (Attn.), Kernel-based (Knl.), and Cosine-based (Cosine)) and contrast their effects on the structure of induced graphs. Furthermore, we heuristically explore four types of Frequency Filters with corresponding band allocations (HIGH, MID-HIGH, MID-LOW, LOW) acting on contextual representations; in this way, we can segregate the representations of different scales at the level of individual neurons. Additionally, we introduce an automatic frequency selector (AFS) to circumvent the cumbersome heuristic approach. This allows us to investigate the impact of manipulations of scale information in contextual representations on structure induction.

We employ three commonly used PLMs: BERTbase, RoBERTabase, and RoBERTalarge. Our research is based on extensive experiments and yields some intriguing findings, which we summarize as follows:

Structure Induction. By comparing three GSL methods (Attention-based (Attn.), Kernel-based (Knl.), and Cosine-based (Cosine)), we find that the Attention-based method is the best for structure induction on ABSA.

Frequency Filter (FLT). Heuristic manipulation of information scales in the contextual representation via Frequency Filters can influence structure induction. Built on Attention-based GSL, structure induction with FLT obtains a lower Aspects-sentiment Distance (AsD) and better performance.

Automatic Frequency Selector (AFS). Dispensing with the tedium of the heuristic method, AFS consistently achieves better results than the Attention-based GSL method. This further demonstrates the effectiveness of manipulating scale information.

2 Related Work

2.1 Tree Induction for ABSA

In ABSA, many works aim to integrate dependency syntactic information into neural networks (Zhang et al., 2019b; Sun et al., 2019; Wang et al., 2020) to enhance performance. Despite the improvements from dependency tree integration, this is still not ideal, since off-the-shelf dependency parsers are static, have parsing errors, and are suboptimal for a particular task. Hence, some effort has been directed toward dynamically learning task-specific tree structures for ABSA. For example, Chen et al. (2020a) combine syntactic dependency trees with an automatically induced latent graph structure via a gate mechanism. Chen et al. (2022) propose to induce an aspect-specific latent tree structure by utilizing policy-based reinforcement learning. Zhou et al. (2021) learn an aspect-specific tree structure from the perspective of closing the distance between aspect and opinion. Dai et al. (2021) propose to induce tree structures from fine-tuned PLMs for ABSA. However, most of them fail to take the contextual representational characteristic into account.

2.2 Spectral Approach in NLP

In NLP, one line of spectral methods is used to improve efficiency (Han et al., 2022; Zhang et al., 2018). For example, Han et al. (2022) propose a new type of recurrent neural network with the help of the discrete Fourier transform and achieve faster training. In addition, a few works investigate contextual representation learning from the standpoint of spectral methods. Kayal and Tsatsaronis (2019) propose a method to construct sentence embeddings by exploiting a spectral decomposition method rooted in fluid dynamics. Müller-Eberstein et al. (2022) and Tamkin et al. (2020) propose using Frequency Filters to constrain different neurons to model structures at different scales. These works bring new inspiration to the study of language representation.

2.3 Metric Learning based GSL

The metric learning approach is one of the representative families of graph structure learning (GSL), where edge weights are derived by learning a metric function between pairwise representations (Zhu et al., 2021). According to the metric function, metric learning approaches can be categorized into two subgroups: Kernel-based and Attention-based. Kernel-based approaches utilize traditional kernel functions as the metric function to model edge weights (Li et al., 2018; Yu et al., 2020; Zhao et al., 2021b). Attention-based approaches usually utilize attention networks or more complicated neural networks to capture the interaction between pairwise representations (Velickovic et al., 2018; Jiang et al., 2019; Chen et al., 2020b; Zhao et al., 2021a). The Cosine-based method (Chen et al., 2020b) is generally a kind of Attention-based method; in our experiments, we single it out as a separate representative method.

3 Method

To obtain an induced graph structure, we propose a spectral filter (FLT) approach that selects scale information while adaptively learning the graph structure. In this section, we introduce this simple but effective approach for inducing graph structures from PLMs to enhance the performance of ABSA. The overall architecture is displayed in Figure 1.

3.1 Overview

As shown in Figure 1, the overall architecture is composed of PLMs, a Graph Learner, a GNNs architecture, and a Prediction Head under normal circumstances.

Figure 1: The overall architecture of our method.

For a given input sentence $S = \{w_{1}, w_{2}, \dots, w_{n}\}$, we employ a PLM as the contextual encoder to obtain the hidden contextual representation $\mathbf{H} \in \mathbb{R}^{n \times d}$ of the input sentence $S$, where $d$ is the dimension of word representations and $n$ is the length of the given sentence. The contextual representation $\mathbf{H}$ is fed into the GNNs architecture as node representations. Simultaneously, it is fed into the Graph Learner to induce latent graph structures, which serve as adjacency matrices $\mathbf{A}$ for the GNNs architecture. The GNNs architecture can then extract aspect-specific features $\mathbf{h}_{a}$ utilizing both the structural information from $\mathbf{A}$ and the pre-trained knowledge in $\mathbf{H}$. Finally, we concatenate the representation of the [CLS] token $\mathbf{h}_{cls}$ from the PLM with $\mathbf{h}_{a}$ and send them into a Multi-layer Perceptron (MLP) (serving as the Prediction Head) to detect the sentiment polarities (i.e., POSITIVE, NEGATIVE, NEUTRAL) of the given aspects.

Here, we investigate the effectiveness of three common graph structure learning (GSL) methods based on metric learning: Attention-based (Attn.), Kernel-based (Knl.), and Cosine-based (Cosine) (refer to (Zhu et al., 2021) for specific descriptions of the Kernel-based and Cosine-based methods). We introduce the Attention-based GSL method to adaptively induce graph structures. Firstly, we calculate the unnormalized pair-wise edge score $e_{ij}$ for the $i$-th and $j$-th words utilizing the given representations $\mathbf{h}_i \in \mathbb{R}^d$ and $\mathbf{h}_j \in \mathbb{R}^d$. Specifically, the pair-wise edge score $e_{ij}$ is calculated as follows:

$$e_{ij} = \left(\mathbf{W}_{i} \mathbf{h}_{i}\right) \left(\mathbf{W}_{j} \mathbf{h}_{j}\right)^{\top}, \tag{1}$$

where $\mathbf{W}_i, \mathbf{W}_j \in \mathbb{R}^{d \times d_h}$ are learnable weights for the $i$-th and $j$-th word representations, and $d_h$ is the hidden dimension.

Then, relying on these pair-wise scores $e_{ij}$ for all word pairs, we construct the adjacency matrices $\mathbf{A}$ for induced graph structures. Concretely,

$$\mathbf{A}_{ij} = \begin{cases} 1 & \text{if } i = j \\ \dfrac{\exp\left(e_{ij}\right)}{\sum_{k=1}^{n} \exp\left(e_{ik}\right)} & \text{otherwise,} \end{cases} \tag{2}$$

where the adaptive adjacency matrix is $\mathbf{A} \in \mathbb{R}^{n \times n}$ , and $\mathbf{A}_{ij}$ is the weight score of the edge between the $i$ -th and $j$ -th words.
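The graph learner of Eqs. (1)-(2) can be sketched in a few lines. The sketch below uses NumPy with randomly initialized weights standing in for learned parameters; the names `Wi`, `Wj` and the toy dimensions are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def softmax(x):
    """Row-wise softmax with the usual max-subtraction for stability."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def induce_adjacency(H, Wi, Wj):
    """Attention-based graph learner (Eqs. 1-2):
    e_ij = (Wi h_i)(Wj h_j)^T, rows softmax-normalized,
    with self-loops fixed to 1 on the diagonal."""
    E = (H @ Wi) @ (H @ Wj).T   # pairwise edge scores, shape (n, n)
    A = softmax(E)              # row-wise normalization over k
    np.fill_diagonal(A, 1.0)    # A_ii = 1
    return A

rng = np.random.default_rng(0)
n, d, dh = 5, 8, 4              # toy sizes: 5 tokens, d=8, d_h=4
H = rng.standard_normal((n, d))
A = induce_adjacency(H, rng.standard_normal((d, dh)), rng.standard_normal((d, dh)))
```

In training, `Wi` and `Wj` would be learned jointly with the rest of the model, so `A` is re-induced at every forward pass.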

For simplicity, we employ vanilla Graph Convolutional Networks (GCNs) (Kipf and Welling, 2017) as the GNNs architecture (other variants of graph neural networks can also be employed here). Given the word representations $\mathbf{H}$ and the adaptive adjacency matrix $\mathbf{A}$, we can construct an induced graph structure consisting of words (each word acts as a node in the graph) and feed it into the GCNs. Specifically,

$$\mathbf{h}_{i}^{l} = \sigma\left(\sum_{j=1}^{n} \mathbf{A}_{ij} \mathbf{W}^{l} \mathbf{h}_{j}^{l-1} + \mathbf{b}^{l}\right), \tag{3}$$

where $\sigma$ is an activation function (e.g., ReLU), and $\mathbf{W}^l$ and $\mathbf{b}^l$ are the learnable weight and bias of the $l$-th GCN layer. By stacking several layers of the Graph Learner and GNNs architectures, we obtain structure-enhanced word representations $\mathbf{H}_g$ for the downstream task. It should be noted that the induced graph structure is dynamically updated during training.
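Eq. (3) amounts to one standard message-passing step. A minimal sketch, assuming ReLU for $\sigma$ and a stand-in row-normalized adjacency matrix rather than one produced by the graph learner:

```python
import numpy as np

def gcn_layer(A, H, W, b):
    """One vanilla GCN layer (Eq. 3):
    h_i^l = ReLU(sum_j A_ij W^l h_j^{l-1} + b^l)."""
    return np.maximum(A @ H @ W + b, 0.0)

rng = np.random.default_rng(0)
n, d, d_out = 5, 8, 8
A = np.abs(rng.standard_normal((n, n)))   # stand-in adjacency (illustrative)
A = A / A.sum(axis=1, keepdims=True)      # row-normalize like Eq. (2)
H = rng.standard_normal((n, d))
H1 = gcn_layer(A, H, rng.standard_normal((d, d_out)), np.zeros(d_out))
```

Stacking several such calls, each preceded by re-inducing `A`, mirrors the stacked Graph Learner + GCN layers described above.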

After we get aspect representations $\mathbf{h}_a$ from $\mathbf{H}_g$, we feed them along with the pooler output $\mathbf{h}_{cls}$ of the PLM (the output representation of the [CLS] token) into a task-specific Prediction Head to acquire results for the downstream task.

3.2 Frequency Filter (FLT)

Furthermore, inspired by (Tamkin et al., 2020), we introduce a spectral analysis approach to enhance the structure induction ability of the Graph Learner. Intuitively, we apply a Frequency Filter to contextual word representations to manipulate scale information, and then feed them into the Graph Learner module to improve structure induction. Contextual representations have been shown not only to convey the meaning of words in context (Peters et al., 2018), but also to carry a wide range of linguistic information such as semantic roles, coreference, and constituent labels (Tenney et al., 2019). Prism (Tamkin et al., 2020) demonstrates that these word representations contain multi-scale information ranging from the (sub)word to the phrase, clause, sentence, and so forth. Hence, in this work, we explore the impact on structure induction of operating on scale-specific information in contextual representations.

Table 1: Statistics of datasets.

Dataset     Positive (Train/Test)   Neutral (Train/Test)   Negative (Train/Test)
Rest14      2164 / 728              807 / 196              637 / 196
Laptop14    994 / 341               870 / 128              464 / 169
Twitter     1561 / 173              3127 / 346             1560 / 173

To achieve this goal, we introduce a Frequency Filter (FLT) based on the Discrete Fourier Transform (DFT) to conduct disentangling operations in the frequency domain. To be specific, given word representations $\mathbf{H} \in \mathbb{R}^{n \times d}$, we feed them into the FLT before the Graph Learner. For the $i$-th and $j$-th word representations $\mathbf{h}_i \in \mathbb{R}^d$ and $\mathbf{h}_j \in \mathbb{R}^d$, the pair-wise edge score $e_{ij}$ is calculated as follows:

$$\Phi^{flt}(x) = \mathcal{F}^{-1}\left(\Psi\left(\mathcal{F}(x)\right)\right), \tag{4}$$

$$e_{ij} = \Phi^{flt}\left(\mathbf{W}_{i} \mathbf{h}_{i}\right) \Phi^{flt}\left(\mathbf{W}_{j} \mathbf{h}_{j}\right)^{\top}, \tag{5}$$

where $\mathcal{F}(\cdot)$ and $\mathcal{F}^{-1}(\cdot)$ denote the Fast Fourier Transform (FFT) and its inverse, $\Psi$ indicates the filtering operation, and $\Phi^{flt}$ denotes the Frequency Filter (FLT). We carry out filtering at the sentence level. Subsequent operations are consistent with Section 3.1. We conduct experiments and analyses on four band allocations (HIGH, MID-HIGH, MID-LOW, LOW). The specific band allocations are displayed in Table 5, and the analysis experiments are given in Sections 4.7 and 4.10.
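A hedged sketch of the filtering in Eqs. (4)-(5): an FFT over the token axis, a boolean band mask playing the role of $\Psi$, and an inverse FFT. The `keep` mask and the example band choice are illustrative assumptions; the paper's exact filter design may differ:

```python
import numpy as np

def frequency_filter(H, keep):
    """Sentence-level FLT sketch (Eq. 4): FFT along the token axis,
    zero out DFT components outside the boolean mask `keep`, inverse FFT."""
    F = np.fft.fft(H, axis=0)           # DFT over the n tokens
    F[~keep] = 0.0                      # band filtering (Psi)
    return np.fft.ifft(F, axis=0).real  # back to the token domain

n, d = 100, 16
H = np.random.default_rng(1).standard_normal((n, d))
keep = np.zeros(n, dtype=bool)
keep[n // 2:] = True                    # e.g. HIGH band: DFT indices L/2 -> L
H_f = frequency_filter(H, keep)
```

The filtered `H_f` would then replace `H` as input to the graph learner; keeping all indices recovers the original representation up to floating-point error.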

4 Experiment

To prove the effectiveness of our approach, we demonstrate experimental results conducted on three datasets for ABSA and compare them with previous works. We show the details as follows.

4.1 Dataset

We conduct experiments on SemEval 2014 task (Rest14 and Laptop14) (Pontiki et al., 2014) and Twitter (Dong et al., 2014) datasets, which are widely used. Each of the three datasets contains

Table 2: Overall performance of ABSA on the three datasets. According to the categorization of structure (Dep.: external structures (dependency syntactic tree), Semi.: semi-induced structures, Full: full-induced structures, and None: no structure information used), we classify the baselines accordingly, which are in the 'Structure' column.

Embedding          Model                Structure   Rest14           Laptop14         Twitter
                                                    Acc.    Ma-F1    Acc.    Ma-F1    Acc.    Ma-F1
Static Embedding   depGCN               Dep.        80.77#  72.02#   75.55#  71.05#   -       -
                   CDT                  Dep.        82.30#  74.02#   77.19#  72.99#   -       -
                   kumaGCN              Semi.       81.43   73.64    76.12   72.42    72.45   70.77
                   RGAT                 Dep.        83.30   76.08    77.42   73.76    75.57   73.82
                   FT-RoBERTa(ASGCN)    Full        82.31   73.53    76.33   72.76    73.84   72.66
                   FT-RoBERTa(PWCN)     Full        82.40   73.95    76.95   73.21    73.84   71.43
                   FT-RoBERTa(RGAT)     Full        82.76   75.25    77.43   74.21    75.43   74.04
BERTbase           BERT                 None        85.62#  78.28#   77.58#  72.38#   75.28   74.11
                   SAGAT                Dep.        85.08   77.94    80.37   76.94    75.40   74.17
                   DGEDT                Dep.        86.30   80.00    79.80   75.60    77.90   75.40
                   depGCN-BERT          Dep.        85.00   78.79    81.19   77.67    75.58   74.58
                   RGAT-BERT            Dep.        86.60   81.35    78.21   74.07    76.15   74.88
                   KumaGCN-BERT         Semi.       86.43   80.30    81.98   78.81    77.89   77.03
                   dotGCN-BERT          Full        86.16   80.49    81.03   78.10    78.11   77.00
RoBERTabase        RoBERTa + MLP        None        87.32   81.01    82.60   79.33    77.17   76.20
                   RoBERTa-ASC(Dep)     Dep.        82.82   75.12    74.12   70.52    -       -
                   LCFS-ASC-CDW(Dep)    Dep.        86.71   80.31    80.52   77.13    -       -
                   Dep(ASGCN)           Dep.        86.90   80.75    81.66   78.31    75.28   74.38
                   Dep(PWCN)            Dep.        87.41   81.07    84.16   81.18    76.63   75.60
                   Dep(RGAT)            Dep.        87.43   80.61    83.43   80.28    74.42   72.93
                   FT-RoBERTa(ASGCN)    Full        86.87   80.59    83.33   80.32    76.10   75.07
                   FT-RoBERTa(PWCN)     Full        87.35   80.85    84.01   81.08    77.02   75.52
                   FT-RoBERTa(RGAT)     Full        87.52   81.29    83.33   79.95    75.81   74.91
                   FLT                  Full        88.57   83.27    85.42   83.01    77.02   75.83
RoBERTalarge       FLT                  Full        90.27   85.20    86.05   84.68    77.89   77.20

three sentiment label categories: POSITIVE, NEUTRAL, and NEGATIVE. Statistics of these datasets are displayed in Table 1, where Train/Test denote the number of instances in the training and testing sets for each dataset.

4.2 Experiment Settings

We utilize popular Pre-trained Language Models (PLMs) based on the Transformer Encoder architecture (BERTbase (Devlin et al., 2019), RoBERTabase and RoBERTalarge (Liu et al., 2019)) for word representations. The hidden dimension of all Graph Learners is 60, the dropout rate is 0.2, and the batch size is 32. The number of epochs is 60 for RoBERTabase and RoBERTalarge, and 30 for BERTbase. We use the Adam optimizer (Kingma and Ba, 2015) with the learning rate initialized to 1e-5. Following previous works, we use Accuracy and Macro-F1 scores as metrics. All experiments are conducted on an NVIDIA Tesla P100.

4.3 Baselines

We categorize the existing structure-based ABSA models into three genres: external structure, semi-induced structure, and full-induced structure. Below, we introduce each of them in detail.

External Structure. This line of work utilizes dependency syntactic structures generated by external dependency parsers (e.g., spaCy and Stanford CoreNLP) to offer structural information supplements for ABSA. Its representative works are as follows:

depGCN (Zhang et al., 2019a) combines a BiLSTM, which captures contextual information about word order, with multi-layered GCNs.

CDT (Sun et al., 2019) encodes both dependency and contextual information by utilizing GCNs and BiLSTM.

RGAT (Wang et al., 2020) feeds a reshaped syntactic dependency graph into RGAT to capture aspect-centric information.

SAGAT (Huang et al., 2020) uses a graph attention network and BERT to explore both syntactic and semantic information for ABSA.

DGEDT (Tang et al., 2020) jointly considers BERT outputs and dependency syntactic representations by utilizing GCNs.

LCFS-ASC-CDW (Phan and Ogunbona, 2020) combines dependency syntactic embeddings, part-of-speech embeddings, and contextualized embeddings to enhance the performance of ABSA.

Table 3: Results of ablation studies.

Embedding      Model   Structure   Rest14           Laptop14         Twitter
                                   Acc.    Ma-F1    Acc.    Ma-F1    Acc.    Ma-F1
BERTbase       Attn.   Full        85.43   78.04    80.54   77.06    76.22   75.04
               FLT     Full        87.04   81.46    81.17   77.97    77.55   76.66
RoBERTabase    Attn.   Full        87.59   81.72    83.86   80.53    75.72   73.92
               FLT     Full        88.57   83.27    85.42   83.01    77.02   75.83
RoBERTalarge   Attn.   Full        89.46   84.12    84.80   82.19    77.02   75.75
               FLT     Full        90.27   85.20    86.05   84.68    77.89   77.20

Semi-induced Structure. Works in this line exploit both the dependency syntactic structure from off-the-shelf parsers and an induced structure from PLMs. A representative work is as follows:

KumaGCN (Chen et al., 2020a) combines latent graphs induced by self-attention neural networks with the dependency syntactic structure for ABSA.

Full-induced Structure. Works in this line intend to dispense entirely with external parsers and induce task-specific latent structures from PLMs for downstream tasks. Its representative works are as follows:

FT-RoBERTa (Dai et al., 2021) induces tree structures from fine-tuned RoBERTa (RoBERTa fine-tuned on the ABSA datasets in advance) by utilizing a dependency probing approach.

dotGCN (Chen et al., 2022) induces aspect-specific opinion tree structures by using reinforcement learning and attention-based regularization.

4.4 Overall Performance

The overall results of competitive approaches and FLT on the three benchmarks are shown in Table 2. We categorize the baselines according to the embedding type (static embedding (GloVe), BERTbase, RoBERTabase, and RoBERTalarge) and the structure they use (None, Dep., Semi., and Full). The parameters of the PLMs are trained together with the GSL module for FLT. Compared with the baselines, FLT obtains the best results except on Twitter, where it obtains comparable results. We speculate that the reason is that expression on Twitter is more casual, which limits the improvement structure can bring there; this is consistent with the result in (Dai et al., 2021). Compared with the FT-RoBERTa series, the works most relevant to ours, FLT outperforms them by a clear margin on all three datasets. It is also worth noting that the FT-RoBERTa series requires fine-tuning PLMs on the ABSA datasets in advance (Dai et al., 2021), whereas FLT does not. FLT is therefore simpler and more effective than the FT-RoBERTa series.

Table 4: The impact of different metric functions based on RoBERTabase.

Metric   Rest14           Laptop14         Twitter
         Acc.    Ma-F1    Acc.    Ma-F1    Acc.    Ma-F1
Attn.    87.59   81.72    83.86   80.53    75.72   73.92
Knl.     87.14   80.45    83.54   80.44    76.01   73.98
Cosine   87.14   79.94    83.39   79.93    74.28   72.80

4.5 Ablation Study

We conduct ablation studies to highlight the effectiveness of FLT, which is built on the Attention-based (Attn.) GSL module plus a Frequency Filter. Thus, we compare Attn. and FLT with three PLMs (BERTbase, RoBERTabase, and RoBERTalarge) to show the impact of introducing the Frequency Filter. Results are shown in Table 3. Compared to Attn., FLT achieves consistent and significant improvements across the three datasets with different PLMs. It can therefore be seen that the manipulation of scale information is beneficial for enhancing performance.

4.6 Different Metric Function

In this section, we contrast the impact on structure induction of three representative metric functions: Attention-based (Attn.), Kernel-based (Knl.), and Cosine-based (Cosine). From the graph structure learning literature (Chen et al., 2020b; Zhu et al., 2021), common options for metric learning include the attention mechanism (Vaswani et al., 2017; Jiang et al., 2019), the radial basis function kernel (Li et al., 2018; Yeung and Chang, 2007), and cosine similarity (Wojke and Bewley, 2018). We follow these previous works to implement the counterpart metric functions (Knl. and Cosine) for comparison; the results are shown in Table 4. The Attention-based metric (Attn.) gains the best results on the three benchmarks except on Twitter. But the margin between Attn. and Knl. on Twitter is small (0.29% for Accuracy and 0.06% for Macro-F1), so we select the metric function Attn. for later analysis.

Table 5: The spectral bands we consider in this work. Since the task considered in this work is at the sentence level, we only take the scale from word to sentence into account. Here, $L$ denotes the sentence's length.

Band       Scale      Period (Toks)   DFT index
HIGH       Word       1 → 2           L/2 → L
MID-HIGH   Phrase     2 → 6           L/6 → L/2
MID-LOW    Clause     6 → 14          L/14 → L/6
LOW        Sentence   14 → L          1 → L/14

4.7 Different Frequency Filters

Table 6: Band impact based on RoBERTabase. Results are statistics over heuristic frequency selections, reported as mean (standard deviation).

Filter     Rest14                     Laptop14                   Twitter
           Acc.          Ma-F1        Acc.          Ma-F1        Acc.          Ma-F1
HIGH       87.54 (0.55)  81.33 (0.97) 84.21 (0.43)  81.50 (0.57) 75.83 (0.34)  74.76 (0.42)
MID-HIGH   87.55 (0.53)  81.31 (1.06) 84.39 (0.78)  81.69 (0.95) 75.71 (0.78)  74.68 (0.72)
MID-LOW    87.23 (0.27)  81.15 (0.71) 83.74 (0.52)  81.00 (0.85) 76.73 (0.23)  75.64 (0.12)
LOW        87.37 (0.32)  80.75 (0.45) 83.49 (0.15)  80.60 (0.15) 76.16 (0.20)  74.94 (0.19)


Figure 2: The distribution of sentence length on datasets (we combine training and testing sets for this statistic).

This section analyzes the impact of four different spectral bands (HIGH, MID-HIGH, MID-LOW, LOW) on structure induction. Each band reflects a different range of linguistic scales, from word level to sentence level; the detailed setting is shown in Table 5. The different spectral bands are characterized by their period: the number of tokens it takes to complete a cycle. For example, the word scale corresponds to a period of $1 \rightarrow 2$ tokens, so the spectral band should be $L/2 \rightarrow L$ if the sentence's length is denoted by $L$.
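The period-to-index conversion described above can be made concrete: a DFT component with index $k$ completes $k$ cycles over $L$ tokens, so its period is $L/k$. A small sketch (the function name and the use of integer floors are implementation choices of this illustration):

```python
def band_to_dft_indices(period_lo, period_hi, L):
    """Map a period range [period_lo, period_hi] (in tokens) to a DFT
    index range: index k has period L / k, so the index range is
    [L / period_hi, L / period_lo]."""
    return max(1, L // period_hi), L // period_lo

L = 100  # sentence length used in the paper's experiments
bands = {
    "HIGH":     band_to_dft_indices(1, 2, L),    # word scale
    "MID-HIGH": band_to_dft_indices(2, 6, L),    # phrase scale
    "MID-LOW":  band_to_dft_indices(6, 14, L),   # clause scale
    "LOW":      band_to_dft_indices(14, L, L),   # sentence scale
}
```

With `L = 100` this reproduces the allocations of Table 5, e.g. HIGH covers DFT indices 50 through 100 and LOW covers indices 1 through 7.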

Then, we conduct analysis experiments on the three datasets to explore the impact of different spectral bands. The length $L$ in our experiments is 100, which covers the length distribution of all samples in these datasets. We heuristically perform multiple frequency selections within each frequency band; the performance of our model across bands on the three datasets is summarized in Table 6. Please refer to Appendix A for the detailed frequency selections and results. Our model performs better in the HIGH and MID-HIGH bands on Rest14 and Laptop14, but better in the LOW and MID-LOW bands on Twitter. Combined with Figure 2, we find that the distribution of sentence length on Twitter is very distinct from that of Rest14 and Laptop14: sentences on Twitter are generally longer, so clause- and sentence-scale information is more beneficial there.

4.8 Aspects-sentiment Distance

To illustrate the effectiveness of induced structure, following (Dai et al., 2021), we introduce the Aspects-sentiment Distance (AsD) to quantify the average distance between aspects and sentiment words in the induced structure. The AsD is calculated as follows:

$$C^{\star} = S_{i} \cap C, \tag{6}$$

$$AsD\left(S_{i}\right) = \frac{\sum_{a_{p} \in A} \sum_{c_{q} \in C^{\star}} dist\left(a_{p}, c_{q}\right)}{|A|\,|C^{\star}|}, \tag{7}$$

$$AsD(D) = \frac{\sum_{S_{i} \in D} AsD\left(S_{i}\right)}{|D|}, \tag{8}$$

where $C = \langle c_1, \dots, c_q \rangle$ is a sentiment word set (following the setting from Dai et al., 2021), $S_i$ denotes each sentence in dataset $D$, and $A = \langle a_1, \dots, a_p \rangle$ denotes the set of aspects for each sentence. We utilize $\text{dist}(n_1, n_2)$ to calculate the relative distance between two nodes ($n_1$ and $n_2$) on the graph structure, and $|\cdot|$ represents the number of elements in the given set. For the detailed setting, please refer to Appendix B.
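Eqs. (6)-(8) reduce to averaging hop distances on the induced graph. A minimal sketch using BFS distance on a toy adjacency list; the node indices for aspects and sentiment words are illustrative:

```python
from collections import deque

def shortest_dist(adj, src, dst):
    """BFS hop distance between src and dst on an undirected graph
    given as an adjacency list (dict: node -> list of neighbors)."""
    seen, q = {src}, deque([(src, 0)])
    while q:
        node, d = q.popleft()
        if node == dst:
            return d
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                q.append((nxt, d + 1))
    return float("inf")  # disconnected

def asd_sentence(adj, aspects, sentiment_words):
    """Eq. (7): mean pairwise distance between aspect nodes and the
    sentiment-word nodes present in the sentence."""
    dists = [shortest_dist(adj, a, c) for a in aspects for c in sentiment_words]
    return sum(dists) / (len(aspects) * len(sentiment_words))

# Toy 4-token chain 0-1-2-3; aspect at node 0, sentiment word at node 3.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
```

Averaging `asd_sentence` over all sentences in a dataset gives Eq. (8); in the paper's tables, lower values mean aspects sit closer to sentiment words.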

The results are displayed in Table 7; a smaller value indicates a shorter distance between aspects and sentiment words. Compared to the dependency structure (Dep.), both attention-based GSL (Attn.) and our method (FLT) shorten the Aspects-sentiment Distance greatly, which shows that the GSL method encourages aspects to find sentiment words. Furthermore, in comparison with Attn., FLT has a lower AsD score, which shows that a reasonable adjustment at the scale level can yield better structures.

4.9 Structure Visualization and Case Study

Structure Visualization. As shown in Figure 3, we visualize the difference of distinct structures: (a) is from the Spacy parser, (b) is from Attn., and (c)

Table 7: The Aspects-sentiment Distance (AsD) of different structures on all datasets. The dependency tree structure (Dep.) comes from the Spacy parser.

Structure   Rest14   Laptop14   Twitter
Dep.        8.19     8.02       8.33
Attn.       2.26     2.55       2.64
FLT         1.97     2.15       2.16


Figure 3: A case is from the Rest14 dataset. The colored words are aspects. The golden label for falafal is NEGATIVE, and for chicken is POSITIVE.

is the result from FLT. This case is from the Rest14 dataset. In comparison with (a), aspects are more directly connected to important sentiment words (e.g., cooked, dried, and fine) in (b) and (c), which is consistent with the AsD results in Section 4.8. In this case, both (b) and (c) obtain correct predictions; hence, from the perspective of structure, they are relatively similar.

Case Study. In Figure 4, we provide a case to compare Attn. in (a) and FLT in (b). Here the structures induced by the two are quite different, and for the aspect (Chinese food), Attn. gives a wrong judgment. Comparing the structures, we find that although the aspect word Chinese in (a) attends to the key information I can make better at home, the model may not understand the semantics expressed by this clause. From the perspective of structure, FLT in (b) is clearly better able to capture the meaning of this clause.

4.10 Automatic Frequency Selector (AFS)

Furthermore, in order to illustrate the impact of the operation of the scale information on the GSL, we introduce an Automatic Frequency Selector (AFS) to select helpful frequency components along with


Figure 4: A case of Rest14 dataset. The colored words denote aspects. The golden label for Chinese food is NEGATIVE.

Table 8: The results of AFS based on RoBERTabase.

Model   Rest14           Laptop14         Twitter
        Acc.    Ma-F1    Acc.    Ma-F1    Acc.    Ma-F1
Attn.   87.59   81.72    83.86   80.53    75.72   73.92
AFS     88.30   82.89    84.48   81.63    76.16   75.20

the optimization of the overall model. In this way, for different datasets, information at the corresponding scale (HIGH, MID-HIGH, etc.) can be adaptively selected to improve structure induction. Here we briefly describe the AFS; for a detailed description, please refer to Appendix C.

Model Description. Following the operation of FLT, for an input sentence representation $\mathbf{H} \in \mathbb{R}^{n \times d}$, we conduct the Discrete Fourier Transform (DFT) $\mathcal{F}$ to transform $\mathbf{H}$ into the frequency domain. Then, we utilize the AFS $\Phi^{auto}$ to adaptively select frequency components, where $\Phi^{auto}$ is realized with a Multi-layer Perceptron (MLP) architecture (see Appendix C for details). After AFS and the inverse Discrete Fourier Transform $\mathcal{F}^{-1}$, we obtain the sentence representation $\mathbf{H}^{afs} \in \mathbb{R}^{n \times d}$. The subsequent operations are consistent with the attention-based GSL.
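A minimal PyTorch sketch of this pipeline follows. It is not the paper's released implementation: the component-embedding size, MLP widths, and the soft application of the keep mask are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AFS(nn.Module):
    """Sketch of the Automatic Frequency Selector: DFT -> learned
    per-component keep mask (Gumbel-relaxed Bernoulli) -> inverse DFT."""
    def __init__(self, k, d_k=32, tau=0.5):
        super().__init__()
        self.xi = nn.Parameter(torch.randn(k, d_k))  # learnable component embeddings
        self.mlp = nn.Sequential(nn.Linear(d_k, d_k), nn.ReLU(), nn.Linear(d_k, 2))
        self.tau = tau  # annealing temperature

    def forward(self, H):
        # H: (n, d) real-valued sentence representation, with n == k components
        F = torch.fft.fft(H, dim=0)                      # frequency domain, complex
        z = self.mlp(self.xi)                            # (k, 2) logits over drop/keep
        eps = torch.rand_like(z).clamp_min(1e-9)
        g = (z - torch.log(-torch.log(eps))) / self.tau  # Gumbel perturbation (cf. Eq. 10)
        m = torch.softmax(g, dim=-1)[:, 1]               # relaxed keep-probability per component
        F = F * m.unsqueeze(-1)                          # soft frequency selection
        return torch.fft.ifft(F, dim=0).real             # filtered representation H^afs

h = torch.randn(8, 16)          # toy sentence: 8 tokens, 16-dim features
out = AFS(k=8, d_k=32)(h)       # same shape as the input
```

At inference time, the soft mask `m` would instead be thresholded (cf. the sparsity threshold $\gamma$ in Appendix C) to obtain hard component selections.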

Results. We utilize AFS instead of FLT to conduct experiments on the three datasets, the results are shown in Table 8. Compared to Attn., AFS is consistently improved. This further illustrates the operation of scale information is conducive to improving the effectiveness of GSL on ABSA. Compared with the heuristic FLT method, AFS avoids the burden brought by manual frequency selection, making the method more flexible.

Table 9: Frequency Component Analysis: the percentage of Frequency Components selected by AFS within each spectral band. Since the task considered in this work is at the sentence level, we only take the scales from word to sentence into account. Here, $L$ denotes the sentence's length.

| Band | Rest14 (%) | Laptop14 (%) | Twitter (%) | Scale | DFT index |
|---|---|---|---|---|---|
| HIGH | 84.77 | 25.64 | 87.22 | Word | $L/2 \rightarrow L$ |
| MID-HIGH | 89.82 | 28.68 | 92.61 | Phrase | $L/6 \rightarrow L/2$ |
| MID-LOW | 91.82 | 41.02 | 96.87 | Clause | $L/14 \rightarrow L/6$ |
| LOW | 99.61 | 88.08 | 99.19 | Sentence | $1 \rightarrow L/14$ |
| Overall | 88.88 | 35.21 | 91.41 | - | - |

Frequency Component Analysis. Furthermore, we conducted an in-depth analysis of the intermediate results obtained from the Automatic Frequency Selector (AFS). From Table 8, we observe that incorporating AFS consistently enhances model performance without manual adjustment of Frequency Components, which suggests that the automated selection process is effective. Based on AFS's Frequency Component selection outcomes, we performed statistical analyses across the three datasets in accordance with the spectral band distribution outlined in Table 5. Table 9 reports the percentage of Frequency Components selected by AFS within each spectral band, while "Overall" represents the percentage of selected Frequency Components across all four bands.

It is evident that the percentages are not uniformly 100%, indicating that AFS indeed performs selection on Frequency Components and thereby adjusts information at various scales to achieve consistent improvements. Moreover, the percentage of selected Frequency Components varies across datasets, implying that AFS adapts to the diverse demands of distinct samples. Notably, the LOW band exhibits the highest percentage of selected Frequency Components, underscoring the significance of sentence-level information for token-level tasks (structure induction for ABSA can be considered a token-level task). This observation also aligns with the conclusion drawn by Müller-Eberstein et al. (2022).
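As a sanity check, per-band selection percentages of this kind can be computed from a selection mask with a few lines of NumPy. The band boundaries below follow the DFT index ranges in Table 9; `selected` is a hypothetical boolean mask over DFT indices, not data from the paper.

```python
import numpy as np

def band_percentages(selected, L):
    """Percentage of selected frequency components per spectral band.
    selected: boolean array of length L over DFT indices 0..L-1."""
    bands = {
        "LOW":      (1, L // 14),       # sentence scale
        "MID-LOW":  (L // 14, L // 6),  # clause scale
        "MID-HIGH": (L // 6, L // 2),   # phrase scale
        "HIGH":     (L // 2, L),        # word scale
    }
    return {name: 100.0 * selected[lo:hi].mean() if hi > lo else 0.0
            for name, (lo, hi) in bands.items()}

mask = np.ones(100, dtype=bool)              # toy mask: every component selected
print(band_percentages(mask, 100))           # every band reports 100.0
```

With a learned mask in place of the toy one, this directly reproduces the per-band statistics reported in Table 9.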

5 Conclusion

In this work, we propose utilizing GSL to induce latent structures from PLMs for ABSA and introduce spectral methods (FLT and AFS) into this problem. We also explore the impact of manipulating the scale information of contextual representations on structure induction. Extensive experiments and analyses demonstrate that operating on the scale information of contextual representations can enhance the effect of GSL on ABSA. Additionally, our exploration may provide inspiration for other similar domains.

Limitations

Though we verify that operating on various information scales can benefit structure induction for ABSA, some limitations remain. Although the heuristic FLT achieves excellent results, it requires some manual intervention. The AFS method reduces manual participation, but its performance is worse than that of the optimal FLT configuration. Nevertheless, it is still meaningful to explore the impact of scale information in contextual representations on downstream tasks.

Acknowledgements

This work is funded in part by the National Natural Science Foundation of China Project (No.U1936213), and the Major Key Project of PCL (PCL2021A06).

References

Gianni Brauwers and Flavius Frasincar. 2023. A survey on aspect-based sentiment classification. ACM Comput. Surv., 55(4):65:1-65:37.
Chenhua Chen, Zhiyang Teng, Zhongqing Wang, and Yue Zhang. 2022. Discrete opinion tree induction for aspect-based sentiment analysis. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 2051-2064. Association for Computational Linguistics.
Chenhua Chen, Zhiyang Teng, and Yue Zhang. 2020a. Inducing target-specific latent structures for aspect sentiment classification. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 5596-5607. Association for Computational Linguistics.
Peng Chen, Zhongqian Sun, Lidong Bing, and Wei Yang. 2017. Recurrent attention network on memory for aspect sentiment analysis. In EMNLP, pages 452–461. Association for Computational Linguistics.
Yu Chen, Lingfei Wu, and Mohammed J. Zaki. 2020b. Iterative deep graph learning for graph neural networks: Better and robust node embeddings. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.

Junqi Dai, Hang Yan, Tianxiang Sun, Pengfei Liu, and Xipeng Qiu. 2021. Does syntax matter? A strong baseline for aspect-based sentiment analysis with roberta. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 1816-1829. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT (1), pages 4171–4186. Association for Computational Linguistics.
Li Dong, Furu Wei, Chuanqi Tan, Duyu Tang, Ming Zhou, and Ke Xu. 2014. Adaptive recursive neural network for target-dependent twitter sentiment classification. In ACL (2), pages 49-54. The Association for Computer Linguistics.
Bing Han, Cheng Wang, and Kaushik Roy. 2022. Oscillatory fourier neural network: A compact and efficient architecture for sequential processing. In Thirty-Sixth AAAI Conference on Artificial Intelligence, AAAI 2022, Thirty-Fourth Conference on Innovative Applications of Artificial Intelligence, IAAI 2022, The Twelveth Symposium on Educational Advances in Artificial Intelligence, EAAI 2022 Virtual Event, February 22 - March 1, 2022, pages 6838-6846. AAAI Press.
Binxuan Huang and Kathleen M. Carley. 2019. Syntax-aware aspect level sentiment classification with graph attention networks. In EMNLP/IJCNLP (1), pages 5468-5476. Association for Computational Linguistics.
Lianzhe Huang, Xin Sun, Sujian Li, Linhao Zhang, and Houfeng Wang. 2020. Syntax-aware graph attention network for aspect-level sentiment classification. In Proceedings of the 28th International Conference on Computational Linguistics, COLING 2020, Barcelona, Spain (Online), December 8-13, 2020, pages 799-810. International Committee on Computational Linguistics.
Eric Jang, Shixiang Gu, and Ben Poole. 2017. Categorical reparameterization with gumbel-softmax. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.
Bo Jiang, Ziyan Zhang, Doudou Lin, Jin Tang, and Bin Luo. 2019. Semi-supervised learning with graph learning-convolutional networks. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, pages 11313-11320. Computer Vision Foundation / IEEE.
Subhradeep Kayal and George Tsatsaronis. 2019. Eigensent: Spectral sentence embeddings using higher-order dynamic mode decomposition. In Proceedings of the 57th Conference of the Association

for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Papers, pages 4536-4546. Association for Computational Linguistics.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR (Poster).
Thomas N. Kipf and Max Welling. 2017. Semi-supervised classification with graph convolutional networks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.
Svetlana Kiritchenko, Xiaodan Zhu, Colin Cherry, and Saif M. Mohammad. 2014. Nrc-canada-2014: Detecting aspects and sentiment in customer reviews. In Proceedings of the 8th International Workshop on Semantic Evaluation, SemEval@COLING 2014, Dublin, Ireland, August 23-24, 2014, pages 437-442. The Association for Computer Linguistics.
Ruoyu Li, Sheng Wang, Feiyun Zhu, and Junzhou Huang. 2018. Adaptive graph convolutional neural networks. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 3546-3553. AAAI Press.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692.
Chris J. Maddison, Andriy Mnih, and Yee Whye Teh. 2017. The concrete distribution: A continuous relaxation of discrete random variables. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.
Max Müller-Eberstein, Rob van der Goot, and Barbara Plank. 2022. Spectral probing. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pages 7730-7741. Association for Computational Linguistics.
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 2227-2237. Association for Computational Linguistics.

Minh-Hieu Phan and Philip O. Ogunbona. 2020. Modelling context and syntactical features for aspect-based sentiment analysis. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 3211-3220. Association for Computational Linguistics.
Maria Pontiki, Dimitris Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014. SemEval-2014 task 4: Aspect based sentiment analysis. In SemEval@COLING, pages 27-35. The Association for Computer Linguistics.
Kim Schouten and Flavius Frasincar. 2016. Survey on aspect-level sentiment analysis. IEEE Trans. Knowl. Data Eng., 28(3):813-830.
Kai Sun, Richong Zhang, Samuel Mensah, Yongyi Mao, and Xudong Liu. 2019. Aspect-level sentiment analysis via convolution over dependency tree. In EMNLP/IJCNLP (1), pages 5678-5687. Association for Computational Linguistics.
Alex Tamkin, Dan Jurafsky, and Noah D. Goodman. 2020. Language through a prism: A spectral approach for multiscale language representations. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
Hao Tang, Donghong Ji, Chenliang Li, and Qiji Zhou. 2020. Dependency graph enhanced dual-transformer structure for aspect-based sentiment classification. In ACL, pages 6578-6588. Association for Computational Linguistics.
Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R. Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R. Bowman, Dipanjan Das, and Ellie Pavlick. 2019. What do you learn from context? probing for sentence structure in contextualized word representations. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.
Yuanhe Tian, Guimin Chen, and Yan Song. 2021. Aspect-based sentiment analysis with type-aware graph convolutional networks and layer ensemble. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 2910-2922. Association for Computational Linguistics.
Maria Mihaela Trusca, Daan Wassenberg, Flavius Frasincar, and Rommert Dekker. 2020. A hybrid approach for aspect-based sentiment analysis using deep contextual word embeddings and hierarchical attention. In Web Engineering - 20th International

Conference, ICWE 2020, Helsinki, Finland, June 9-12, 2020, Proceedings, volume 12128 of Lecture Notes in Computer Science, pages 365-380. Springer.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS, pages 5998-6008.
Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. 2018. Graph attention networks. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net.
Amir Pouran Ben Veyseh, Nasim Nouri, Franck Dernoncourt, Quan Hung Tran, Dejing Dou, and Thien Huu Nguyen. 2020. Improving aspect-based sentiment analysis with gated graph convolutional networks and syntax-based regulation. In Findings of the Association for Computational Linguistics: EMNLP 2020, Online Event, 16-20 November 2020, volume EMNLP 2020 of Findings of ACL, pages 4543-4548. Association for Computational Linguistics.
Duy-Tin Vo and Yue Zhang. 2015. Target-dependent twitter sentiment classification with rich automatic features. In Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, IJCAI 2015, Buenos Aires, Argentina, July 25-31, 2015, pages 1347-1353. AAAI Press.
Kai Wang, Weizhou Shen, Yunyi Yang, Xiaojun Quan, and Rui Wang. 2020. Relational graph attention network for aspect-based sentiment analysis. In ACL, pages 3229-3238. Association for Computational Linguistics.
Nicolai Wojke and Alex Bewley. 2018. Deep cosine metric learning for person re-identification. In 2018 IEEE Winter Conference on Applications of Computer Vision, WACV 2018, Lake Tahoe, NV, USA, March 12-15, 2018, pages 748-756. IEEE Computer Society.
Dit-Yan Yeung and Hong Chang. 2007. A kernel approach for semisupervised metric learning. IEEE Trans. Neural Networks, 18(1):141-149.
Donghan Yu, Ruohong Zhang, Zhengbao Jiang, Yuexin Wu, and Yiming Yang. 2020. Graph-revised convolutional network. In Machine Learning and Knowledge Discovery in Databases - European Conference, ECML PKDD 2020, Ghent, Belgium, September 14-18, 2020, Proceedings, Part III, volume 12459 of Lecture Notes in Computer Science, pages 378-393. Springer.
Chen Zhang, Qiuchi Li, and Dawei Song. 2019a. Aspect-based sentiment classification with aspect-specific graph convolutional networks. In EMNLP/IJCNLP (1), pages 4567-4577. Association for Computational Linguistics.

Chen Zhang, Qiuchi Li, and Dawei Song. 2019b. Syntax-aware aspect-level sentiment classification with proximity-weighted convolution network. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2019, Paris, France, July 21-25, 2019, pages 1145-1148. ACM.

Jiong Zhang, Yibo Lin, Zhao Song, and Inderjit S. Dhillon. 2018. Learning long term dependencies via fourier recurrent units. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholm, Sweden, July 10-15, 2018, volume 80 of Proceedings of Machine Learning Research, pages 5810-5818. PMLR.

Jianan Zhao, Xiao Wang, Chuan Shi, Binbin Hu, Guojie Song, and Yanfang Ye. 2021a. Heterogeneous graph structure learning for graph neural networks. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pages 4697-4705. AAAI Press.

Tong Zhao, Yozen Liu, Leonardo Neves, Oliver J. Woodford, Meng Jiang, and Neil Shah. 2021b. Data augmentation for graph neural networks. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pages 11015-11023. AAAI Press.

Yuxiang Zhou, Lejian Liao, Yang Gao, Zhanming Jie, and Wei Lu. 2021. To be closer: Learning to link up aspects with opinions. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 3899-3909. Association for Computational Linguistics.

Yanqiao Zhu, Weizhi Xu, Jinghao Zhang, Qiang Liu, Shu Wu, and Liang Wang. 2021. Deep graph structure learning for robust representations: A survey. CoRR, abs/2103.03036.

A Different Frequency Selection

We heuristically select spectral bands (HIGH, MID-HIGH, MID-LOW, LOW) to observe the impact of different spectral bands on structure induction for ABSA. The specific selection of spectral bands at different frequencies and their results are shown in Table 10. The range of spectral bands corresponds to the description in Table 5. Based on the distribution of sentence lengths in the datasets (refer to Figure 2), we set the maximum length $L$ to 100 for each dataset and place sentences of similar length in one batch, with a batch size of 32; each batch is processed according to the maximum sentence length within it. For simplicity, we did not design specific spectral bands for different sentence lengths; instead, we set the spectral bands based on the maximum sentence length $L$ in each dataset. We only change the hyperparameter 'Bands' settings, while all other settings remain the same (for the specific experimental settings, refer to Section 4.2). It can be observed that different spectral band selections indeed lead to different results, and an appropriate heuristic spectral band selection can significantly improve the results.

B The Settings of AsD analysis

Here, we provide a detailed introduction to the relative distance calculation $dist(n_1, n_2)$ for AsD. For a given sentence $S_i$, with its aspect words $A = \langle a_1, \dots, a_p \rangle$, sentiment word set $C = \langle c_1, \dots, c_q \rangle$, and the adjacency matrix $A_G$ of the induced graph structure, we calculate the shortest number of hops from $a_p$ to $c_q$. If the value at the position corresponding to $a_p$ and $c_q$ in $A_G$ is greater than a threshold $\gamma$, we define the distance between $a_p$ and $c_q$ as 1; otherwise, we take the length of the shortest path between $a_p$ and $c_q$ on $A_G$ as their distance. The same $\gamma$ is used to judge whether an edge exists between two nodes, and it is set to the average of all values in $A_G$. If $a_p$ and $c_q$ are not connected, we set their distance to the maximum number of hops, which is set to 10.
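The procedure above amounts to a breadth-first search over the thresholded adjacency matrix. A minimal sketch (function and variable names are illustrative, not from the released code):

```python
from collections import deque
import numpy as np

def asd_distance(A_G, a, c, max_hops=10):
    """Shortest number of hops between aspect token a and sentiment token c
    on the induced graph. An edge exists where A_G[i, j] > gamma, with
    gamma set to the mean of all values in A_G."""
    gamma = A_G.mean()
    adj = A_G > gamma                 # thresholded boolean adjacency
    n = A_G.shape[0]
    dist = {a: 0}
    q = deque([a])
    while q:                          # standard BFS over the graph
        u = q.popleft()
        if u == c:
            return dist[u]
        for v in range(n):
            if adj[u, v] and v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return max_hops                   # disconnected: fall back to the max hop count
```

Averaging this distance over all aspect-sentiment pairs and sentences yields the AsD scores reported in Table 7.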

C Automatic Frequency Selector (AFS)

Furthermore, it is not guaranteed that only the information in a single band (e.g., HIGH, MID-HIGH, etc.) is helpful; information in other bands may also be beneficial. With this in mind, we introduce an Automatic Frequency Selector (AFS) to select helpful frequency components along with the optimization of the overall model.

To achieve this goal, we design the Frequency Selection operation under a probabilistic scenario $\Upsilon$. To be specific, we map each frequency component $f$ into a Bernoulli parameter space, employing a Multi-layer Perceptron (MLP) architecture to parameterize this mapping. Firstly, we introduce a set of learnable parameters $\xi \in \mathbb{R}^{k\times d_k}$ to parameterize the frequency components, where $k$ denotes the number of frequency components and $d_{k}$ denotes the dimension of component representations. Then, we utilize the MLP architecture (composed of two linear projection layers $Proj_{1}$ and $Proj_{2}$, and an activation function $\sigma$, i.e., ReLU) to map the frequency components $\xi$ into the Bernoulli parameter space:

$$z_{B} = \mathrm{MLP}(\xi) = \operatorname{Proj}_{2}\left(\sigma\left(\operatorname{Proj}_{1}(\xi)\right)\right), \tag{9}$$

$$\xi_{B} = \varphi\left(\left(z_{B} - \log(-\log(\epsilon))\right) / \tau\right) \tag{10}$$

where $\xi_B$ denotes the success probabilities of Bernoulli distributions, and $\varphi$ denotes the Softmax function. We utilize the Gumbel reparameterization (Jang et al., 2017; Maddison et al., 2017) to address the differentiability issue during training, where $\epsilon \sim \mathcal{U}(0,1)$ is a random variable drawn from the uniform distribution on the interval $(0,1)$. The hyperparameter $\tau \rightarrow 0$ is the annealing temperature, which is progressively adjusted toward zero in practice. Next, we can obtain the values of the Bernoulli random variables $m_B \sim \mathrm{Bern}(\xi_B)$, where $m_B \in \{0,1\}^k$ and $\mathrm{Bern}$ denotes the Bernoulli distribution. During the non-training phase, we set a hyperparameter threshold $\gamma$ to control the sparsity of $m_B$ (for the Rest14 dataset, $\gamma$ is set to 0.65; for the other two datasets, it is set to 0.75).

Subsequently, for the $i$-th and $j$-th word representations $\mathbf{h}_i \in \mathbb{R}^d$ and $\mathbf{h}_j \in \mathbb{R}^d$, we can calculate the pair-wise edge score $e_{ij}$ as follows:

$$\Phi^{afs}(x) = \mathcal{F}^{-1}\left(\Upsilon(\mathcal{F}(x))\right), \tag{11}$$

$$e_{ij} = \Phi^{afs}(\mathbf{W}_{i}\mathbf{h}_{i})\,\Phi^{afs}(\mathbf{W}_{j}\mathbf{h}_{j})^{\top}, \tag{12}$$

where $\Upsilon$ indicates the Frequency Selection operation, and $\Phi^{afs}$ denotes the Automatic Frequency Selector (AFS). Subsequent operations are consistent with Section 3.1.
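Equations (11)-(12) amount to filtering two linear projections of the word representations and taking their pairwise inner products. A minimal NumPy sketch, with an identity function standing in for the filter $\Phi^{afs}$ (the variable names are illustrative):

```python
import numpy as np

def edge_scores(H, W_i, W_j, phi):
    """Pairwise edge scores e_ij = phi(h_i W_i) . phi(h_j W_j)^T.
    H: (n, d) word representations; phi: frequency filter (n, d) -> (n, d)."""
    left = phi(H @ W_i)     # filtered query-side projections
    right = phi(H @ W_j)    # filtered key-side projections
    return left @ right.T   # (n, n) edge-score matrix

n, d = 5, 8
rng = np.random.default_rng(0)
H = rng.standard_normal((n, d))
W = rng.standard_normal((d, d))
E = edge_scores(H, W, W, phi=lambda x: x)  # identity filter for illustration
```

Plugging an actual DFT-based filter in for `phi` recovers the AFS edge scores; with shared projections and the identity filter, the score matrix is symmetric.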

Table 10: Detailed results of the band impact based on RoBERTa$_{\text{base}}$ for heuristic frequency selection. For a real sequence, the spectrum obtained by the Discrete Fourier Transform is symmetric, so we only take half of the spectral bands for analysis. Negative values indicate that the frequency is selected from the high-frequency band, and positive values mean that the frequency is selected from the low-frequency band. Additionally, $x \rightarrow y$ means that the frequency selection is between the two values ($x$ and $y$). The values in bold indicate superior performance compared to the Attn. method.

| Filter | Bands | Rest14 Accuracy | Rest14 Macro-F1 | Laptop14 Accuracy | Laptop14 Macro-F1 | Twitter Accuracy | Twitter Macro-F1 |
|---|---|---|---|---|---|---|---|
| HIGH | -1 | 87.32 | 80.76 | **84.48** | **81.54** | 75.43 | **74.88** |
| | -2 | 87.32 | 80.79 | **84.17** | **81.13** | 75.72 | **74.45** |
| | -3 | 87.23 | 81.56 | 83.86 | **81.20** | **76.01** | **74.34** |
| | -4 | 86.88 | 80.44 | **84.01** | **81.34** | 75.43 | **74.78** |
| | -5 | **87.77** | 81.62 | 83.54 | 80.53 | **76.30** | **75.00** |
| | -6 | **87.77** | 81.71 | 82.76 | 79.93 | 75.58 | **74.34** |
| | -8 | **87.77** | 80.74 | **84.80** | **82.27** | **76.16** | **75.52** |
| | -10 | 87.05 | 80.79 | 83.86 | **81.37** | 75.58 | **74.41** |
| | -12 | **87.77** | 80.74 | **84.48** | **81.38** | **75.87** | **74.45** |
| | -14 | **87.75** | **81.86** | **84.80** | **82.21** | 75.43 | **74.69** |
| | -16 | **88.57** | **82.95** | **84.32** | **81.87** | **76.45** | **75.46** |
| | -18 | 86.43 | 79.26 | 83.54 | **80.54** | 75.43 | **74.08** |
| | -20 | **88.13** | **82.33** | **84.01** | **81.06** | **76.01** | **75.23** |
| | -22 | **88.57** | **83.27** | **84.48** | **81.82** | 75.58 | **74.91** |
| | -24 | 87.14 | 80.63 | **84.17** | **81.65** | **76.30** | **75.18** |
| | -26 | 87.50 | 80.85 | **84.64** | **82.04** | **76.01** | **74.46** |
| MID-HIGH | 8 → 10 | **88.21** | **82.41** | **84.48** | **81.90** | 74.57 | **74.19** |
| | 8 → 11 | **87.86** | 81.69 | **85.42** | **83.01** | 75.29 | **74.59** |
| | 8 → 12 | 87.50 | 80.66 | 83.39 | 80.49 | 75.29 | **74.68** |
| | 8 → 13 | 87.23 | 80.13 | 83.86 | **81.06** | **76.88** | **75.70** |
| | 8 → 14 | 86.88 | 80.75 | **84.48** | **81.70** | 75.72 | **74.90** |
| | 8 → 16 | **87.95** | 81.69 | 83.70 | **80.92** | **77.02** | **75.84** |
| | 8 → 18 | 87.50 | **82.16** | **85.27** | **82.67** | 75.72 | **74.48** |
| | 8 → 20 | **88.48** | **83.32** | 83.70 | **80.81** | 75.14 | 73.63 |
| | 8 → 22 | 87.05 | 79.81 | 83.54 | 80.50 | **76.45** | **75.16** |
| | 8 → 24 | 86.88 | 80.53 | **84.33** | **81.65** | 75.00 | 73.62 |
| MID-LOW | 4 → 5 | 86.96 | 80.50 | **84.01** | **81.14** | **76.45** | **75.50** |
| | 4 → 6 | 87.14 | 80.40 | 83.70 | **81.05** | **76.59** | **75.61** |
| | 4 → 7 | 87.14 | 81.71 | **84.33** | **82.10** | **77.02** | **75.64** |
| | 4 → 8 | **87.68** | **81.99** | 82.92 | 79.72 | **76.87** | **75.82** |
| LOW | 1 | 87.41 | 81.27 | 83.39 | 80.44 | **76.16** | **75.03** |
| | 2 | **87.86** | 81.06 | 83.39 | **80.55** | **76.15** | **75.16** |
| | 3 | 87.23 | 80.51 | 83.70 | **80.80** | **76.45** | **74.90** |
| | 4 | 86.96 | 80.14 | **84.01** | **81.64** | **75.87** | **74.65** |
| Attn. | - | 87.59 | 81.72 | 83.86 | 80.53 | 75.72 | 73.92 |