Title: Emphasising Structured Information: Integrating Abstract Meaning Representation into LLMs for Enhanced Open-Domain Dialogue Evaluation

URL Source: https://arxiv.org/html/2404.01129

Bohao Yang 1†, Kun Zhao 2†, Dong Liu 3, Chen Tang 4, Liang Zhan 2‡, Chenghua Lin 1‡ († Equal contribution; ‡ Corresponding authors)

1 The University of Manchester 2 University of Pittsburgh

3 Tencent Timi Studio 4 University of Surrey

dougliu@tencent.com {kun.zhao, liang.zhan}@pitt.edu

bohao.yang-2@postgrad.manchester.ac.uk chenghua.lin@manchester.ac.uk,

Abstract

Automatic open-domain dialogue evaluation has attracted increasing attention, yet remains challenging due to the complexity of assessing response appropriateness. Traditional evaluation metrics, typically trained with true positive and randomly selected negative responses, tend to assign higher scores to responses that share greater content similarity with contexts. However, adversarial negative responses, despite possessing high lexical overlap with contexts, can be semantically incongruous. Consequently, existing metrics struggle to effectively evaluate such responses, resulting in low correlations with human judgments. While recent studies have demonstrated the effectiveness of Large Language Models (LLMs) for open-domain dialogue evaluation, they still face challenges in handling adversarial negative examples. We propose a novel evaluation framework that integrates Abstract Meaning Representation (AMR) enhanced domain-specific language models (SLMs) with LLMs. Our SLMs explicitly incorporate AMR graph information through a gating mechanism for enhanced semantic representation learning, while both SLM predictions and AMR knowledge are integrated into LLM prompts for robust evaluation. Extensive experiments on open-domain dialogue evaluation tasks demonstrate the superiority of our method compared to state-of-the-art baselines. Our comprehensive ablation studies reveal that AMR graph information contributes substantially more to performance improvements. Our framework achieves strong correlations with human judgments across multiple datasets, establishing a new benchmark for dialogue evaluation. Our code and data are publicly available at https://github.com/Bernard-Yang/SIMAMR.


1 Introduction

Open-domain dialogue systems have garnered substantial attention owing to their broad applicability (Zhao et al., 2023; Liu et al., 2023) across various domains, including personal medical assistance and biomedical telecommunications (Sai et al., 2020; Yang et al., 2024b). Traditional evaluation approaches, such as n-gram-based metrics (Papineni et al., 2002; Lin, 2004; Banerjee and Lavie, 2005) and embedding-based metrics (Zhang et al., 2020), assess the semantic similarity between response candidates and gold references. These methods correlate poorly with human evaluation due to their limited capacity to incorporate conversational context (Liu et al., 2016).


Figure 1: AMR graphs for the conversational context and response. The semantic relationship of the word “worth” appearing in both context and response is captured through distinct colored representations in their respective AMR graphs.

While recent advances in trainable evaluation frameworks (Lowe et al., 2017; Tao et al., 2018) have improved context-response relationship modeling, they face fundamental limitations stemming from their training data. These models, typically trained with true positive and randomly sampled negative examples, tend to assess responses primarily through surface-level content similarity. Although some approaches have attempted to address this by incorporating adversarial examples (Sai et al., 2020; Gupta et al., 2021), they either require extensive pre-training on large-scale conversational corpora or demand adaptation to specific datasets, incurring substantial computational overhead. Moreover, their exclusive reliance on surface-form features compromises robustness when evaluating adversarial examples that deviate from the training distribution. The vulnerability to adversarial attacks further compounds this challenge. Jin et al. (2019) demonstrated that even simple synonym substitutions can lead to misclassification in text analysis tasks. For instance, a positive review stating “The characters, cast in impossibly contrived situations, are totally estranged from reality” would be misclassified as negative when minimally modified to “The characters, cast in impossibly engineered circumstances, are fully estranged from reality”, despite maintaining semantic equivalence.

Recent advances in Large Language Models (LLMs) have shown promise across a variety of tasks (Yang et al., 2023; Liu et al., 2023; Yang et al., 2025; Chiang and Lee, 2023). However, these models still exhibit suboptimal performance when evaluating adversarial negative responses. To address these limitations, we propose integrating LLMs with domain-specific language models (SLMs) enhanced by Abstract Meaning Representation (AMR) graph information, specifically aimed at improving evaluation robustness for adversarial examples. AMR graphs serve as powerful tools for capturing dialogue system states and providing complementary semantic knowledge (Bai et al., 2021; Bonial et al., 2020). Consider the following example: given the context “Would you recommend some places for sightseeing? How about great canyon? Is it worth seeing?”, and an adversarial negative response “The movie was really good, it was worth watching it”, existing metrics might erroneously classify this as positive due to lexical overlap. AMR graphs help address this by modeling semantic relationships between concepts (e.g., “worth” and “canyon”) through explicit edge relations (e.g., “:mod” and “:ARG1”).
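
To make this failure mode concrete, the toy function below (our own illustration, not part of the paper's method; the tokenisation is deliberately naive) scores the adversarial response above by surface overlap alone, showing how a semantically incongruous response can still receive a non-trivial similarity signal:

```python
def lexical_overlap(context, response):
    """Fraction of response tokens that also occur in the context: a toy
    stand-in for the surface-similarity signal that misleads metrics on
    adversarial negatives (whitespace tokenisation, no stemming)."""
    ctx = set(context.lower().split())
    resp = response.lower().split()
    return sum(tok in ctx for tok in resp) / len(resp)

context = ("would you recommend some places for sightseeing? "
           "how about great canyon? is it worth seeing?")
adversarial = "The movie was really good, it was worth watching it"
score = lexical_overlap(context, adversarial)  # "it", "worth", "it" match
```

Here 3 of the 10 response tokens ("it", "worth", "it") appear in the context, so the off-topic response still scores 0.3 on pure overlap; the AMR graphs disambiguate because "worth" attaches to different arguments in the two graphs.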

Our approach introduces an AMR graph-enhanced SLM that effectively identifies adversarial negative examples in open-domain dialogue. The framework integrates both the SLM’s predictions and AMR graph information into the LLM’s prompt, creating a robust automatic evaluator that leverages domain-specific knowledge during inference. The SLM architecture comprises two key components: sentence and graph encoders. The sentence encoder processes surface-form knowledge from conversational contexts and responses, while the graph encoder models AMR structural information, capturing both conceptual elements and their interrelations. These complementary representations are unified through a sophisticated gating mechanism and optimised via contrastive learning, encouraging alignment between textual and structural features for positive context-response pairs. The final evaluation integrates both the SLM’s prediction score and AMR graph information into the LLM’s prompt.

Comprehensive empirical evaluation across three public datasets demonstrates our model’s superior performance compared to state-of-the-art baselines, including LLM-based methods. Our key contributions include:


  • A novel framework integrating AMR graph information into open-domain dialogue evaluation through a dual-representation approach that combines specialized SLMs with LLMs.
  • A comprehensive evaluation methodology across four distinct criteria (Naturalness, Coherence, Engagingness, and Groundedness), with detailed performance breakdowns demonstrating consistent improvements across all dimensions.
  • Extensive experimental results demonstrating substantial improvements over existing methods, including reasoning-focused LLMs, with ablation studies revealing that AMR graph information contributes 7.4% more to performance than the SLM score alone.

2 Related Work

Dialogue Evaluation Metrics. Traditional n-gram-based metrics, including BLEU (Papineni et al., 2002), ROUGE (Lin, 2004), and METEOR (Banerjee and Lavie, 2005), compute lexical overlap between response candidates and gold references. More sophisticated embedding-based metrics, such as Extrema (Forgues and Pineau, 2014) and BERTScore (Zhang et al., 2020), first project responses and references into high-dimensional semantic spaces before calculating their similarity. However, both approaches have shown limited efficacy in evaluating open-domain dialogue systems (Liu et al., 2016).

Regarding trainable metrics, RUBER (Tao et al., 2018) evaluates response quality by measuring semantic similarity between the generated response, dialogue context, and ground truth reference. Sai et al. (2020) introduced DEB, which leverages a BERT model pre-trained on large-scale Reddit conversations. While effective, the computational cost of pre-training on extensive datasets makes this approach less practical. Similarly, Mask-and-fill (Gupta et al., 2021) employs a Speaker-Aware BERT architecture (Gu et al., 2020) to enhance dialogue understanding, though it requires dataset-specific adaptation before fine-tuning. Zhang et al. (2021) developed MDD-Eval for cross-domain dialogue evaluation, but this method necessitates human annotations and additional training data while failing to address adversarial negative examples.

LLM-based Evaluators. The emergence of Large Language Models (LLMs) has enabled new approaches to dialogue evaluation. Fu et al. (2023) developed GPTScore, leveraging pre-trained language models for multi-aspect, customizable evaluation without task-specific training. Wang et al. (2023) empirically validated the effectiveness of LLM-based evaluation approaches. Kocmi and Federmann (2023) demonstrated the utility of GPT models in machine translation evaluation. Liu et al. (2023) introduced G-Eval, employing GPT-4 across multiple generation tasks including dialogue response, text summarization, data-to-text generation, and machine translation. Chan et al. (2023) proposed ChatEval, a multi-agent debate framework that surpasses single-LLM evaluators in performance. However, these LLM-based approaches have yet to be applied to evaluating adversarial negative responses incorporating non-textual domain knowledge.

3 Methodology


Figure 2: The architecture of the proposed model. The left part is the SLM architecture, containing two encoders and the gate mechanism for encoding and fusing the sequence and AMR graph information of context-response pairs. The right part is the LLM where the prompt contains the prediction score of the SLM and AMR graph information.

3.1 Task Description

Our model operates on input tuples consisting of a dialogue context $\mathcal{C}$, a response $\mathcal{R}$, and their corresponding AMR graphs $\mathcal{G_C}$ and $\mathcal{G_R}$. The primary objective of the SLM component is to perform binary classification, predicting a label $\mathcal{Y} \in \{0, 1\}$ for each response, where 0 and 1 denote negative and positive responses, respectively.

The SLM generates a classification confidence score defined as:

$$\mathrm{Score_{SLM}} = P(\mathcal{Y} \mid \mathcal{C}, \mathcal{R}, \mathcal{G_C}, \mathcal{G_R}) \quad (1)$$

The derived confidence score, in conjunction with the semantic structural information encoded in the AMR graphs $\mathcal{G_C}$ and $\mathcal{G_R}$, is incorporated into the LLM’s prompt. This integration enables the LLM to leverage both statistical confidence and explicit semantic knowledge for more robust open-domain dialogue evaluation.

3.2 Overall Architecture

Figure 2 illustrates the comprehensive architecture of our proposed framework, which seamlessly integrates SLM and LLM components. The SLM architecture incorporates a dual-encoder design: a sequence encoder for processing textual information and a graph encoder specialized in AMR graph representation learning. The complementary representations from these encoders are dynamically balanced through an adaptive gating mechanism, which modulates the information flow from both sources.

To optimise the alignment between textual and structural representations, particularly for positive response pairs, we employ a contrastive learning strategy during the training phase. This approach minimizes the representational distance between sentence and graph embeddings for semantically coherent pairs, while maintaining appropriate separation for negative examples.

The final evaluation framework leverages both the SLM’s classification confidence score $\mathrm{Score_{SLM}}$ and the structured AMR graph information, which are systematically integrated into the LLM’s prompt through a carefully designed template. This multi-modal integration enables the LLM to synthesize both statistical and semantic evidence for more robust dialogue evaluation.

The complementary nature of SLM and LLM integration stems from their distinct capabilities: while the SLM excels at encoding structured graph information through specialized transformers, LLMs offer superior contextual reasoning but lack native graph processing abilities. As shown in our attention analysis in Appendix A.2, the SLM’s graph encoder can identify semantic inconsistencies in adversarial examples that may be missed by text-only representations. By combining these approaches, our framework leverages both structured semantic knowledge and advanced reasoning capabilities.

3.3 Sequence Encoder

The sequence encoder employs a standard Transformer architecture (Vaswani et al., 2017) to process the input dialogue components. Given a dialogue context $\mathcal{C} = \{w_1, w_2, \ldots, w_{\mathcal{C}}\}$ and a response $\mathcal{R} = \{w_1, w_2, \ldots, w_{\mathcal{R}}\}$, where $w_i$ denotes the $i$-th token and $\mathcal{C}$, $\mathcal{R}$ also denote the respective sequence lengths, the encoder generates a sentence representation $\mathbf{H}_S$. The encoding process can be formally expressed as:

$$\mathbf{H}_S = \operatorname{SeqEncoder}(\mathcal{C}, \mathcal{R}) \quad (2)$$
$$h_i = \sum_{j=1}^{\mathcal{C}+\mathcal{R}} \alpha_{ij} \left(W^H h_j\right) \quad (3)$$
$$\alpha_{ij} = \operatorname{Attention}(h_i, h_j) \quad (4)$$

where $\mathbf{H}_S = \{h_1, h_2, \ldots, h_{\mathcal{C}+\mathcal{R}}\}$ represents the sequence of hidden states and $W^H$ denotes the transformation matrix.
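
As a minimal sketch of the attention computation in Eqs. (3)-(4), the pure-Python snippet below uses identity projections in place of the learned matrices ($W^H$ and the query/key projections inside $\operatorname{Attention}$), which a real encoder would learn; it shows only the weighted-sum structure of the update:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def self_attention(H, d):
    """Single-head scaled dot-product self-attention over hidden states H
    (Eqs. 3-4 with identity projections): each output h_i is a convex
    combination of all states, weighted by attention scores alpha_ij."""
    out = []
    for h_i in H:
        scores = [sum(a * b for a, b in zip(h_i, h_j)) / math.sqrt(d)
                  for h_j in H]
        alphas = softmax(scores)  # alpha_ij, rows sum to 1
        out.append([sum(a * h_j[k] for a, h_j in zip(alphas, H))
                    for k in range(d)])
    return out

# three toy 2-d hidden states standing in for context + response tokens
H = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
H_S = self_attention(H, d=2)
```

Because each output row is a convex combination of the inputs, every component of `H_S` stays within the range spanned by the input states.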

3.4 Graph Encoder

For modeling AMR graph structures, we utilise the Graph Transformer (Zhu et al., 2019), an extension of the standard Transformer that specialises in graph-structured data. An AMR graph $\mathcal{G} = \langle \mathcal{V}, \mathcal{E} \rangle$ comprises nodes $\mathcal{V}$ and edges $\mathcal{E}$, where each edge $e \in \mathcal{E}$ is represented as a triple $\langle n_i, r_{ij}, n_j \rangle$ denoting the relation $r_{ij}$ between nodes $n_i$ and $n_j$. The graph encoding process is defined as:

$$\mathbf{H}_A = \operatorname{GraphEncoder}(\mathcal{V}, \mathcal{E}) \quad (5)$$
$$h'_i = \sum_{j=1}^{M} \hat{\alpha}_{ij} \left(W^V h'_j + W^R \boldsymbol{r}_{ij}\right) \quad (6)$$

where $\mathbf{H}_A = \{h'_1, h'_2, \ldots, h'_M\}$ represents the graph embeddings, and $W^V$, $W^R$ are learnable transformation matrices.

The graph attention mechanism, which distinguishes the Graph Transformer from standard Transformers, is computed as:

$$\hat{\alpha}_{ij} = \frac{\exp\left(\hat{e}_{ij}\right)}{\sum_{m=1}^{M} \exp\left(\hat{e}_{im}\right)}, \qquad \hat{e}_{ij} = \frac{\left(W^Q h'_i\right)^T \left(W^K h'_j + W^R \boldsymbol{r}_{ij}\right)}{\sqrt{d}} \quad (7)$$

where $W^Q$, $W^K$ are transformation matrices and $d$ is the dimensionality of the hidden states.
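
The distinctive part of Eqs. (6)-(7) is that the key and value for node $j$ are shifted by the relation embedding $\boldsymbol{r}_{ij}$, so the same neighbour contributes differently under ":ARG1" than under ":mod". A toy sketch with identity projections in place of the learned $W^Q$, $W^K$, $W^V$, $W^R$ (toy values, not the paper's parameters):

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def graph_attention(H, R, d):
    """Relation-aware self-attention (Eqs. 6-7, identity projections):
    H is a list of node embeddings, R[i][j] the relation embedding r_ij
    for the ordered node pair (i, j). Keys and values are both the
    relation-shifted neighbour h_j + r_ij."""
    out = []
    for i, h_i in enumerate(H):
        keys = [[h + r for h, r in zip(H[j], R[i][j])]
                for j in range(len(H))]
        scores = [sum(q * k for q, k in zip(h_i, key)) / math.sqrt(d)
                  for key in keys]
        alphas = softmax(scores)  # hat-alpha_ij of Eq. (7)
        out.append([sum(a * key[k] for a, key in zip(alphas, keys))
                    for k in range(d)])
    return out

# two AMR nodes; R holds hypothetical relation embeddings per ordered pair
H = [[1.0, 0.0], [0.0, 1.0]]
R = [[[0.0, 0.0], [0.5, 0.0]],
     [[0.5, 0.0], [0.0, 0.0]]]
H_A = graph_attention(H, R, d=2)
```

Swapping the relation embeddings in `R` changes the attention scores even though the node embeddings are unchanged, which is precisely what lets the encoder distinguish edge labels.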

3.5 Aggregation Gate

To effectively combine the complementary information from both sequence and graph representations, we implement an adaptive gating mechanism. Given the sentence representation $\mathbf{H}_S$ and graph representation $\mathbf{H}_A$, the gate value $g_i$ is computed as:

$$g_i = \sigma\left(W^G \mathbf{H}_S + b_g\right) \quad (8)$$
$$\hat{\mathbf{H}} = g_i \mathbf{H}_S + \left(1 - g_i\right) \mathbf{H}_A \quad (9)$$

where $W^G$, $b_g$ are learnable parameters, and $\hat{\mathbf{H}}$ represents the final fused representation.
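
A minimal sketch of the gate in Eqs. (8)-(9), with a scalar gate and hand-picked toy weights in place of the learned $W^G$, $b_g$ (in the model these are trained and the representations are full matrices):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_fusion(h_s, h_a, w_g, b_g):
    """Aggregation gate of Eqs. (8)-(9): a sigmoid gate computed from the
    sentence representation h_s interpolates between the sentence and
    graph features, so the fused vector is a convex combination."""
    g = sigmoid(sum(w * h for w, h in zip(w_g, h_s)) + b_g)   # Eq. (8)
    fused = [g * s + (1.0 - g) * a for s, a in zip(h_s, h_a)]  # Eq. (9)
    return fused, g

h_s = [0.2, 0.8]  # toy sentence features
h_a = [0.6, 0.1]  # toy graph features
fused, g = gated_fusion(h_s, h_a, w_g=[1.0, 1.0], b_g=0.0)
```

Because the gate is a sigmoid, $g \in (0, 1)$ and each fused component lies between the corresponding sentence and graph values, so neither information source is ever discarded entirely.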

3.6 Training objectives and Evaluation

The fused representation $\hat{\mathbf{H}}$ is used to predict the classification probability for the context-response pair:

$$\mathrm{Score_{SLM}} = \operatorname{softmax}\left(W^F \hat{\mathbf{H}} + b_f\right) \quad (10)$$

The training objective combines classification and contrastive learning:

$$\mathcal{L} = \mathcal{L}_{cls} + \mathcal{L}_C \quad (11)$$
$$\mathcal{L}_{cls} = -\log P(\mathcal{Y} = 1 \mid \hat{\mathbf{H}}) \quad (12)$$

The contrastive loss $\mathcal{L}_C$ facilitates alignment between sentence and graph representations:

$$\mathcal{L}_C = -\frac{1}{N} \sum_{i=1}^{N} \frac{e^{\operatorname{sim}\left(\mathbf{H}_S^+, \mathbf{H}_A^+\right)}}{\sum_j e^{\operatorname{sim}\left(\mathbf{H}_S^-, \mathbf{H}_A^-\right)}} \quad (13)$$

where $\mathbf{H}_S^+$, $\mathbf{H}_A^+$ denote positive pair representations and $\mathbf{H}_S^-$, $\mathbf{H}_A^-$ represent negative pairs.
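
The snippet below sketches the combined objective of Eqs. (11)-(13) for a single example, under an InfoNCE-style reading of Eq. (13) (positive similarity in the numerator against positives plus negatives in the denominator; cosine similarity and the toy vectors are our assumptions, not the paper's exact formulation):

```python
import math

def cosine(u, v):
    """Cosine similarity, standing in for sim(.,.) in Eq. (13)."""
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return sum(a * b for a, b in zip(u, v)) / (nu * nv)

def contrastive_loss(h_s_pos, h_a_pos, h_a_negs):
    """InfoNCE-style reading of Eq. (13): pull the sentence embedding of a
    positive pair toward its graph embedding, push it away from the
    graph embeddings of negative pairs."""
    pos = math.exp(cosine(h_s_pos, h_a_pos))
    negs = sum(math.exp(cosine(h_s_pos, h_a)) for h_a in h_a_negs)
    return -math.log(pos / (pos + negs))

def total_loss(p_correct, h_s_pos, h_a_pos, h_a_negs):
    """Eq. (11): cross-entropy classification term plus contrastive term."""
    l_cls = -math.log(p_correct)  # Eq. (12)
    return l_cls + contrastive_loss(h_s_pos, h_a_pos, h_a_negs)

# toy embeddings: an aligned positive pair and one misaligned negative graph
loss = total_loss(0.9, [1.0, 0.0], [0.9, 0.1], [[-1.0, 0.2]])
```

Both terms are non-negative, and the contrastive term shrinks as the positive sentence and graph embeddings align while negatives point away.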

The final evaluation score integrates the SLM prediction score $\mathrm{Score_{SLM}}$ and AMR graph information $\mathcal{G}$ through the LLM’s prompt.

$$\mathrm{Score} = \mathrm{LLM}\left(\mathrm{Score_{SLM}}, \mathcal{G}\right) \quad (14)$$
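
A hypothetical prompt template realising the integration in Eq. (14) might look as follows; the wording, field names, and the hand-written AMR fragments are our illustration, not the paper's actual template:

```python
def build_prompt(context, response, slm_score, amr_context, amr_response):
    """Assemble an evaluation prompt that injects the SLM confidence and
    both AMR graphs alongside the dialogue pair (illustrative template)."""
    return (
        "Rate the appropriateness of the response on a scale of 1-5.\n"
        f"Context: {context}\n"
        f"Response: {response}\n"
        f"Context AMR: {amr_context}\n"
        f"Response AMR: {amr_response}\n"
        f"A domain-specific classifier scored this response {slm_score:.2f} "
        "(probability of being an appropriate response).\n"
        "Score:"
    )

# illustrative AMR fragments: "worth" takes different :ARG1 arguments
prompt = build_prompt(
    "Is the great canyon worth seeing?",
    "The movie was worth watching.",
    0.12,
    "(w / worth-02 :ARG1 (c / canyon))",
    "(w / worth-02 :ARG1 (m / movie))",
)
```

The resulting string would then be sent to GPT-3.5-turbo or GPT-4 via the usual chat API; the low SLM score and the mismatched `:ARG1` arguments give the LLM explicit evidence that the response is adversarial.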


Table 1: Pearson and Spearman correlations with human judgments on the DailyDialog++ dataset. The figures in parentheses are p-values.

4 Experiments

4.1 Dataset

We conduct experiments on three widely recognised open-domain dialogue datasets: DailyDialog++ (Sai et al., 2020), PersonaChat (Zhang et al., 2018), and TopicalChat (Gopalakrishnan et al., 2019). DailyDialog++ is particularly noteworthy as it is the sole publicly available dataset containing human-crafted adversarial negative responses. Each context is paired with three types of responses: five positive responses, five random negative responses, and five adversarial negative responses.

For PersonaChat and TopicalChat, which lack human-created adversarial responses in their original forms, we utilise the augmented datasets from Zhao et al. (2024). These enhanced datasets feature 2,000 conversational contexts, each accompanied by five positive responses and adversarial negative counterparts.

4.2 Experimental Settings

The preprocessing of AMR graph structures involves multiple stages. Initially, we employ the amrlib library (Cai and Lam, 2020) to transform each context-response pair into its corresponding AMR graph representation. Following the methodology outlined in Song et al. (2020), we subsequently process these graphs using the AMR simplifier (Konstas et al., 2017). This procedure includes error checking and therefore yields refined, accurate AMR graphs. For the LLM component, we utilise GPT-3.5-turbo and GPT-4-1106. The SLM is trained on the DailyDialog++ dataset, which comprises 9,259 dialogue contexts in the training set, 1,028 in the validation set, and 1,142 in the test set.


Table 2: Pearson and Spearman correlations with human judgments on the PersonaChat dataset.


Table 3: Pearson and Spearman correlations with human judgments on the TopicalChat dataset.

4.3 Baselines

For the word-overlap and embedding-based metrics, we select ones widely used in generative dialogue systems, including BLEU (Papineni et al., 2002), ROUGE (Lin, 2004), METEOR (Banerjee and Lavie, 2005), and BERTScore (Zhang et al., 2020). For the learning-based metrics, we compare our method with DEB (Sai et al., 2020), USR (Mehri and Eskenazi, 2020), Mask-and-fill (Gupta et al., 2021), and MDD-Eval (Zhang et al., 2021). Additionally, we select G-Eval (Liu et al., 2023), QwQ-32B (Team, 2025), Qwen2.5-7B (Yang et al., 2024a), and LLM-Eval (Lin and Chen, 2023) as the LLM-based metrics. For Qwen2.5-7B, we fine-tuned it on 12,000 dialogue examples, comprising both text and AMR-structured inputs drawn from all three datasets, ensuring no overlap with the evaluation sets.

4.4 Evaluation Set and Human Annotation

To rigorously assess our proposed metric, we establish a comprehensive evaluation protocol comprising two distinct sets: a Standard Set and an Adversarial Set.

Dataset Construction. The Standard Set encompasses positive and random negative responses, with 400 context-response pairs sourced from each of the DailyDialog++, PersonaChat, and TopicalChat datasets, totalling 1,200 samples. The random negative responses are selected from different dialogue turns to ensure contextual diversity. The Adversarial Set, designed to evaluate robustness against challenging examples, contains an additional 400 context-response pairs per dataset, featuring positive and adversarial negative responses. In aggregate, our evaluation corpus comprises 2,400 context-response pairs.

Correlation Computation. For reporting our experimental results, we compute the correlation between automated scores and human judgments separately for each of the four criteria (Naturalness, Coherence, Engagingness, and Groundedness). The reported values in Tables 1-3 represent the average correlations across all four dimensions. This approach follows standard practice in dialogue evaluation research (Mehri and Eskenazi, 2020). A detailed breakdown of performance across individual criteria is provided in Appendix C.
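
The averaging described above can be sketched as follows (illustrative scores, and a minimal Pearson implementation; the paper also reports Spearman, which follows the same aggregation):

```python
import math

def pearson(xs, ys):
    """Sample Pearson correlation coefficient between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def average_correlation(metric_scores, human_scores_by_criterion):
    """Average Pearson correlation across the four criteria, as reported
    in Tables 1-3 (criterion names from the paper, data illustrative)."""
    criteria = ("Naturalness", "Coherence", "Engagingness", "Groundedness")
    rs = [pearson(metric_scores, human_scores_by_criterion[c])
          for c in criteria]
    return sum(rs) / len(rs)

metric = [0.9, 0.2, 0.7, 0.4]  # automated scores for four responses
human = {                      # illustrative 5-point human ratings
    "Naturalness":  [5, 1, 4, 2],
    "Coherence":    [4, 2, 5, 1],
    "Engagingness": [5, 2, 4, 3],
    "Groundedness": [4, 1, 5, 2],
}
avg_r = average_correlation(metric, human)
```

Per-criterion correlations are computed first and only then averaged, so a metric cannot hide a weak dimension (e.g. Groundedness) behind strong ones.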

Human Annotation. Three qualified human evaluators, each holding at least a master’s degree in Computer Science and demonstrating full professional English proficiency, independently rated each context-response pair. Assessments were conducted using a 5-point Likert scale, where higher scores indicate superior quality. The final human annotation score for each aspect was derived by averaging across all evaluators. To ensure annotation reliability, we computed the Inter-Annotator Agreement (IAA) using Cohen’s Kappa coefficient (Cohen, 1960). The achieved average IAA score of 0.64 between annotator pairs indicates substantial agreement (0.6-0.8), validating the robustness of our human evaluation framework.
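
For reference, Cohen's kappa corrects raw agreement for the agreement expected by chance; a minimal implementation with illustrative ratings (not the paper's annotation data):

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two annotators' label sequences (Cohen, 1960):
    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
    p_e the chance agreement implied by each annotator's label marginals."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[k] * cb[k] for k in set(a) | set(b)) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# illustrative 5-point Likert ratings from two annotators
r1 = [5, 4, 2, 5, 1, 3, 4, 2]
r2 = [5, 4, 2, 4, 1, 3, 4, 2]
kappa = cohens_kappa(r1, r2)  # 7/8 observed agreement, corrected for chance
```

With three annotators, as in our setup, kappa is computed for each of the three annotator pairs and the values are averaged.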

5 Results

5.1 Evaluation Performance on Standard Set

We evaluate our model against the baselines by analysing the correlation between automated evaluation scores and human judgements across three datasets. The results presented in Table 1 to Table 3 reveal that n-gram and embedding-based baselines, which compute word overlap or semantic similarity between gold references and responses, demonstrate weak positive correlations with human annotations across two datasets. Amongst the n-gram baselines, ROUGE-L exhibits the strongest correlation. The embedding-based approach, BERTScore, whilst outperforming the n-gram baselines, still achieves suboptimal performance when compared with more sophisticated metrics. Learning-based metrics, which consider the contextual relationship between dialogue pairs, demonstrate superior overall performance. Specifically, Mask-and-fill and USR achieve better correlations than n-gram baselines, whilst DEB and MDD-Eval secure the highest correlations amongst these approaches.

Regarding LLM-based methods, G-Eval and LLM-Eval demonstrate strong performance across all three datasets. We also evaluated reasoning-focused LLMs, including QwQ-32B (via direct prompting without AMR) and Qwen2.5-7B (fine-tuned with structured data for 5 epochs). Both perform slightly better than GPT-3.5 across all datasets; in particular, the fine-tuned Qwen2.5-7B (0.3687/0.3702) outperforms GPT-3.5, demonstrating the potential of specialized reasoning models.

Our method in its basic configuration (Ours w/o LLM) achieves moderately positive correlations across the three datasets (less than 0.4). However, when integrating SLM with LLM, our approach achieves the highest overall performance on both Pearson and Spearman correlations across all datasets. Notably, our GPT-4 variant exhibits superior performance compared to all baselines, including the reasoning-focused LLMs. Through ablation studies examining the effectiveness of SLM and AMR graphs, we observe that Ours (w/o SLM) outperforms Ours (w/o AMR), which combines only LLM and SLM components, thereby validating the effectiveness of incorporating AMR graphs in open-domain dialogue evaluation.

5.2 Evaluation Performance on Adversarial Set

To evaluate our method’s capability in evaluating adversarial negative examples, we conduct comparative analyses against baseline approaches on the adversarial set. Tables 1 to 3 present the correlation results between automated metrics and human judgements. The n-gram and embedding-based metrics exhibit weakly positive correlations with human judgements, primarily due to their inherent limitation of solely comparing gold references with response candidates, without considering the contextual relationships that characterise adversarial examples. Regarding learning-based approaches, USR demonstrates limited robustness against adversarial negative examples, showing only weak positive correlations with human judgements. In contrast, MDD-Eval, Mask-and-fill, and DEB achieve notably stronger performance across both Pearson and Spearman correlations.

LLM-based methods establish themselves as the strongest baseline approaches, with reasoning-focused models like QwQ-32B and fine-tuned Qwen2.5-7B showing improved performance over standard GPT-3.5. However, despite these improvements, these reasoning-focused LLMs still fall short of our full approach, suggesting that explicit semantic structure through AMR graphs provides complementary information that enhances evaluation capabilities beyond what these models can derive from text alone.

Our proposed metric consistently surpasses all baseline approaches across both correlation metrics. Specifically, Ours (GPT-4) achieves strong correlations on the adversarial set, significantly outperforming the strongest baseline, G-Eval. Similar improvements are observed in Spearman correlations across the three datasets. The ablation analysis further substantiates the benefits of our integrated approach: Ours (w/o AMR) shows notably lower correlations, demonstrating that the incorporation of AMR graph information significantly enhances the model’s ability to evaluate adversarial examples. These results comprehensively validate the effectiveness of integrating the AMR graph-enhanced SLM with LLMs for robust open-domain dialogue evaluation.

5.3 Ablation Study

We evaluate our SLM’s classification performance on the DailyDialog++ test set. As shown in Table 5, our SLM surpasses all baselines, demonstrating the effectiveness of incorporating AMR graph information. Ablation studies reveal that removing either the Graph Transformer or the Sentence Transformer component of the SLM leads to decreased performance, with the Graph Transformer alone performing marginally better than the Sentence Transformer. While removing the contrastive learning (CL) or gating mechanism (GM) shows minimal impact, the removal of AMR information results in the most significant performance drop, highlighting its crucial role in dialogue evaluation.

When comparing Ours (w/o AMR) and Ours (w/o SLM) variants, we observe that removing AMR graph information leads to a more significant performance drop than removing the SLM score, confirming that the structured semantic knowledge encoded in AMR graphs contributes more to performance improvements. However, the full model combining both components achieves the best results, demonstrating that the SLM’s specialized architecture for processing graph information and the LLM’s reasoning capabilities operate synergistically rather than redundantly.

6 Conclusion

In this paper, we present a novel evaluation framework for open-domain dialogue systems that integrates AMR graph-enhanced SLMs with LLMs. Comprehensive experimental results across multiple datasets demonstrate that our method consistently outperforms existing approaches, including state-of-the-art LLM-based methods, in the challenging task of open-domain dialogue evaluation.

Ethics Statement

Our proposed evaluation metric enhances the assessment of open-domain dialogue systems through AMR integration and contrastive learning. While the framework effectively addresses the one-to-many nature of dialogue evaluation, it may occasionally assign high scores to inappropriate responses. We recommend careful screening of training data and implementation of content filters before deployment.

Limitations

Despite demonstrating robust performance, our method primarily focuses on semantic dependencies between context and response. Following Howcroft et al. (2020), we acknowledge that human evaluation encompasses multiple attributes beyond semantic relationships. Future work should explore disentangling these various attributes to enhance model interpretability and evaluation comprehensiveness.

References

  • Bai et al. (2021) Xuefeng Bai, Yulong Chen, Linfeng Song, and Yue Zhang. 2021. Semantic representation for dialogue modeling. ArXiv, abs/2105.10188.
  • Banerjee and Lavie (2005) Satanjeev Banerjee and Alon Lavie. 2005. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In IEEvaluation@ACL.
  • Bonial et al. (2020) Claire Bonial, L. Donatelli, Mitchell Abrams, Stephanie M. Lukin, Stephen Tratz, Matthew Marge, Ron Artstein, David R. Traum, and Clare R. Voss. 2020. Dialogue-amr: Abstract meaning representation for dialogue. In International Conference on Language Resources and Evaluation.
  • Cai and Lam (2020) Deng Cai and Wai Lam. 2020. Amr parsing via graph-sequence iterative inference. ArXiv, abs/2004.05572.
  • Chan et al. (2023) Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, and Zhiyuan Liu. 2023. Chateval: Towards better llm-based evaluators through multi-agent debate. arXiv preprint arXiv:2308.07201.
  • Chiang and Lee (2023) Cheng-Han Chiang and Hung-yi Lee. 2023. Can large language models be an alternative to human evaluations? In Annual Meeting of the Association for Computational Linguistics.
  • Cohen (1960) Jacob Cohen. 1960. A coefficient of agreement for nominal scales. Educational and psychological measurement, 20(1):37–46.
  • Forgues and Pineau (2014) Gabriel Forgues and Joelle Pineau. 2014. Bootstrapping dialog systems with word embeddings.
  • Fu et al. (2023) Jinlan Fu, See-Kiong Ng, Zhengbao Jiang, and Pengfei Liu. 2023. Gptscore: Evaluate as you desire. ArXiv, abs/2302.04166.
  • Gopalakrishnan et al. (2019) Karthik Gopalakrishnan, Behnam Hedayatnia, Qinlang Chen, Anna Gottardi, Sanjeev Kwatra, Anu Venkatesh, Raefer Gabriel, and Dilek Hakkani-Tür. 2019. Topical-Chat: Towards Knowledge-Grounded Open-Domain Conversations. In INTERSPEECH.
  • Gu et al. (2020) Jia-Chen Gu, Tianda Li, Quan Liu, Xiaodan Zhu, Zhenhua Ling, Zhiming Su, and Si Wei. 2020. Speaker-aware bert for multi-turn response selection in retrieval-based chatbots. Proceedings of the 29th ACM International Conference on Information & Knowledge Management.
  • Gupta et al. (2021) Prakhar Gupta, Yulia Tsvetkov, and Jeffrey P. Bigham. 2021. Synthesizing adversarial negative responses for robust response ranking and evaluation. In Findings.
  • Howcroft et al. (2020) David M. Howcroft, Anya Belz, Miruna Clinciu, Dimitra Gkatzia, Sadid A. Hasan, Saad Mahamood, Simon Mille, Emiel van Miltenburg, Sashank Santhanam, and Verena Rieser. 2020. Twenty years of confusion in human evaluation: Nlg needs evaluation sheets and standardised definitions. In INLG.
  • Jin et al. (2019) Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2019. Is bert really robust? a strong baseline for natural language attack on text classification and entailment. In AAAI Conference on Artificial Intelligence.
  • Kocmi and Federmann (2023) Tom Kocmi and Christian Federmann. 2023. Large language models are state-of-the-art evaluators of translation quality. In European Association for Machine Translation Conferences/Workshops.
  • Konstas et al. (2017) Ioannis Konstas, Srini Iyer, Mark Yatskar, Yejin Choi, and Luke Zettlemoyer. 2017. Neural amr: Sequence-to-sequence models for parsing and generation. In Annual Meeting of the Association for Computational Linguistics.
  • Lin (2004) Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Annual Meeting of the Association for Computational Linguistics.
  • Lin and Chen (2023) Yen-Ting Lin and Yun-Nung Chen. 2023. LLM-eval: Unified multi-dimensional automatic evaluation for open-domain conversations with large language models. In Proceedings of the 5th Workshop on NLP for Conversational AI (NLP4ConvAI 2023), pages 47–58, Toronto, Canada. Association for Computational Linguistics.
  • Liu et al. (2016) Chia-Wei Liu, Ryan Lowe, Iulian Serban, Michael Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. ArXiv, abs/1603.08023.
  • Liu et al. (2023) Yang Liu, Dan Iter, Yichong Xu, Shuo Wang, Ruochen Xu, and Chenguang Zhu. 2023. G-eval: Nlg evaluation using gpt-4 with better human alignment. ArXiv, abs/2303.16634.
  • Lowe et al. (2017) Ryan Lowe, Michael Noseworthy, Iulian Serban, Nicolas Angelard-Gontier, Yoshua Bengio, and Joelle Pineau. 2017. Towards an automatic turing test: Learning to evaluate dialogue responses. ArXiv, abs/1708.07149.
  • Mehri and Eskenazi (2020) Shikib Mehri and Maxine Eskenazi. 2020. USR: An unsupervised and reference free evaluation metric for dialog generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 681–707, Online. Association for Computational Linguistics.
  • Papineni et al. (2002) Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Annual Meeting of the Association for Computational Linguistics.
  • Sai et al. (2020) Ananya B. Sai, Akash Kumar Mohankumar, Siddharth Arora, and Mitesh M. Khapra. 2020. Improving dialog evaluation with a multi-reference adversarial dataset and large scale pretraining. Transactions of the Association for Computational Linguistics, 8:810–827.
  • Song et al. (2020) Linfeng Song, Ante Wang, Jinsong Su, Yue Zhang, Kun Xu, Yubin Ge, and Dong Yu. 2020. Structural information preserving for graph-to-text generation. ArXiv, abs/2102.06749.
  • Tao et al. (2018) Chongyang Tao, Lili Mou, Dongyan Zhao, and Rui Yan. 2018. Ruber: An unsupervised method for automatic evaluation of open-domain dialog systems. In AAAI.
  • Team (2025) Qwen Team. 2025. Qwq-32b: Embracing the power of reinforcement learning.
  • Vaswani et al. (2017) Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS.
  • Wang et al. (2023) Jiaan Wang, Yunlong Liang, Fandong Meng, Haoxiang Shi, Zhixu Li, Jinan Xu, Jianfeng Qu, and Jie Zhou. 2023. Is chatgpt a good nlg evaluator? a preliminary study. ArXiv, abs/2303.04048.
  • Yang et al. (2024a) An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jingren Zhou, Junyang Lin, Kai Dang, Keming Lu, Keqin Bao, Kexin Yang, Le Yu, Mei Li, Mingfeng Xue, Pei Zhang, Qin Zhu, Rui Men, Runji Lin, Tianhao Li, Tianyi Tang, Tingyu Xia, Xingzhang Ren, Xuancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yu Wan, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, and Zihan Qiu. 2024a. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115.
  • Yang et al. (2024b) Bohao Yang, Dong Liu, Chenghao Xiao, Kun Zhao, Chen Tang, Chao Li, Lin Yuan, Guang Yang, Lanxiao Huang, and Chenghua Lin. 2024b. Crafting customisable characters with llms: Introducing simschat, a persona-driven role-playing agent framework. arXiv preprint arXiv:2406.17962.
  • Yang et al. (2023) Bohao Yang, Chen Tang, Kun Zhao, Chenghao Xiao, and Chenghua Lin. 2023. Effective distillation of table-based reasoning ability from llms. arXiv preprint arXiv:2309.13182.
  • Yang et al. (2025) Bohao Yang, Yingji Zhang, Dong Liu, André Freitas, and Chenghua Lin. 2025. Does table source matter? benchmarking and improving multimodal scientific table understanding and reasoning. arXiv preprint arXiv:2501.13042.
  • Zhang et al. (2021) Chen Zhang, L.F. D’Haro, Thomas Friedrichs, and Haizhou Li. 2021. Mdd-eval: Self-training on augmented data for multi-domain dialogue evaluation. In AAAI Conference on Artificial Intelligence.
  • Zhang et al. (2018) Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Personalizing dialogue agents: I have a dog, do you have pets too?In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2204–2213, Melbourne, Australia. Association for Computational Linguistics.
  • Zhang et al. (2020) Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with bert. ArXiv, abs/1904.09675.
  • Zhao et al. (2023) Kun Zhao, Bohao Yang, Chenghua Lin, Wenge Rong, Aline Villavicencio, and Xiaohui Cui. 2023. Evaluating open-domain dialogues in latent space with next sentence prediction and mutual information. arXiv preprint arXiv:2305.16967.
  • Zhao et al. (2024) Kun Zhao, Bohao Yang, Chen Tang, Chenghua Lin, and Liang Zhan. 2024. Slide: A framework integrating small and large language models for open-domain dialogues evaluation. arXiv preprint arXiv:2405.15924.
  • Zhu et al. (2019) Jiehan Zhu, Junhui Li, Muhua Zhu, Longhua Qian, Min Zhang, and Guodong Zhou. 2019. Modeling graph structure in transformer for better amr-to-text generation. In Conference on Empirical Methods in Natural Language Processing.

Appendix A More Experimental Results and Analysis

A.1 Case Study

To demonstrate the effectiveness of AMR graphs in identifying adversarial negative responses, we present several illustrative examples in Table 4. These cases highlight instances where responses were incorrectly classified as “positive” without AMR graph information, but were accurately identified as “negative” when incorporating semantic structural knowledge from AMR graphs. This analysis underscores the crucial role of AMR-derived semantic information in enhancing the model’s discriminative capability for challenging adversarial examples.

Table 4: Samples of context-response pairs. The bold words represent the overlapping words.

A.2 Attention Visualisation Analysis

We analyse the attention patterns of both the Sentence Transformer and the Graph Transformer in the SLM by visualising their attention heatmaps for the context-response pair shown in Figure 3.

The Sentence Transformer exhibits strong attention weights between overlapping tokens in context and response. As illustrated in Figure 3 (top), tokens such as “school” and “friend” in the response show high attention scores with their counterparts “school” and “girlfriend” in the context. In contrast, the Graph Transformer, which incorporates entity relationships through AMR structures, demonstrates different attention patterns. Figure 3 (bottom) shows that these lexically similar tokens receive lower attention weights, indicating the model’s ability to capture semantic differences beyond surface-level similarities.
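The heatmaps discussed above are built from cross-attention weight matrices between response and context tokens. As a minimal sketch (not the paper's implementation), the scaled dot-product attention of Vaswani et al. (2017) can be computed over toy token embeddings and rendered as a heatmap; the token lists and embedding dimension below are purely illustrative:

```python
import numpy as np

def attention_weights(queries: np.ndarray, keys: np.ndarray) -> np.ndarray:
    """Scaled dot-product attention weights: softmax over the key axis."""
    d_k = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d_k)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    return weights / weights.sum(axis=-1, keepdims=True)

# Toy embeddings: 3 response tokens attending to 4 context tokens.
rng = np.random.default_rng(0)
resp = rng.normal(size=(3, 8))  # e.g. ["school", "friend", "."]
ctx = rng.normal(size=(4, 8))   # e.g. ["my", "girlfriend", "at", "school"]

attn = attention_weights(resp, ctx)  # shape (3, 4); each row sums to 1

# A heatmap like Figure 3 can then be drawn with matplotlib:
# import matplotlib.pyplot as plt
# plt.imshow(attn, cmap="viridis"); plt.colorbar(); plt.show()
```

Each row of `attn` is a probability distribution over context tokens, which is why lexical overlap (a high weight between identical surface forms) is directly visible as a bright cell in the Sentence Transformer heatmap.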

Table 5: Ablation study on Dailydialog++ dataset.

Appendix B Prompt Templates

B.1 Prompt for Engagingness evaluation

Rate the dialogue response.

Use the prediction probability from the SLMs and AMR graphs of the conversation pair to aid your judgment.

Note: Please take the time to fully read and understand the dialogue response.

How dull/interesting is the text of the dialogue response? (on a scale of 1-5, with 1 being the lowest)

Input:

Conversation Context:

Response:

AMR Graph:

SLM score:

Evaluation Form (Score ONLY):

Engagingness:

B.2 Prompt for Naturalness evaluation

Rate the dialogue response.

Use the prediction probability from the SLMs and AMR graphs of the conversation pair to aid your judgment.

Note: Please take the time to fully read and understand the dialogue response.

To what extent is the response naturally written? (on a scale of 1-5, with 1 being the lowest)

Input:

Conversation Context:

Response:

AMR Graph:

SLM score:

Evaluation Form (Score ONLY):

Naturalness:

B.3 Prompt for Coherence evaluation

Rate the dialogue response.

Use the prediction probability from the SLMs and AMR graphs of the conversation pair to aid your judgment.

Note: Please take the time to fully read and understand the dialogue response.

To what extent is the response well-structured, logical, and meaningful? (on a scale of 1-5, with 1 being the lowest)

Input:

Conversation Context:

Response:

AMR Graph:

SLM score:

Evaluation Form (Score ONLY):

Coherence:

B.4 Prompt for Groundedness evaluation

Rate the dialogue response.

Use the prediction probability from the SLMs and AMR graphs of the conversation pair to aid your judgment.

Note: Please take the time to fully read and understand the dialogue response.

To what extent is the response grounded in facts present in the context? (on a scale of 1-5, with 1 being the lowest)

Input:

Conversation Context:

Response:

AMR Graph:

SLM score:

Evaluation Form (Score ONLY):

Groundedness:
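The four templates above share one skeleton and differ only in the criterion question and criterion name. A hypothetical sketch of how such a template might be filled programmatically; the field names, example dialogue, AMR graph, and SLM score below are illustrative placeholders, not values from the paper:

```python
# Shared skeleton for the four criterion-specific prompts.
PROMPT_TEMPLATE = """Rate the dialogue response.
Use the prediction probability from the SLMs and AMR graphs of the conversation pair to aid your judgment.
Note: Please take the time to fully read and understand the dialogue response.
{criterion_question} (on a scale of 1-5, with 1 being the lowest)

Input:
Conversation Context: {context}
Response: {response}
AMR Graph: {amr_graph}
SLM score: {slm_score:.2f}

Evaluation Form (Score ONLY):
{criterion_name}:"""

# Fill the Coherence variant with illustrative values.
prompt = PROMPT_TEMPLATE.format(
    criterion_question="To what extent is the response well-structured, logical, and meaningful?",
    criterion_name="Coherence",
    context="I failed my driving test again.",
    response="Don't worry, you can retake it next month.",
    amr_graph="(w / worry-01 :polarity - :ARG0 (y / you))",
    slm_score=0.87,
)
print(prompt)
```

Keeping the skeleton in one template and swapping only the criterion fields guarantees that the SLM score and AMR graph are injected identically across all four evaluation dimensions.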


Figure 3: Attention pattern visualisation for context-response analysis. Top: Sentence Transformer attention heatmap highlighting lexical-level attention patterns. Bottom: Graph Transformer attention heatmap showing semantic-aware attention distribution. Overlapping tokens between context and response (“friend” and “school”) demonstrate distinct attention behaviours in the two encoders.


Table 6: Breakdown of Pearson and Spearman correlations with human judgments by evaluation criteria on the DailyDialog++ dataset.


Table 7: Breakdown of Pearson and Spearman correlations with human judgments by evaluation criteria on the PersonaChat dataset.


Table 8: Breakdown of Pearson and Spearman correlations with human judgments by evaluation criteria on the TopicalChat dataset.

Appendix C Performance Breakdown by Evaluation Criteria

Tables 6, 7, and 8 present the correlation results broken down by individual evaluation criteria (Naturalness, Coherence, Engagingness, and Groundedness) for each dataset. This detailed analysis reveals that our method consistently outperforms baselines across all criteria, with particularly notable improvements in Coherence and Groundedness for adversarial examples. This pattern aligns with our expectation that AMR graph information would be especially beneficial for capturing semantic inconsistencies that affect contextual appropriateness.
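As a reminder of how the two correlation statistics in these tables are computed, a minimal sketch using `scipy.stats` on hypothetical per-example scores (the numbers below are illustrative and not results from the paper):

```python
from scipy.stats import pearsonr, spearmanr

# Hypothetical scores on one criterion (e.g. Coherence) for 8 responses.
human_scores = [4, 2, 5, 1, 3, 4, 2, 5]
metric_scores = [3.8, 2.4, 4.6, 1.2, 3.1, 4.2, 1.9, 4.9]

# Pearson measures linear agreement on the raw scores;
# Spearman measures monotonic agreement on the ranks.
pearson_r, pearson_p = pearsonr(human_scores, metric_scores)
spearman_rho, spearman_p = spearmanr(human_scores, metric_scores)

print(f"Pearson r = {pearson_r:.3f} (p = {pearson_p:.3g})")
print(f"Spearman rho = {spearman_rho:.3f} (p = {spearman_p:.3g})")
```

Spearman is the more robust choice when a metric's scores are well ordered but not on the same scale as the 1-5 human ratings, which is why both statistics are reported side by side.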

Appendix D Example Prompt with AMR Graph and SLM Score

In Table 9, we provide a concrete example of how the SLM score and AMR graph information are incorporated into the LLM prompt for evaluation.

Table 9: Example of prompt template showing how SLM score and AMR graph information are integrated to evaluate dialogue response coherence.
