# 1 Introduction

Relational databases serve as the foundation for data management, supported by decades of mature infrastructure development and a wide array of sophisticated analytical tools. However, much of today's data exists as raw, unstructured text – such as academic articles, medical records, and business reports (Harbert, n.d.). This unstructured data cannot be directly analyzed using conventional database tools, which rely on structured, relational inputs. Bridging this gap remains a long-standing goal of the data management community (Mansuri and Sarawagi, 2006; Smith et al., 2022a; Chu et al., 2007; Yafooz et al., 2013; Michelson and Knoblock, 2008; Murthy et al., 2012; Jain et al., 2007), with a key challenge being the conversion of unstructured text into queryable, structured formats compatible with existing relational database infrastructure. Large language models (LLMs) present a unique opportunity to automate this conversion, owing to their growing capability to understand natural language and perform complex information extraction tasks.

Prior work in this space can be broadly categorized into two areas. The first focuses on generating summarizing structures from text, such as tables (Deng et al., 2024; Wu et al., 2022; Sundar et al., 2024; Li et al., 2023; Arora et al., 2023) and mind maps (Jain et al., 2024)—but these non-relational representations are often tailored for specific downstream applications (Shavarani and Sarkar, 2025; Sui et al., 2024), and lack the expressiveness and semantics of relational databases. The second category manipulates a pre-defined and fully populated relational database—e.g., Text-to-SQL (Hong et al., 2024) approaches generate executable SQL queries from text over given schemas, while recent work can update existing relational databases using text input (Jiao et al., 2024). However, a key challenge of managing unstructured text is precisely that such a pre-defined database often does not exist. In this paper, we pursue a more ambitious goal – synthesizing a relational database from unstructured text from scratch—a task that we call Text2R.

The Text2R task presents several unique challenges. First, a relational schema consists of multiple interrelated tables that capture complex entity-relationship semantics, and it must also preserve syntactic integrity, such as satisfying primary/foreign key constraints. Second, database records must be correctly identified and populated across tables. This involves ensuring value consistency – e.g., the same entity must be consistently represented in all relevant tables. Third, the actual database creation requires valid and executable SQL statements, adding another layer of complexity. Naïve approaches, such as directly prompting LLMs to synthesize databases, lead to diverse errors, including missing or hallucinated values, and SQL syntax issues (Fig. 1).

[Fig. 1: Errors from directly prompting an LLM to synthesize a database: incomplete SQL, missing values, and hallucinated values.]

To address these challenges, we propose SQUiD, a neurosymbolic framework for the Text2R task. Our key idea is to decompose the task into multiple modular stages in a principled manner—breaking the problem into manageable sub-tasks. This allows each stage to leverage specialized techniques, such as symbolic information extraction and LLM-assisted tool use, for improved performance. Via task breakdown, some stages can also be executed programmatically, enhancing both accuracy and consistency. Additionally, each stage incorporates best practices from relational database literature to guide prompt design.

[Fig. 2: The SQUiD pipeline, which turns unstructured text into a relational database via (1) schema generation, (2) value identification, (3) table population, and (4) database materialization.]

SQUiD consists of four stages, which generalize across text from diverse domains. The schema generation stage uses LLMs to infer a relational schema from the input text, guided by carefully designed prompts that incorporate best practices to identify entities and relationships. In the value identification stage, intermediate representations in the form of triplets are extracted using both symbolic tools and LLMs. These triplets break down complex sentences into granular units, improving coverage of the extracted values. Next, the table population stage aligns these triplets with the generated schema to form schema-consistent tuples.
Finally, instead of generating SQL directly via LLMs—which can be token-intensive—our database materialization stage programmatically translates the structured outputs into valid SQL statements, ensuring syntactic correctness and structural fidelity. The resulting SQL is then executed to instantiate the final database.

We make the following contributions:

• We define a new task – synthesizing relational databases from unstructured text, or Text2R. This marks a clear departure from prior work, which focuses on downstream relational tasks (e.g., Text2SQL), assuming a pre-existing database.
• We propose SQUiD, a novel neurosymbolic framework for Text2R, based on a four-stage decomposition. Each stage leverages custom techniques tailored to its specific subtask.
• We establish an automated benchmark methodology for Text2R. We also define a suite of evaluation metrics to assess schema and tuple quality along both semantic and syntactic dimensions.
• We conduct extensive experiments across diverse text domains and show that SQUiD consistently outperforms direct prompting baselines.

# 2 The Text2R Task

We begin by defining this new task of relational database synthesis, or Text2R. Given an unstructured document $D$ of natural language text, the goal is to produce a set of SQL statements $S$: (1) CREATE TABLE statements which define the schema $\mathcal{R}$, specifying the structure of the database in terms of tables and columns; and (2) INSERT statements which populate the relations with data extracted from the text in $D$. The schema $\mathcal{R}$ consists of a set of tables $\mathbf{T} = \{T_1, T_2, \dots, T_n\}$, where each $T_i$ has a set of columns $\mathbf{C}_i = \{C_{i,1}, C_{i,2}, \dots, C_{i,k_i}\}$. Each table corresponds to an entity type, and the tables are inter-related, organizing the extracted tuples from the text into a database.
A tuple $t$ for table $T_i$ is represented as $t = \langle v_1, v_2, \ldots, v_{k_i} \rangle$, where $v_j$ is the value corresponding to column $C_{i,j} \in T_i$. Each tuple represents a unique instance of the entity described by $T_i$. Fig. 3 illustrates the differences between Text2R and other tasks.

# 3 SQUiD Framework

SQUiD decomposes the Text2R task into four modular stages that mirror the typical database construction process. First, a relational database schema is designed by identifying the domain's entities and relationships—this is the schema generation stage.

[Fig. 3: Differences between Text2R and prior approaches on the example text. Evaporate prompts yield incorrect column-table assignments and lack relationships between tables; Structsum prompts do not support SQL and also yield incorrect column-table assignments.]

Next, SQUiD extracts all the relevant values from the text (value identification), which are then used to construct tuples (table population). Finally, the generated schema and tuples are translated into valid SQL statements during the database materialization stage. We describe these stages below, using the following text shown in Fig. 2 as a running example: "Sophia booked a guided tour of Rome with BestCityTours, and opted for the premium package. She was visiting Rome on June 10th. James, aged 29, was also visiting Rome on June 10th."

# 3.1 Schema Generation

Challenge. The complexity of schema generation is both semantic and syntactic. Semantically, the schema must accurately capture the entity-relationship structure that reflects the underlying data. Syntactically, a valid schema must comply with the integrity constraints defined by the established principles of relational databases.
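To make the target output concrete, the running example's database can be sketched as executable SQL. The schema below (table and column names, and leaving Sophia's age unset since the text does not state it) is an illustrative assumption, not necessarily what SQUiD generates; we run it via Python's sqlite3:

```python
import sqlite3

# Illustrative target database for the running example; the schema and
# column names here are assumptions for the sketch.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Traveler (id INTEGER PRIMARY KEY, name TEXT, age INTEGER);
CREATE TABLE Trip (
    id INTEGER PRIMARY KEY,
    traveler_id INTEGER REFERENCES Traveler(id),  -- foreign key to Traveler
    destination TEXT,
    start_date TEXT
);
INSERT INTO Traveler VALUES (2, 'James', 29);
INSERT INTO Traveler (id, name) VALUES (1, 'Sophia');  -- age not in the text
INSERT INTO Trip VALUES (1, 1, 'Rome', 'June 10th'), (2, 2, 'Rome', 'June 10th');
""")

# The foreign key lets a JOIN recover complete entity-relationship instances.
rows = conn.execute(
    "SELECT name, destination FROM Traveler JOIN Trip"
    " ON Trip.traveler_id = Traveler.id ORDER BY Traveler.id"
).fetchall()
print(rows)  # [('Sophia', 'Rome'), ('James', 'Rome')]
```

Direct prompting must emit such statements token by token; SQUiD instead constructs them programmatically in its final stage.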
Simply prompting LLMs to generate a schema without explicitly articulating the necessary relational database constraints can result in structurally invalid outputs, as illustrated in Fig. 4.

[Fig. 4: Invalid vs. valid schema. In the invalid schema, the Trip table (ID, Destination) has no relationship to the Traveler table (ID, Name, Age). In the valid schema, Trip (ID, Traveler ID, Destination) references Traveler via the Traveler ID foreign key.]

Approach. The novelty of our approach is to encode a standardized set of rules that reflect the best practices in relational database literature, effectively guiding the model through a structured design process. These rules cover: (1) identifying relevant entities and relationships, (2) defining tables with appropriate columns, (3) assigning primary and foreign keys, and (4) avoiding reserved SQL keywords in naming tables/columns. We encode these rules into two types of prompt strategies: direct, and chain-of-thought (CoT) prompting. CoT decomposes schema generation into intermediate reasoning steps (e.g., entity identification, then table and key definition; see Appendix G). Decoupling schema generation from tuple formation has another advantage – it allows schema validity to be evaluated in isolation. This modularity is essential for enforcing syntactic constraints: each table must define a primary key (a column, or set of columns, that uniquely identifies each row); and tables should include foreign keys (columns referencing primary keys in other tables). These constraints capture relationships between tables and enable JOIN operations.

# 3.2 Value Identification

Challenge. This stage identifies and extracts values from the text that correspond to columns across all tables in the schema, presenting two challenges. First, multiple values often need to be extracted and deduplicated from the input to form a complete tuple (i.e., an entity instance).
In our example, "Sophia booked a guided tour of Rome with BestCityTours, and opted for the premium package. She was visiting Rome on June 10th.", we must recover several values, such as traveler name ("Sophia"), tour location ("Rome"), tour operator ("BestCityTours"), and date ("June 10th"); redundant mentions (e.g., "Rome") need to be detected and deduplicated. Second, a document may describe multiple instances of the same type of entity, so we need to assign each value to the correct tuple. For instance, in the passage we also have: "James, aged 29, was also visiting Rome on June 10th." Hence, we need to track that Sophia and James are different tourists, and form distinct tuples.

Approach. Our neurosymbolic approach first augments direct LLM prompting with two information extraction (IE) methods to isolate values in a structured format, and then guides the LLM to accurately group these values by tuples.

Triplet Generation. This step introduces an intermediate representation using triplets, a format commonly used in information extraction. Specifically, we consider two triplet formats:

• Symbolic triplets, in the form (subject, relation, object)—e.g., (Sophia, visiting, Rome), extracted symbolically using the Stanford CoreNLP toolkit (Manning et al., 2014).
• Schema-aligned triplets, in the form (table, column, value)—e.g., (Tour, Location, Rome), generated using prompt-based LLM extraction for the target schema (see Appendix G).

For instance, the earlier passage describing Sophia might yield schema-aligned triplets such as (Traveler, Name, Sophia), (Destination, City, Rome), and (Trip, Start Date, June 10th). We consider these two types of triplets because each captures complementary sets of values. Symbolic tools use deterministic methods to parse the text, and often extract values that LLMs may overlook (e.g., modifier words like premium). In contrast, LLM-generated schema-aligned triplets are more structurally consistent with the database schema (e.g., Location → Rome).
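This complementarity can be illustrated with a toy merge. The triplets below are hand-written for the running example (illustrative assumptions; real ones come from CoreNLP and LLM prompting, respectively):

```python
# Hand-written triplets for the running example (illustrative only).
symbolic = {("Sophia", "opted for", "premium package"),
            ("Sophia", "visiting", "Rome")}
schema_aligned = {("Traveler", "Name", "Sophia"),
                  ("Destination", "City", "Rome"),
                  ("Trip", "Start Date", "June 10th")}

def values(triplets):
    # In both formats, the extracted value is the last element.
    return {t[-1] for t in triplets}

# Neither source alone covers every value; their union improves coverage:
# the symbolic tool catches the modifier "premium package", while the
# schema-aligned triplets contribute "June 10th" mapped to a column.
combined = values(symbolic) | values(schema_aligned)
print(sorted(combined))
```

The union of the two value sets is what the later population stage draws on; redundant triplets (e.g., the two mentions of Rome) still need deduplication.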
To ensure comprehensive coverage, we additionally leverage part-of-speech (POS) tagging to identify all nouns, pronouns, and numerical tokens in the text, since these POS categories typically encompass most values. We then perform string matching to verify whether the extracted triplets include all such tokens. If any are missing, the LLM is prompted to augment the existing triplets by incorporating the missing POS tokens.

Triplet Deduplication. Both triplet generation methods often introduce redundancy. To reduce this, we use the "sentence-t5-base" model (Ni et al., 2021) to generate embeddings of the triplets and apply cosine similarity to identify near-duplicates. If a set of triplets has a pairwise cosine similarity above a tunable threshold (97\%), we retain only one representative triplet.

Triplet Grouping. To ensure that triplets are correctly grouped by entity instance, we apply two heuristics. First, we assume that the first table in the schema typically corresponds to the central entity (e.g., the tourist in a tourism booking system). Second, we leverage the structure of the input document, where each paragraph often describes a distinct instance of this central entity. Accordingly, we associate each paragraph with a unique identifier, which serves as the primary key for the first table. In particular, SQUiD uses an LLM to detect the number of distinct entity instances in the document and assign a unique identifier to each paragraph. Once assigned, each triplet is prefixed with its corresponding identifier. For example, the triplet (Sophia, visiting, Rome) would be prefixed with identifier 1 and (James, visiting, Rome) with identifier 2, since Sophia and James are distinct instances of the central entity. This structure ensures that all extracted values are correctly grouped by the entity instance they describe, and that the same identifier can be used to link rows across tables during the population stage.

# 3.3 Table Population

Challenge. This stage constructs tuples for each table using the values identified in the previous stage, presenting two challenges.
First, each value must be correctly aligned with its corresponding table column, meaning the LLM must output tuples in a schema-aligned format. However, extracting structured information in a single generation often results in malformed outputs—especially when the target format (e.g., JSON) is complex. Second, we must maintain referential integrity: references to the same entity instance must remain consistent across related tables. For example, a tuple in the Trip table may refer to a destination (e.g., Rome) and a traveler (e.g., Sophia), who also appears in the Traveler table. Here, the traveler ID used in the Trip table must match the primary key of the corresponding tuple in the Traveler table (Fig. 4).

Approach. Before delving into the details, we remind readers that SQUiD has three possible inputs for table population: (1) text alone, (2) text with symbolic triplets, and (3) text with schema-aligned triplets. Including all three in a single prompt increases context length and can degrade output quality. Instead, each source is used independently as input to the prompt, and the resulting tuples are later combined. This is akin to ensemble learning in ML (Polikar, 2012), allowing us to leverage the complementary strengths of each input.

We now describe the process of table population. To address the value-alignment challenge, we use a structured format that is incrementally generatable by the LLM. Instead of emitting the entire structure at once, the format supports iterative generation, which reduces formatting errors. We ensure referential integrity by incorporating carefully chosen guidelines in the prompt that are compatible with the above format. In particular, we leverage tool use in LLMs (Qu et al., 2025) by introducing a lightweight tool, extract, that outputs one structured record at a time according to a given schema. This approach helps the LLM remain consistent with the expected output format.
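A record-at-a-time format is easier for a model to emit than one large JSON object. The sketch below illustrates the idea; the line-oriented `extract(...)` syntax and the sample model output are our illustrative assumptions, not SQUiD's exact tool format:

```python
import re
from collections import defaultdict

# Hypothetical LLM output: one extract(...) record per line.
llm_output = """\
extract(Traveler, id=1, Name=Sophia, Age=34)
extract(Traveler, id=2, Name=James, Age=29)
extract(Trip, id=1, Traveler_ID=1, Destination=Rome)
"""

def parse_records(text):
    """Parse each record into a per-table list of column->value dicts."""
    tables = defaultdict(list)
    for m in re.finditer(r"extract\((\w+),\s*([^)]*)\)", text):
        table, body = m.group(1), m.group(2)
        row = dict(kv.split("=", 1)
                   for kv in (part.strip() for part in body.split(",")))
        tables[table].append(row)
    return tables

records = parse_records(llm_output)
print(records["Traveler"][0]["Name"])  # Sophia
```

Because each record is generated and parsed independently, a single malformed record does not corrupt the rest of the output.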
After generating the records, we parse the output to extract each column-value pair for every tuple.

# 3.4 Database Materialization

Challenge. A naïve approach is to prompt LLMs with all prior schema and value information to generate the corresponding SQL INSERT statements directly. However, this method is both inefficient and error-prone. We observe that this is akin to a "program synthesis" task—it not only requires the production of a large number of redundant tokens, which can be costly; but is also brittle to slight mistakes (e.g., a slightly-malformed SQL statement will produce execution errors).

Approach. Instead, we observe that the required SQL statements are well-defined—creating specific tables and then inserting the corresponding tuples to these tables. Therefore, we decouple the materialization step from the LLM by parsing the model's output from the previous stage to programmatically construct executable SQL code. Specifically, we generate CREATE TABLE and INSERT INTO statements (as shown in Fig. 2) which are executed on a local SQLite instance to instantiate the database. This separation enables deterministic parsing, ensuring syntactically correct SQL statements.

# 4 Evaluation Setup

Dataset. The Text2R task requires a text document paired with a ground-truth relational database—however, no existing benchmarks directly support this. To fill this gap, we introduce an automated dataset creation pipeline: starting from relational databases or CSV files (using column names and tuple values as ground truth), we prompt an LLM to generate textual descriptions of the tuples, which serve as the input for Text2R. Using this approach, we construct two datasets: (1) BIRD Dataset—covering six domains from the BIRD Text2SQL benchmark (Li et al., 2024); and (2) Kaggle Dataset—containing CSV files from three domains (tourism, education, finance) (Kiattisak, 2023; Becker and Kohavi, 1996; Rai, 2023), which reflect more user-centric, realistic data often missing in BIRD. Table 1 summarizes the dataset statistics. We categorize the text difficulty as easy (e.g., Tourism, Finance), medium (e.g., Education, California Schools), or hard (e.g., Mental Health, Superheroes), based on domain complexity, record sparsity, and LLM-induced verbosity.

Table 1: Dataset statistics

Models. We test five state-of-the-art models: GPT-4O (OpenAI, 2024), DEEPSEEK-V2.5 (DeepSeek AI, 2024), CLAUDE 3.7 SONNET (Anthropic, 2024), LLAMA-3-8B-INSTRUCT (Meta AI, 2024), and QWEN3-8B (Alibaba, 2024).

Metrics. We propose a suite of novel metrics for a principled evaluation of the Text2R task, which are summarized in Table 2.

Schema Evaluation. We evaluate the quality of generated database schemas along three dimensions: entity coverage, primary key coverage, and foreign key coverage. Entity coverage assesses whether each column from the ground truth is represented in the generated schema. A column is considered covered if there exists a semantically equivalent column (based on cosine similarity between column names) in the output. Primary key coverage checks whether each generated table defines at least one primary key, while foreign key coverage evaluates whether all foreign keys correctly reference primary keys in valid, related tables within the schema. The last two metrics assess syntactic constraints that are essential for the correctness of relational database schemas.

Tuple Evaluation. Relational databases store data across multiple tables; therefore, evaluating the quality of such databases requires a holistic view that goes beyond individual tables or isolated values. To enable a principled evaluation, we flatten the schema into a single table—commonly referred to as a denormalized table (Elmasri and Navathe, 2016)—by performing a JOIN across all tables. In our databases, each table maintains a many-to-one or one-to-one relationship with a central table, enabling this complete JOIN of the entire schema.
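The resulting comparison of ground-truth and generated denormalized tables can be sketched with simple set operations. The rows and the simplified metric definitions below are illustrative assumptions for the running example:

```python
# Toy denormalized (flattened) tables: rows of (name, age, destination).
# Real tables are produced by JOINing all tables of each database.
gt_rows  = [("Sophia", 34, "Rome"), ("James", 29, "Rome")]
gen_rows = [("Sophia", 34, "Rome"), ("James", None, "Rome")]  # one missing value

# Tuple coverage: fraction of ground-truth rows recovered exactly.
tuple_coverage = sum(row in gen_rows for row in gt_rows) / len(gt_rows)

# Value coverage: fraction of ground-truth values that appear anywhere
# in the generated table (a simplified reading of the metric).
gt_values  = {v for row in gt_rows for v in row if v is not None}
gen_values = {v for row in gen_rows for v in row if v is not None}
value_coverage = len(gt_values & gen_values) / len(gt_values)

print(tuple_coverage, value_coverage)  # 0.5 0.8
```

Here the single missing age fails one whole tuple but only one of five values, showing why both row-level and value-level metrics are reported.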
This consolidated table captures complete entity-relationship instances in a unified format. We generate two denormalized tables: one from the ground-truth database and one from the database produced by SQUiD. The two are then compared to assess the accuracy of the generated database. We propose five novel metrics to evaluate the quality of the generated tuples along two dimensions: syntactic and semantic validity.

Syntactic validity assesses whether the generated databases adhere to correct structural and relational rules. It is measured using: (1) Database Construction Success Rate, which measures the percentage of generated SQL statements that successfully materialize into databases with at least one non-null tuple, and (2) Referential Integrity Rate (RRIR), which measures the fraction of foreign-key joins that yield valid (non-null) tuples.

Table 2: Novel evaluation metrics for Text2R: GT denotes ground truth and DB denotes the generated databases.

Semantic validity evaluates the comprehensiveness and correctness of the values populated. It is measured using: (1) Tuple Coverage, which measures the fraction of the ground-truth tuples recovered; (2) Value Coverage, which measures the fraction of ground-truth values populated; and (3) Column Consistency, which checks whether each value appears in its correct column.

Baseline. Our Text2R task is novel, and prior work targets fundamentally different objectives (see Sec. 2), making direct comparison infeasible. To address this, we design a tailored baseline: using zero-shot prompting, we generate CREATE TABLE and INSERT INTO SQL statements directly from the input text, then execute them in SQLite to instantiate the database. Prompt details are in Appendix G.

# 5 Experiments and Analysis

We evaluate the performance of SQUiD based on the following three research questions (RQs): • RQ1. Can SQUiD generate a high-quality relational schema? • RQ2. Can SQUiD generate accurate relational tuples to populate the tables? • RQ3.
How do SQUiD's design choices affect performance?

# 5.1 RQ1: Schema Evaluation

As described in Sec. 3.1, we evaluate two prompting strategies for schema generation: Direct and Chain-of-Thought (CoT). Table 3 summarizes the results. We only consider schemas that match the format specified in the prompt, as this is required for SQUiD to process them later. We evaluate both syntactic validity—using primary key coverage (PKC) and foreign key coverage (FKC)—and semantic validity, using entity coverage (ECS). We first highlight general observations across all three metrics, followed by specific analysis. Overall, CoT consistently outperforms Direct across difficulty levels, except for CLAUDE, which performs better with Direct but struggles with CoT due to format violations, likely due to overthinking (Liu et al., 2024b). QWEN-8B consistently fails to produce valid schemas, likely due to poor support for structured output tasks (Liu et al., 2024c).

Syntactic Validity. We observe that most CoT-based generations achieve full PKC and FKC, except GPT, which drops to $66.67\%$ FKC in the medium dataset. This is because GPT occasionally generates a single table with no foreign key when the text contains only a few entities.

Semantic Validity. For entity coverage (ECS), DEEPSEEK with CoT performs the best, followed by LLAMA-8B and GPT, which show minor drops due to their tendency to generate paraphrased column names (e.g., "heritage" or "ethnicity" instead of "race"), whereas DEEPSEEK aligns more closely with the ground truth. In terms of performance across domains (Appendix D), DEEPSEEK achieves the highest entity coverage in the Education domain ($91.08\%$) and the lowest in the Mental Health domain ($38.97\%$). The ground truth of the latter has complex column names, such as "questiontext" and "answertext", suggesting that domain complexity significantly affects the quality of the generated schema.

# 5.2 RQ2: Tuple Evaluation

Syntactic Validity.
Table 4 reports the Database Construction Success Rate (DBR) and the improvement in Referential Integrity Rate (RRIR) over the baseline. We highlight three observations. First, SQUiD achieves perfect DBR ($100\%$) across all models and difficulty levels, except for using DEEPSEEK on hard examples, where it drops slightly to $98\%$. This indicates the robustness of SQUiD in consistently generating syntactically valid databases. In contrast, the baseline DBR varies widely—from as low as $9.7\%$ (GPT) to $58.2\%$ (CLAUDE) on average. Next, we turn to referential integrity. We note that SQUiD's RRIR is a conservative (lower-bound) estimate, since records with missing values in the ground truth are treated as invalid under our metric. Nevertheless, SQUiD still achieves significant improvements over the baseline. For example, GPT exhibits the highest improvement ($46.59\times$ on easy examples). QWEN-8B also achieves notable average improvements of $3.52\times$. Although LLAMA-8B achieves perfect DBR, its RRIR does not improve on the medium dataset, suggesting its baseline already exhibits relatively strong referential integrity.

Table 3: Schema evaluation: Entity (ECS), Primary Key (PKC) and Foreign Key (FKC) coverage scores. "–": schema generation failures that violate the requested structure in our prompts. CLAUDE-CoT and QWEN-8B are omitted due to such failures.

Table 4: Database Construction Success Rate ($\%$) and the improvement factor in Referential Integrity Rate in SQUiD compared to the baseline.

Semantic Validity. Table 5 reports Tuple Coverage (TC), Value Coverage (VC), and Column Consistency (CC), with three findings. First, SQUiD consistently outperforms the baseline across all models and metrics. Notably, all 8B-parameter models (LLAMA-8B, QWEN-8B) under SQUiD significantly outperform all larger model baselines (GPT, CLAUDE, DEEPSEEK). In particular, although QWEN-8B's baseline lags behind those of CLAUDE and DEEPSEEK, its performance under SQUiD surpasses them—highlighting the effectiveness of our approach. Second, on average, all models using SQUiD achieve high TC ($\geq 0.95$) and strong VC/CC ($\geq 0.70$), with GPT showing the largest improvement over its baseline ($17.75\times$ improvement on CC). This is primarily because failed database generations are assigned zero scores, and as shown in Table 4, GPT performs poorly in database construction under the baseline setting. Third, even for models with relatively strong baseline performance, such as LLAMA-8B, SQUiD improves VC and CC by $4.1\times$ and $5.5\times$ on hard examples, respectively.

Table 5: Tuple evaluation via Tuple Coverage (TC), Value Coverage (VC) and Column Consistency (CC). Best scores and improvement factors across models in bold. Gray indicates that SQUiD on all 8B models outperforms larger models.

# 5.3 RQ3: Impact of SQUiD's Design Choices

We now evaluate the impact of SQUiD's design choices on value identification and table population. Recall that we consider three different prompts for table population based on their input source: (1) text only ($\mathbb{T}$), (2) text with symbolic triplets ($\mathbb{S}$), and (3) text with schema-aligned triplets ($\mathbb{L}$). SQUiD combines the rows generated from all three prompts. Table 6 evaluates how these different value sources affect the quality of the generated tuples, with the following observations.

Table 6: Impact of different value sources. The first three columns represent individual prompt settings, while the last three correspond to post-generation ensembling. $\mathbb{T} \oplus \mathbb{S}$ combines tuples generated from $\mathbb{T}$ and $\mathbb{S}$, while $\mathbb{T} \oplus \mathbb{L}$ combines $\mathbb{T}$ and $\mathbb{L}$. SQUiD combines outputs from all three prompts.

First, using triplets significantly improves value coverage compared to extracting them from the text alone.
This is evident from the observation that SQUiD outperforms $\mathbb{T}$ by 5–12\%. Second, we examine how to best incorporate the triplets: whether to concatenate them with the input text in a single prompt, or to generate tuples separately and combine them post-hoc (ensembling). SQUiD adopts the latter strategy, and our results support this choice. Specifically, in the individual prompt setting, $\mathbb{T}$ outperforms both $\mathbb{S}$ and $\mathbb{L}$ in all but one case (CLAUDE). In contrast, the ensemble approaches ($\mathbb{T} \oplus \mathbb{S}$, $\mathbb{T} \oplus \mathbb{L}$, and SQUiD) consistently outperform all the individual prompts. This suggests that including triplets directly in the input prompt increases context length, which degrades model performance—likely due to context window saturation (Liu et al., 2024a). Finally, we evaluate our design choice of combining triplets generated from symbolic tools and schema-aligned triplets from LLMs. Overall, $\mathbb{T} \oplus \mathbb{L}$ outperforms $\mathbb{T} \oplus \mathbb{S}$ across most models on average, except for CLAUDE and DEEPSEEK. SQUiD consistently yields the best score, indicating that each source captures complementary information. LLM-generated triplets are schema-aware and can correctly group multi-word values under the correct columns (e.g., mapping "car rental" to the transportation mode column, whereas symbolic tools only captured "car"). However, LLMs sometimes paraphrase values (e.g., "low income" to "modest income"), whereas symbolic tools extract values verbatim, yielding closer alignment to the input.

# 6 Related Work

Summarizing Structures. Text-to-table generation projects (Wu et al., 2022; Sundar et al., 2024; Li et al., 2023; Deng et al., 2024; Arora et al., 2023; Jain et al., 2024) explore sequence-to-sequence modeling, LLM prompt engineering, and structured summarization techniques. However, they can only generate flat tables, and cannot capture the relational database model in our work.
Manipulating Existing Databases. The goal of these projects is to leverage LLMs to interact with existing relational databases—such as to generate SQL queries from text (Hong et al., 2024; Pang et al., 2020), or to update them using natural language (Jiao et al., 2024). However, none of these works can synthesize a relational database from scratch, which is what SQUiD tackles.

Non-LLM Approaches. Prior to LLMs, integrating text into relational structures relied on traditional pipelines that combine information extraction, schema induction, and entity linking (Zhang et al., 2016; Smith et al., 2022b; Zhang et al., 2019). These methods relied on statistical or symbolic techniques, but required domain-specific heuristics and did not generalize to noisy or diverse input text.
Relational databases are central to modern data management, yet most data exists in unstructured forms like text documents. To bridge this gap, we leverage large language models (LLMs) to automatically synthesize a relational database by generating its schema and populating its tables from raw text. We introduce SQUiD, a novel neurosymbolic framework that decomposes this task into four stages, each with specialized techniques. Our experiments show that SQUiD consistently outperforms baselines across diverse datasets.
[ "cs.DB", "cs.CL" ]
# 1 Introduction

Low-Rank Adaptation (LoRA) [20] is a widely used parameter-efficient finetuning technique for large-scale pretrained models which enables finetuning billion-scale Large Language Models (LLMs) on a single consumer-grade GPU. This has made it the go-to method for finetuning LLMs in settings with limited computational resources. As a consequence, many new improvements have been proposed in various directions, such as parameter efficiency [17, 12, 54, 22], performance under quantization [10, 50, 26], and rank adaptation [55, 12, 43, 27]. Recent works have also explored an alternative based on Bayesian methods to improve calibration. While this works well to some extent, there is a lot of room for improvement. For example, Laplace-LoRA [51] estimates a posterior that can be used for prediction by using Laplace’s method on the LoRA parameters, but this requires additional post-hoc changes, including calculating a Kronecker-factored Hessian and model linearization. Despite the increased overhead, only marginal improvements in accuracy are obtained. Another method called BLoB [48] instead estimates the covariance during training using Bayes by Backprop [4]. While it can improve generalization and calibration when using just the mean, the performance degrades when using posterior sampling. BLoB also requires additional implementation tricks. For instance, uncertainty is only considered in one of the two LoRA blocks and flipout is used to introduce randomness [49]. Furthermore, BLoB increases the computation cost compared to standard non-Bayesian LoRA finetuning. Ideally, we would like a simpler alternative that can bring more benefits with less overhead.

Figure 1: Compared to AdamW, finetuning LoRA with IVON gives higher accuracy at similar speed and better-calibrated output probabilities, illustrated on a multiple-choice breakfast prompt.
In this paper, we show that a recently proposed natural-gradient variational learning algorithm called IVON [41] is a better alternative to improve LoRA with Bayesian principles. IVON can simply be used to replace conventional optimizers like AdamW [29] as it shares a nearly identical implementation which makes it fast and easy to use. While AdamW only provides a point estimate, IVON also estimates a diagonal Gaussian posterior over parameters during training. This posterior also allows us to add cheap post-training pruning which drastically improves generalization and calibration. These two aspects together comprise our method, which we call IVON-LoRA. IVON-LoRA is easy to implement and achieves significant improvements for finetuning LLMs across various tasks and datasets. For example, on a set of commonsense reasoning tasks IVON-LoRA improves accuracy by $1.3\%$ and reduces calibration error by $5.4\%$ when compared to AdamW for finetuning Llama-3.2-3B, while outperforming other Bayesian-LoRA methods in terms of accuracy with comparable calibration. Accuracy and calibration can also be traded off when sampling from the learned posterior with varying temperature. Finally, we use the learned posterior for test-time compute scaling and improve math word problem solving accuracy on GSM8k with Qwen-2.5-3B. Overall, we provide an easy-to-use improvement of LoRA using variational learning.

# 2 Efficient Finetuning with Low-Rank Adaptation

The large sizes of current LLMs make it hard for many practitioners to finetune the full model due to resource constraints. To overcome this, many parameter-efficient finetuning approaches have been proposed.
These methods usually either only train parts of the network, for example, the bias term [2], or insert a small number of new parameters, for example, as prefixes to input token sequences [25] or at arbitrary positions in a model [19, 35, 36]. However, these methods either exhibit poor performance or incur inference-time overhead [39]. To tackle this, Low-Rank Adaptation (LoRA) [20] inserts new parameters as a low-rank decomposition of the update applied to the original parameters during finetuning. These new parameters can then be merged with the original parameters to not increase inference overhead. Formally, given a weight matrix $\mathbf{W}_0 \in \mathbb{R}^{d \times k}$ of a pretrained model, LoRA introduces a low-rank decomposition $$ \Delta \mathbf{W}_0 = \mathbf{B}\mathbf{A}, $$ where $\mathbf{A} \in \mathbb{R}^{r \times k}$, $\mathbf{B} \in \mathbb{R}^{d \times r}$, and $r \ll \operatorname*{min}(d, k)$. The new weight matrix $\mathbf{W}$ is then: $$ \mathbf{W} = \mathbf{W}_0 + \Delta\mathbf{W}_0 = \mathbf{W}_0 + \mathbf{B}\mathbf{A}. $$ The low-rank matrices $\mathbf{A}$ and $\mathbf{B}$ are initialized such that $\Delta\mathbf{W}_0 = \mathbf{B}\mathbf{A} = 0$ to preserve the original $\mathbf{W}_0$ at the beginning of training. During training, only $\mathbf{A}$ and $\mathbf{B}$ are optimized, while $\mathbf{W}_0$ remains frozen. This drastically reduces the number of trainable parameters from $O(dk)$ to $O(r(d+k))$. For linear layers, the forward pass with LoRA can be computed as: $$ \pmb{h} = \mathbf{W}_0\pmb{x} + \Delta\mathbf{W}_0\pmb{x} = \mathbf{W}_0\pmb{x} + \mathbf{B}\mathbf{A}\pmb{x}, $$ where $\pmb{x} \in \mathbb{R}^k$ is the input and $\pmb{h} \in \mathbb{R}^d$ the output.
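The shapes and initialization described above can be checked with a minimal numerical sketch (dimensions and initialization scales here are illustrative, not the paper's actual configuration):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r = 8, 6, 2                        # output dim, input dim, LoRA rank (r << min(d, k))

W0 = rng.standard_normal((d, k))         # frozen pretrained weight
A = 0.01 * rng.standard_normal((r, k))   # trainable, small random init
B = np.zeros((d, r))                     # trainable, zero init => BA = 0 at the start

x = rng.standard_normal(k)

# Forward pass: h = W0 x + B A x
h = W0 @ x + B @ (A @ x)

# At initialization the adapter is inactive, so h equals the pretrained output
assert np.allclose(h, W0 @ x)

# Trainable parameters: r*(d + k) instead of d*k
assert A.size + B.size == r * (d + k)
```

Merging the adapter back amounts to forming `W0 + B @ A` once, which is why inference cost is unchanged after finetuning.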
For Transformers [45], which are our main focus, LoRA is typically applied to the query and value matrices of the attention layers. Despite its computational efficiency, LoRA can lead to overconfident models [47] and lag behind full finetuning in terms of accuracy [3]. To tackle this, Bayesian variants of LoRA have recently been proposed. For example, Laplace-LoRA [51] uses Laplace’s method [30] to learn a posterior distribution over LoRA parameters by estimating the curvature around a point estimate $\mathbf{m}$ trained with conventional learning algorithms like AdamW. That is, $\mathbf{m}$ contains all $\mathbf{A}$ and $\mathbf{B}$ matrices that are learned. This results in a Gaussian posterior $q(\pmb{\theta}) = \mathcal{N}(\pmb{\theta} \mid \mathbf{m}, \pmb{\Sigma})$, where $\pmb{\Sigma}$ is the inverse of a Kronecker-factored (KFAC) approximation [32, 38] of the Fisher information matrix. There are several problems with Laplace-LoRA though. First, computing $\pmb{\Sigma}$ requires an additional pass through the training data which is not always available after training. An extra pass through the data also adds computational overhead. Second, it requires a KFAC approximation of the Fisher information matrix which is another overhead. Finally, prediction is done using a linearized model [21], but this requires the Jacobian $\nabla_{\pmb{\theta}} f_{\pmb{\theta}}(\mathbf{x})$ for the neural network outputs $f_{\pmb{\theta}}(\mathbf{x})$. This can be prohibitive for LLMs in a standard setting for next-token prediction due to requiring storage of $\mathcal{O}(d\vert\mathcal{V}\vert)$ for an output vocabulary $\mathcal{V}$ and number of parameters $d$. A recently proposed method, BLoB [48], circumvents some of these issues by directly learning the mean $\mathbf{m}$ and diagonal covariance $\pmb{\Sigma}$ during training with Bayes by Backprop [4]. However, this requires multiple implementation changes to the standard LoRA.
For example, only $\mathbf{A}$ is treated probabilistically and it is unclear how to prune $\mathbf{B}$ based on the probabilistic information. Moreover, a new variant of flipout [49] is introduced. Altogether, this results in nontrivial implementation changes. In addition, both BLoB and Laplace-LoRA do not give large gains in accuracy over non-Bayesian LoRA finetuning and there is a trade-off between accuracy and calibration. In Sec. 3 and 4, we propose a variational learning method that improves both accuracy and calibration of LoRA with minimal implementation changes.

# 3 Variational Low-Rank Adaptation

Here we introduce our approach which we call IVON-LoRA. The idea is straightforward: We replace the commonly used AdamW optimizer with IVON [41] which optimizes a variational-Bayesian objective. More formally, let us denote the AdamW objective by $\ell(\pmb{\theta})$ where $\pmb{\theta}$ is the vector containing all entries of LoRA’s low-rank parameters. IVON-LoRA instead minimizes a Bayesian objective where an expectation of $\ell(\pmb{\theta})$ over a posterior distribution $q(\pmb{\theta})$ is used (shown on the right), $$ \operatorname*{min}_{\pmb{\theta}} \ \ell(\pmb{\theta}) \quad \text{vs.} \quad \operatorname*{min}_{q(\pmb{\theta})} \ \mathbb{E}_{q(\pmb{\theta})}\left[\ell(\pmb{\theta})\right] + \lambda^{-1}\,\mathbb{D}_{\mathrm{KL}}\big[q(\pmb{\theta}) \,\big\|\, p(\pmb{\theta})\big]. $$ IVON uses a diagonal Gaussian $q(\pmb{\theta}) = \mathcal{N}(\mathbf{m}, \operatorname{diag}(\mathbf{v}))$ with a zero-mean isotropic Gaussian prior $p(\pmb{\theta})$ with a scalar variance.
The mean $\mathbf{m}$ plays a similar role to $\pmb{\theta}$ obtained by AdamW while the posterior variance $\mathbf{v}$ captures additional information about the uncertainty over $\mathbf{m}$ and enables sampling models from $q(\pmb{\theta})$. The main advantage of our approach is that it only requires a few lines of training code to be changed, thanks to the nearly identical implementation of IVON and AdamW. The key point is that estimation of $\mathbf{v}$ is done automatically through the scale vector $\mathbf{h}$ that adapts the learning rate. Specifically, we set the variance as $\mathbf{v} = 1/(\lambda(\mathbf{h} + \delta))$ where $\delta$ is the weight decay and $\mathbf{h} \approx \operatorname{diag}\big(\nabla^2 \ell(\pmb{\theta})\big)$ is an online estimate of the diagonal Hessian. Therefore, $\mathbf{v}$ can be obtained for free by estimating gradients at a perturbed $\pmb{\theta} \sim \mathcal{N}(\mathbf{m}, \operatorname{diag}(\mathbf{v}))$ to estimate the expectation in Eq. 1 (RHS) and using the reparametrization trick to get $\mathbf{h}$. In practice, even using one Monte-Carlo sample $\pmb{\theta} \sim \mathcal{N}(\mathbf{m}, \operatorname{diag}(\mathbf{v}))$ performs well and incurs almost no overhead (see Sec. 5.4 for a benchmark), but more samples can be taken to improve performance. Similar perturbed training has also been shown to be useful for finetuning LLMs [28, 57, 24]. The hyperparameter $\lambda$ can be seen as an effective training data size, where $\lambda = N$ targets a generalized posterior for $N$ data points [53]. However, $\lambda$ can also be adapted: for example, $\lambda > N$ targets a “colder” posterior [56, 15] which can stabilize training [41]. Conversely, decreasing $\lambda$ during inference can promote more diverse LLM outputs, which can be helpful in scenarios such as combining the outputs of multiple models sampled from IVON during generation [9]. For further details, we refer to Shen et al. [41].
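The variance and sampling steps described above can be sketched as follows; the values of $\lambda$, $\delta$, and the Hessian estimate $\mathbf{h}$ are made up for illustration, and this is not IVON's full update rule:

```python
import numpy as np

rng = np.random.default_rng(0)

lam, delta = 1000.0, 1e-3                 # effective data size and weight decay (illustrative)
m = 0.1 * rng.standard_normal(10)         # posterior mean over the LoRA parameters
h = np.abs(rng.standard_normal(10))       # stand-in for IVON's online diagonal-Hessian estimate

# Posterior variance comes for free from the optimizer state: v = 1 / (lambda * (h + delta))
v = 1.0 / (lam * (h + delta))

# One Monte-Carlo sample via the reparametrization trick: theta = m + sqrt(v) * eps
eps = rng.standard_normal(10)
theta = m + np.sqrt(v) * eps

# Higher curvature (larger h) means smaller variance, so samples stay closer to m there
assert np.all(v > 0)
```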
Overall, IVON is an easy-to-use alternative to existing Bayesian approaches that require additional overheads due to post-processing, additional passes through the data, and cumbersome implementation changes.

Table 1: Comparison of methods applied to finetuning/finetuned Llama-3.2-3B model across commonsense reasoning datasets, with subscripts indicating standard error of the mean across 5 runs. We show the relative metric improvements achieved over AdamW in blue.

# 4 Uncertainty-Guided Pruning (UGP)

To further improve performance, we propose a novel pruning method for IVON-LoRA. Parameter pruning is usually motivated by computational efficiency, because it can reduce the size of neural networks by setting a subset of network parameters to zero. This can be done either before, during, or after training. Intuitively, we might want to set parameters to zero which do not influence the model. Previously, the posterior variance has been shown to be useful for identifying such unimportant parameters [14, 4, 11]. We use it here for an uncertainty-guided pruning of LoRA parameters. In IVON, high parameter uncertainty is encouraged by using the relative entropy term in the variational objective in Eq. 1. This should aid in the discovery of prunable LoRA parameters because intuitively such parameters should be the ones that have high uncertainty. In fact, Hessians have long been used for optimal pruning [23] and the $\mathbf{h}$ vector in IVON can be seen as an online estimate of the diagonal Hessian (hence the name Online Newton). Recently, IVON’s posterior variance has also been used for budget allocation in AdaLoRA [55] according to the signal-to-noise ratio $|\theta_i|/\sqrt{v_i}$ [5], where $\theta_i$ and $v_i$ are the $i$-th entry of $\mathbf{m}$ and $\mathbf{v}$, respectively. We take a similar approach and prune parameters $\theta_i$ with the largest posterior variance $v_i$.
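The pruning rule just described can be sketched as a simplified stand-alone function; in IVON-LoRA the mean and variance would come from the optimizer state of each adapter matrix, whereas here they are random placeholders:

```python
import numpy as np

def uncertainty_guided_prune(m, v, ratio=0.10):
    """Zero out the fraction `ratio` of parameters with the largest posterior variance."""
    pruned = m.copy()
    k = int(round(ratio * pruned.size))
    if k > 0:
        idx = np.argsort(v, axis=None)[-k:]   # indices of the highest-uncertainty entries
        pruned.flat[idx] = 0.0
    return pruned

rng = np.random.default_rng(0)
m = rng.standard_normal(20)   # posterior mean (illustrative)
v = rng.random(20)            # posterior variance (illustrative)

pruned = uncertainty_guided_prune(m, v, ratio=0.10)
assert np.count_nonzero(pruned == 0.0) == 2   # 10% of 20 entries are set to zero
```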
This prunes parameters with the highest parameter uncertainty. In practice, we find that pruning a small fraction of parameters with the highest variance in each weight matrix leads to visible improvements in calibration, while maintaining or even improving accuracy. Hence, in our experiments, we set the pruning ratio to $10\%$ and apply it to the LoRA adapters after training by default.

# 5 Experiments

In this section, we evaluate IVON-LoRA on various tasks and datasets. In Sec. 5.1, we assess its performance on reasoning and language understanding datasets. In Sec. 5.2, we conduct an ablation study on the choice of $\lambda$ at test time, which can bring extra improvements to our method. In Sec. 5.3, we investigate the effectiveness of the Uncertainty-guided Pruning (UGP) method. Finally, in Sec. 5.4, we evaluate the training speed and computational overhead of IVON-LoRA.

# 5.1 IVON-LoRA Improves Accuracy, Generalization and Calibration

# 5.1.1 Results on Commonsense Reasoning Datasets

Following the settings in Yang et al. [51] and Wang et al. [48], we begin with evaluating the performance of our method on commonsense reasoning tasks. We use IVON-LoRA to finetune Llama-3.2-3B [13] on six commonsense reasoning datasets, including WinoGrande-S (WG-S), WinoGrande-M (WG-M) [40], ARC-Challenge (ARC-C), ARC-Easy (ARC-E) [7], OpenBookQA (OBQA) [33], and BoolQ [6]. We then measure accuracy and Expected Calibration Error (ECE) on the validation splits and use Negative Log-Likelihood (NLL) as an additional metric for calibration since ECE may be unreliable when annotators disagree [1]. We compare our results to baselines including non-Bayesian LoRA finetuning with AdamW, Laplace-LoRA [51] and BLoB [48]. For BLoB and IVON-LoRA, we report the results acquired by either using the mean of the posterior $\mathbf{m}$ (indicated by the suffix “$@$mean”) or by using an averaged prediction over 10 samples from the posterior. Please refer to App. A.2 for details on the experimental setup.
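For reference, the ECE metric reported in these experiments can be sketched with equal-width confidence bins (one common variant; binning details vary across implementations):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin predictions by confidence, then average |accuracy - confidence|
    over bins, weighted by the fraction of samples in each bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece

# A model that is 90% confident and 90% accurate is well calibrated (ECE ~ 0);
# the same confidence with 20% accuracy gives a large ECE.
assert expected_calibration_error([0.9] * 10, [1] * 9 + [0]) < 1e-9
assert expected_calibration_error([0.9] * 5, [1, 0, 0, 0, 0]) > 0.5
```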
Table 2: Comparison of methods applied to finetuning/finetuned Llama-3.2-3B model across commonsense reasoning datasets. Different from Table 1, methods use a Bayesian approach at test time (model linearization for LA, posterior sampling for BLoB and IVON-LoRA). Subscripts indicate standard error of the mean across 5 runs. We show the relative metric changes over AdamW in parentheses, with improvements in blue and degradation in red.

Table 3: IVON-LoRA can improve the performance of LLMs for reasoning, here on math word problem solving and code generation with uncertainty-aware MBR with 32 outputs.

Results are shown in Tables 1 and 2. As shown in Table 1, IVON-LoRA$@$mean outperforms baseline methods on most datasets in terms of accuracy, often by a large margin. IVON-LoRA$@$mean on average improves accuracy by $1.3\%$ over AdamW and $0.7\%$ over BLoB$@$mean. Our method also exhibits significantly improved calibration compared to the AdamW baseline. Notably, these improvements are achieved with no test-time overhead at all, since only the mean of the posterior is used for prediction. Next, as shown in Table 2, IVON-LoRA performs significantly better than baseline methods in a more Bayesian setting. IVON-LoRA with posterior sampling still maintains a $1.2\%$ accuracy gain over AdamW, while both BLoB (with posterior sampling) and Laplace-LoRA degrade accuracy. With posterior sampling, IVON-LoRA further reduces average ECE from 13.3 to 10.1 and NLL from 0.80 to 0.65. Notably, this is achieved without a significant drop in accuracy (as in BLoB) and without requiring a more representative KFAC Hessian or an additional pass through the data to compute that Hessian (as in Laplace-LoRA).

# 5.1.2 IVON-LoRA for LLM Reasoning

Next, we evaluate IVON-LoRA on LLM reasoning. We train Qwen-2.5-3B [42] on the GSM8k benchmark for math word problem solving [8] and the Conala benchmark for Python code generation [52].
We compare IVON-LoRA evaluated at the learned mean and using the posterior against training with LoRA and AdamW. For using the posterior we use sequence-level uncertainty-aware Minimum Bayes Risk (MBR) decoding [9, Eq. 9]. That is, we first sample multiple outputs for each model sampled from the IVON-LoRA posterior. Then we use a utility function to compare each pair of outputs in the resulting n-best list of size $n$. For GSM8k we use a 0-1-loss and perform majority voting on the final numerical solution. For Conala we use CodeBertScore [58]. For IVON-LoRA$@$mean and AdamW we sample 32 outputs. When using the posterior, we sample 4 models and then 8 outputs per model to match compute budget. For details on the experimental setup, please refer to App. A.3.

Figure 2: Improvements obtained with IVON-LoRA on GSM8k increase with the $n$-best size. For smaller $n$, IVON-LoRA$@$mean can be most efficient (a). Furthermore, we show that pruning is essential for this, because high-uncertainty parameters are not included when sampling models (b).

Results are shown in Table 3. We find that both IVON-LoRA$@$mean and IVON-LoRA at the posterior provide strong improvements over AdamW for GSM8k. For Conala, we find improvements especially in syntax and data flow matching when compared to a reference solution, as well as comparable CodeBertScore and CodeBLEU [37].

# 5.1.3 IVON-LoRA Enables Test-Time Compute Scaling

Next, we repeat the experiment on GSM8k but scale the size of the n-best list up to 512 output samples in total. For IVON-LoRA$@$mean and AdamW we just sample the outputs from one model. For IVON-LoRA with posterior we sample 8 outputs each from ($n$-best-list-size/8) models, i.e., for $n = 512$ we use 64 model samples from the posterior and sample 8 outputs from each model. Fig. 2a shows that IVON-LoRA$@$mean outperforms AdamW especially for smaller $n$.
As $n$ and the number of models grow, the benefit of the posterior becomes more and more apparent, with an accuracy improvement of $3.7\%$ for $n = 512$. In Fig. 2b we further show that pruning is essential for this, with large improvements for both IVON-LoRA$@$mean and IVON-LoRA when it is used. Intuitively, this might be because sampling high-uncertainty parameters can lead to destructive behavior in the model. Altogether, IVON-LoRA provides a strong and easy-to-use method for test-time compute scaling.

# 5.1.4 Results for Out-of-Distribution Settings

Here, we evaluate the performance of IVON-LoRA under out-of-distribution settings. Following Yang et al. [51] and Wang et al. [48], we evaluate the performance of LoRA adapters trained on OBQA, as in Sec. 5.1.1, on datasets with different levels of distribution shifts. Specifically, we use ARC-E and ARC-C for smaller distribution shifts and the college-level chemistry, physics, biology, computer science, and math subsets from the MMLU benchmark [18] for larger ones. Results are shown in Table 4. IVON-LoRA can also improve accuracy under mild distribution shifts (indicated by the $1.8\%$ improvement on ARC-C and $1.7\%$ on ARC-E) and still achieves good calibration. Under severe distribution shifts IVON-LoRA still maintains similarly good calibration as Laplace-LoRA and BLoB, while standard LoRA finetuning with AdamW exhibits significant overconfidence.

Table 4: Comparison of different methods on in- and out-of-distribution scenarios. We use the LoRA adapters trained on OBQA and evaluate their performance on different levels of distribution shift. We find that IVON-LoRA can cope well with such distribution shifts and maintains better or similar accuracy as well as better calibration than LoRA with AdamW.

Table 5: Performance comparison on the test sets of the GLUE benchmark using DeBERTa-v3-base as the base model. Results are averaged over 5 runs with different random seeds.
# 5.1.5 Results on GLUE

In this section, we evaluate IVON-LoRA on the GLUE benchmark [46]. We use DeBERTa-v3-base [16] and compare our method to full-parameter finetuning and LoRA with AdamW. We only evaluate the performance of IVON-LoRA at the mean of the posterior and do not employ UGP. Please refer to App. A.4 for details on the experimental setup. We present the results in Table 5. Similar to the results in Hu et al. [20], we observe that LoRA achieves a higher average score than full-parameter finetuning. Notably, IVON-LoRA outperforms vanilla LoRA by 0.5 on average.

# 5.2 Adjusting Temperature Can Improve Performance

As described in Sec. 3, the $\lambda$ parameter in IVON-LoRA’s posterior variance can be adjusted during test time to improve performance. In particular, we find that using a moderately larger $\lambda_{\mathrm{test}} = \tau\lambda$ can further improve generalization with minimal degradation in calibration. To show this, we evaluate IVON-LoRA on the larger distribution shift setting from Sec. 5.1.4 with different $\lambda_{\mathrm{test}}$. Specifically, we test the performance with $\tau$ set to 2, 5 and 10. Results are shown in Table 6. Notably, setting $\tau = 5$ and $\tau = 10$ improves the average accuracy of IVON-LoRA by $1.1\%$ and $1\%$, respectively, with virtually no increases in ECE and NLL.

Table 6: Evaluating OBQA-finetuned Llama-3.2-3B model on out-of-distribution MMLU subsets. The setting is the same as in Sec. 5.1.4, except varying effective sample size or inverse temperature of the posterior. Sampling with lower temperature can improve accuracy at similar calibration.

Figure 3: Uncertainty-Guided Pruning is essential for improving the performance of IVON-LoRA. We show the accuracy and log-likelihood of IVON-LoRA on commonsense reasoning, with the x-axis indicating the pruning strength.
We observe that UGP can significantly improve calibration with sometimes even minor improvement in accuracy, while also outperforming random pruning.

# 5.3 UGP Improves Calibration and Accuracy

Here we show that our parameter pruning method is effective by evaluating on the commonsense reasoning datasets from Sec. 5.1.1. We test the performance under different pruning strengths ranging from $0\%$ to $50\%$ of the parameters, and compare the performance of our method with random pruning. The results are shown in Fig. 3. First, we observe that pruning a small number of parameters can significantly improve the calibration with little degradation or sometimes even minor improvement in accuracy. Second, we observe that UGP consistently maintains higher accuracy and log-likelihood across varying pruning levels when compared with random pruning, showing that IVON’s posterior variance can be a good indicator of parameter importance. Overall, our results show that UGP can be a simple but effective way to improve the generalization and calibration of IVON-LoRA.

Figure 4: The training speeds of IVON and AdamW are similar. We plot validation accuracies (without pruning) of the two methods versus time in minutes. Results are averaged over 5 runs.

# 5.4 Computational Efficiency

Finally, we observe that the overhead of IVON-LoRA is negligible compared to AdamW. We profile our training code on an NVIDIA GeForce RTX 4090 GPU. In our test run with the WinoGrande-S dataset, the forward pass, loss computation, and backward pass of a training step take in total $167.0\,\mathrm{ms}$ on average. As for the overhead of IVON, the sampling procedure and the optimization step of each training step take $1.0\,\mathrm{ms}$ and $0.5\,\mathrm{ms}$ on average, respectively, which is less than $1\%$ of the per-step running time. The overall training speeds of IVON-LoRA and AdamW are similar, as shown in Fig. 4.
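The overhead claim can be checked directly from the reported timings:

```python
# Per-step timings reported above (milliseconds)
step_ms = 167.0                  # forward pass + loss computation + backward pass
ivon_extra_ms = 1.0 + 0.5        # IVON's sampling procedure + optimization step

overhead_fraction = ivon_extra_ms / step_ms
assert overhead_fraction < 0.01  # less than 1% of the per-step running time
```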
# 6 Limitations

A limitation shared with other Bayesian LoRA methods [51, 34] is that the learned posterior over the low-rank increment parameters is non-Gaussian, because the increment is a product of two Gaussian random variables. If this is a problem, a workaround could be to use a variational low-rank correction to correct the mean and variance of a Laplace approximation of the original model. van Niekerk and Rue [44] propose such a low-rank approach in the context of latent Gaussian models, and adapting these ideas to large language models may represent an interesting direction for future work.
Bayesian methods have recently been used to improve LoRA finetuning and, although they improve calibration, their effect on other metrics (such as accuracy) is marginal and can sometimes even be detrimental. Moreover, Bayesian methods increase computational overheads and require additional tricks for them to work well. Here, we fix these issues by using a recently proposed variational algorithm called IVON. We show that IVON is easy to implement and has similar costs to AdamW, and yet it can also drastically improve many metrics by using a simple posterior pruning technique. We present extensive results on billion-scale LLMs (Llama and Qwen series), going well beyond the scale of existing applications of IVON. For example, we finetune a Llama-3.2-3B model on a set of commonsense reasoning tasks and improve accuracy over AdamW by 1.3% and reduce ECE by 5.4%, outperforming AdamW and other recent Bayesian methods like Laplace-LoRA and BLoB. Overall, our results show that variational learning with IVON can effectively improve LoRA finetuning.
[ "cs.LG", "cs.AI", "cs.CL", "stat.ML" ]
# 1 Introduction

The global optimization of black-box objective functions under expensive, black-box constraints—where both are only accessible via costly point-wise evaluations—is a fundamental problem in fields such as machine learning (ML), engineering design, robotics, and the natural sciences. For instance, in automated machine learning [Hutter et al., 2019], black-box optimization techniques, and in particular Bayesian optimization (BO) [Garnett, 2023], are commonly used to tune hyperparameters of ML models to maximize predictive performance under strict constraints on model inference time, memory footprint, or energy consumption. This setup is common in frameworks like Auto-sklearn [Feurer et al., 2022], AutoKeras [Jin et al., 2023], or custom pipelines for neural architecture search under deployment constraints [Cai et al., 2019]. Constrained BO is also widely used in crashworthiness optimization [Raponi et al., 2019, Du et al., 2023] to efficiently tune design parameters for objectives like weight or energy absorption, under constraints such as intrusion depth or peak acceleration. In these settings, evaluating either the objective or the constraints can be costly and time-consuming, often relying on physical experiments or computationally intensive simulations. The challenge of efficiently addressing black-box constrained problems is further amplified in high-dimensional settings [Powell, 2019], meaning, in the context of BO, problems with dozens of decision variables. In fact, as the volume of the search space increases, sampling becomes sparse; surrogate models like Gaussian process regression become harder to fit due to the reduced correlation between points; optimization landscapes become more complex, with many local optima and constraint boundaries that are trickier to approximate; and feasible regions become narrow, non-convex islands in a vast space.
A large portion of evaluations may land in infeasible zones, and even identifying a single feasible point may consume a large portion—or even all—of the available evaluation budget. This renders many existing BO methods ineffective.

Our contribution. Our work builds directly upon the Scalable Constrained Bayesian Optimization (SCBO) algorithm [Eriksson and Poloczek, 2021], which introduced a scalable trust-region-based framework for constrained BO in high-dimensional settings. SCBO demonstrated that localizing the search using dynamically adapted trust regions, rather than relying on global surrogate optimization, offers both scalability and performance benefits. However, the trust regions defined by SCBO make only partial use of the information derived from modeling the problem constraints (specifically, to center the trust region and select the next candidate solutions), and the method is particularly effective when feasible regions are relatively easy to find, a condition that often does not hold in the most challenging constrained problems. We hence propose the Feasibility-Driven Trust Region Bayesian Optimization (FuRBO) algorithm, specifically designed to tackle high-dimensional constrained optimization problems where finding any feasible point is itself difficult. FuRBO retains the core idea of adaptive trust regions but shifts the focus to feasibility-first exploration relying on the constraint isocontour predicted by the surrogate model. To construct the trust region, FuRBO leverages both the objective and constraint surrogate models. At each iteration, FuRBO samples a set of points—referred to as inspectors—uniformly distributed within a ball of radius $R$ centered at the best candidate found so far. These inspectors are evaluated over the constraint landscape and used to estimate the likely location and shape of the feasible region.
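The inspector-sampling step just described can be sketched as follows; this is an illustrative implementation of uniform sampling in a ball, not FuRBO's actual code:

```python
import numpy as np

def sample_inspectors(center, R, n, rng):
    """Draw n points uniformly from the ball of radius R around `center`:
    uniform random directions, with radii scaled by U^(1/d) so that the
    density is uniform over the ball's volume."""
    d = center.size
    dirs = rng.standard_normal((n, d))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    radii = R * rng.random(n) ** (1.0 / d)
    return center + radii[:, None] * dirs

rng = np.random.default_rng(0)
best = np.zeros(10)                       # incumbent solution (illustrative)
inspectors = sample_inspectors(best, R=1.5, n=100, rng=rng)

# All inspectors lie inside the ball of radius R around the incumbent
assert np.all(np.linalg.norm(inspectors - best, axis=1) <= 1.5 + 1e-9)
```

The $U^{1/d}$ radius scaling matters in high dimensions: sampling the radius uniformly instead would concentrate inspectors near the center rather than spreading them over the ball's volume.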
The most promising inspectors, ranked using both the objective and constraint models, determine the position, shape, and size of the trust region for the next search step. Within this feasibility-guided trust region, FuRBO then applies Thompson sampling on the objective and constraint models to identify new promising points to query. Through a series of comprehensive experiments on the full BBOB-constrained COCO benchmark suite [Dufossé et al., 2022] and other physics-inspired benchmarks, both containing problems with increasing constraint complexity, we show that FuRBO, thanks to its landscape-aware mechanism that uses inspector sampling to guide the search toward promising feasible regions, either matches or outperforms other state-of-the-art alternatives for constrained BO, with evident superiority in settings in which feasibility is rare and hence difficult to locate.

Reproducibility: The code for reproducing our experiments, along with the whole set of figures, is available on GitHub1.

# 2 Related work

Bayesian optimization (BO) [Garnett, 2023] is a sample-efficient, model-based optimization framework for solving expensive black-box problems where function evaluations are costly or time-consuming. On continuous search spaces, a Gaussian Process (GP) is commonly used in BO to define a prior distribution over the unknown objective function, capturing assumptions about its smoothness and variability. The process begins with an initial set of evaluated points (also known as a Design of Experiments [Forrester et al., 2008]), typically selected through random sampling or space-filling designs. Once data from the initial evaluations is available, the GP is conditioned on these observations to yield a posterior distribution, which provides an approximation of the unknown objective function along with uncertainty estimates.
An acquisition function (AF) is then used to decide where to evaluate next by balancing exploration (sampling in regions of high uncertainty) and exploitation (sampling where good objective values are likely). This iterative process continues until the evaluation budget is exhausted or convergence is reached. Constrained BO extends the classical BO framework to settings where one must optimize an objective function subject to one or more unknown or expensive-to-evaluate constraints. This is common in real-world scenarios, such as engineering design or hyperparameter tuning, where feasible solutions must satisfy safety, performance, or resource limits. Although most work on BO has focused on unconstrained scenarios, several extensions to constrained optimization problems have been introduced in recent years. The constrained expected improvement (CEI), introduced by Schonlau et al. [1998] and popularized by Gardner et al. [2014], is the earliest and most widely used technique for handling constraints in BO. It extends the standard Expected Improvement (EI) AF by multiplying the improvement with the probability that a candidate solution is feasible. This allows the algorithm to prioritize sampling in regions that are not only promising in terms of objective value but also likely to satisfy the given constraints. The Predictive Entropy Search with Constraints (PESC) AF by Hernández-Lobato et al. [2016] focuses in particular on problems with decoupled constraints, in which subsets of the objective and constraint functions may be evaluated independently. It extends the entropy search AF by not only reducing uncertainty about the location of the global optimum, but doing so under the requirement that the solution must also be feasible. Picheny et al. [2016] proposed SLACK, which augments the standard constrained Bayesian optimization framework with slack variables to reformulate equality constraints as inequalities.
By combining this with an augmented Lagrangian approach and EI, they demonstrated improved performance on problems with equality constraints. Ariafar et al. [2019] advanced the augmented Lagrangian framework by integrating the Alternating Direction Method of Multipliers (ADMM), allowing a more scalable and structured optimization of constrained black-box problems. Their method also uses EI to select query points and is particularly suited to problems with multiple and decoupled constraints. Ungredda and Branke [2024] proposed a variant of the Knowledge Gradient (KG) AF, called the constrained Knowledge Gradient (cKG), to handle constrained optimization problems. In cKG, feasibility is incorporated into the Bayesian lookahead by weighting the expected utility from the objective GP with the estimated probability of feasibility from the constraint GPs, guiding the search toward points that are both promising and likely to be feasible. None of these methods, however, were designed with high-dimensional problems in mind, and they often struggle with scalability. This limitation was addressed in the design of the Scalable Constrained Bayesian Optimization (SCBO) framework by Eriksson and Poloczek [2021], which introduced a surrogate-based framework that models the objective and each constraint separately, allowing for greater flexibility and modularity in the modeling process. It uses trust regions as a core component, searching for new candidate solutions locally, in regions with predicted high feasibility and optimality, which allows robust scaling to high-dimensional constrained spaces. Despite the introduction of new methods in recent years (see the survey by Amini et al. [2025] for a comprehensive overview), SCBO remains a state-of-the-art approach for constrained high-dimensional BO. Unlike most of the methods reviewed above, SCBO, although developed in response to practical challenges, has also been rigorously benchmarked on standard test problems.
This has contributed to its robustness, establishing SCBO as a standalone optimization framework that is also accessible through the well-known BoTorch [Balandat et al., 2020] package. For this reason, we developed our method, FuRBO, building on SCBO as a foundation, but redefining the trust-region design procedure to more effectively address problems with narrow, hard-to-find feasible regions.

# 3 Problem definition

We consider the problem of minimizing a black-box objective function $f : \Omega \to \mathbb{R}$ subject to multiple constraints. The goal is to identify an optimal design point $x^* \in \Omega \subset \mathbb{R}^D$ that minimizes the objective while satisfying all constraints:

$$
\begin{array}{c}
x^* = \displaystyle\arg\min_{x \in \Omega} f(x) \\
\mathrm{subject\ to} \quad c_k(x) \le 0, \quad \forall k \in \{1, \ldots, K\}
\end{array}
$$

Alongside the objective, the constraint functions $c_k : \Omega \to \mathbb{R}$ return a vector $\mathbf{c}(x) = [c_1(x), \ldots, c_K(x)]$ that quantifies the feasibility of a sample. A point is considered feasible if it belongs to the set $\Omega_{\mathrm{feas}} = \{x \in \Omega \mid c_k(x) \le 0 \ \forall k \in \{1, \dots, K\}\}$. We assume a limited evaluation budget of $10D$ function evaluations, reflecting the practical setting of real-world applications where each evaluation is costly and only a small number of queries is affordable. This low-budget scenario is precisely where BO methods are most effective. After the total evaluation budget has been used, the algorithm recommends a solution $x_{\mathrm{best}} \in \Omega$.
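As a concrete illustration, membership in $\Omega_{\mathrm{feas}}$ reduces to checking the sign of every component of $\mathbf{c}(x)$; the two constraint functions below are hypothetical stand-ins, not constraints from the benchmark:

```python
import numpy as np

def constraint_values(x):
    """Hypothetical constraint vector c(x) = [c_1(x), c_2(x)] (illustrative only)."""
    return np.array([
        x[0] + x[1] - 1.0,    # c_1(x) <= 0  <=>  x0 + x1 <= 1
        0.25 - x[0] * x[1],   # c_2(x) <= 0  <=>  x0 * x1 >= 0.25
    ])

def is_feasible(x):
    """x belongs to Omega_feas iff c_k(x) <= 0 for every k."""
    return bool(np.all(constraint_values(x) <= 0.0))
```

For instance, $x = (0.5, 0.5)$ lies exactly on both constraint boundaries and is feasible, while $x = (0.9, 0.9)$ violates the first constraint.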
If $x_{\mathrm{best}} \in \mathcal{F}$, we measure the quality of a recommendation by its loss, i.e., the simple regret under feasibility: $l(x_{\mathrm{best}}) = f(x_{\mathrm{best}}) - f(x^*)$, where $x^*$ is the global optimum of the problem. If $x_{\mathrm{best}} \notin \mathcal{F}$, the solution is considered infeasible and its maximum constraint violation $V_{\mathrm{max}}(x) = \max_{k=1,\ldots,K} \max\{0, c_k(x)\}$ is returned instead.

# 4 Feasibility-Driven Trust Region Bayesian Optimization

To overcome some of the limitations of optimizing highly constrained problems, we propose a new algorithm: FuRBO. Our method shares the idea of using trust regions for BO introduced by Eriksson et al. [2019]. However, instead of using only the best evaluated sample to define the center of the trust region, we identify both its position and its extent using the information available from the surrogate models of the objective and constraint functions. What distinguishes our approach from SCBO is the formulation of the trust region. We therefore begin by outlining the SCBO framework, followed by a detailed explanation of how the trust region is defined in FuRBO.

# 4.1 SCBO algorithm

The SCBO framework extends the TuRBO algorithm [Eriksson et al., 2019] to address problems with black-box constraints, preserving most of the algorithm's structure. It begins by evaluating an initial design and fitting Gaussian Process (GP) models to the objective $f(x)$ and the constraints $c_k(x)$ for $k = 1, \ldots, K$. A trust region (TR) is initialized around the best feasible point; if none is found, it is centered at the point with the smallest constraint violation. The algorithm then iteratively proceeds until the evaluation budget is exhausted.
In each iteration, a batch of $q$ candidate points is identified within the current trust region using a Thompson Sampling (TS) [Thompson, 1933] AF: to return each of the $q$ points, a large set of $r$ candidate solutions is first sampled within the TR, a realization $\{(\hat{f}(x_i), \hat{c}_1(x_i), \ldots, \hat{c}_K(x_i)) \mid 1 \le i \le r\}$ is then drawn from the posterior of both the objective and the constraints, and the candidate with maximum utility among those predicted to be feasible is added to the batch. Once the batch of $q$ points is selected, the algorithm evaluates both the objective and constraint functions at these locations. The TR is then updated: its center is moved to the best feasible point found so far, the success/failure counters $(n_s, n_f)$ are updated, and the TR size $L$ (the same for all dimensions) is adjusted if either $n_s$ or $n_f$ reaches its update threshold, $\tau_s$ or $\tau_f$, respectively. If $L$ becomes smaller than a predefined threshold $L_{\mathrm{min}}$, the whole procedure is reinitialized. At the end of the optimization, SCBO recommends the best feasible point found, i.e., the point with the smallest objective value among those satisfying all constraints $c_k(x) \le 0$, for $k = 1, \ldots, K$. We point the reader to the original paper by Eriksson and Poloczek [2021] for more details.

# 4.2 FuRBO algorithm

The main novelty of FuRBO is the definition of the TR. The other optimization steps are shared with SCBO [Eriksson and Poloczek, 2021]. Nevertheless, for the sake of completeness, we present in this section the entire optimization flow. We provide an illustration of the TR update procedure in Figure 1 and the pseudocode of the entire FuRBO framework in Algorithm 1, with the lines that differ from SCBO highlighted in yellow for clarity.
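The Thompson-sampling batch selection of Section 4.1 can be sketched as follows. Since a full GP implementation is beside the point here, the posterior realizations are stubbed with noisy draws around toy functions; all names and toy functions are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def ts_batch(tr_lb, tr_ub, q, r, sample_objective, sample_constraints):
    """Thompson-sampling batch selection inside the trust region [tr_lb, tr_ub]:
    for each of the q batch slots, draw r candidates, sample one realization of
    the objective and constraint posteriors, and keep the best candidate among
    those predicted feasible (falling back to the smallest predicted violation
    if none is predicted feasible)."""
    batch = []
    for _ in range(q):
        cand = rng.uniform(tr_lb, tr_ub, size=(r, len(tr_lb)))
        f_hat = sample_objective(cand)        # (r,) realization of the objective posterior
        c_hat = sample_constraints(cand)      # (r, K) realization of the constraints
        feas = np.all(c_hat <= 0.0, axis=1)
        if feas.any():
            idx = np.argmin(np.where(feas, f_hat, np.inf))  # best predicted-feasible point
        else:
            idx = np.argmin(np.max(np.maximum(c_hat, 0.0), axis=1))
        batch.append(cand[idx])
    return np.array(batch)

# Stub posterior samplers: noisy draws around toy ground-truth functions.
sample_f = lambda X: np.sum(X**2, axis=1) + 0.01 * rng.standard_normal(len(X))
sample_c = lambda X: np.sum(X, axis=1, keepdims=True) - 1.0 + 0.01 * rng.standard_normal((len(X), 1))

X_next = ts_batch(np.zeros(2), np.ones(2), 3, 256, sample_f, sample_c)
```

With these stubs (minimize $\sum_j x_j^2$ subject to $\sum_j x_j \le 1$), the selected points cluster near the origin, i.e., in the predicted-feasible region with the best sampled objective.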
As shown in line 1 of Algorithm 1, FuRBO begins by sampling the entire search space $\Omega$, following the standard initialization procedure of vanilla BO. It then evaluates the sampled points on the true objective and constraint functions, generating a set $\mathcal{S} = \{(x_i, f(x_i), \mathbf{c}(x_i))\}_{i=1}^{N}$ of evaluated points. Here, $\mathbf{c}(x_i) = [c_1(x_i), \ldots, c_K(x_i)]$ is a vector with as many components as the number of constraints defining the problem. Let $\mathcal{F} = \{x_i \in \mathcal{S} \mid c_k(x_i) \le 0, \forall k = 1, \ldots, K\}$ be the feasible set, i.e., the subset of points in $\mathcal{S}$ that satisfy all constraints, and let $\bar{\mathcal{F}} = \mathcal{S} \setminus \mathcal{F}$ be its complement. At each iteration of the optimization procedure, until the evaluation budget is exhausted, the following steps are performed.

Figure 1: One iteration of FuRBO. The leftmost panel shows the true objective and constraint isocontours, with the global optimum in red. The next two panels show surrogate models of the objective (top) and aggregated constraint (bottom), built from evaluated points (black dots); the current best solution is marked in red. Inspectors (white crosses) are sampled around this point and ranked by feasibility and objective value. The top $P_{\%}$ (orange crosses) define the TR (red square), using both objective and constraint models. A new candidate (orange dot) is proposed within the TR, and the models are updated after evaluation (rightmost panel).

Rank samples. For all feasible samples $x_i \in \mathcal{F}$, define the ranking by the true objective value $f(x_i)$ (lower is better for minimization).
Let the feasible samples be sorted such that $f(x_1^{\mathrm{feas}}) \le f(x_2^{\mathrm{feas}}) \le \cdots \le f(x_{|\mathcal{F}|}^{\mathrm{feas}})$. For the infeasible samples, we first normalize each constraint dimension over all infeasible points: $\tilde{c}_k(x_i) = \frac{c_k(x_i)}{\max_{x_i \in \bar{\mathcal{F}}} |c_k(x_i)|}$ for $x_i \in \bar{\mathcal{F}}$, $k = 1, \ldots, K$. We adopt this definition of normalization to preserve the boundary between feasibility and infeasibility. We then define the maximum normalized constraint violation per sample $v(x_i) = \max_{k=1,\ldots,K} \tilde{c}_k(x_i)$, and rank infeasible samples by ascending $v(x_i)$ (smallest violation first): $v(x_1^{\mathrm{infeas}}) \le v(x_2^{\mathrm{infeas}}) \le \cdots \le v(x_{|\bar{\mathcal{F}}|}^{\mathrm{infeas}})$. Finally, we concatenate the ordered feasible and infeasible samples in $S_{\mathrm{ranked}} = \left[x_1^{\mathrm{feas}}, \ldots, x_{|\mathcal{F}|}^{\mathrm{feas}}, x_1^{\mathrm{infeas}}, \ldots, x_{|\bar{\mathcal{F}}|}^{\mathrm{infeas}}\right]$. The procedure described in this step is what defines the ranking metric $r$ in Algorithm 1.

Generate the inspectors. We select the top-ranked point as the best candidate solution $x_{\mathrm{best}}$ so far (line 2). Around this point, we sample a set of inspector points $\mathcal{T} = \{x_1, \ldots, x_N\}$ by first drawing from a multivariate normal distribution $\mathcal{N}(0, \sigma^2 I_D)$, normalizing each sample to lie on the unit hypersphere, then scaling by a random factor in $[0, R]$, and finally translating by $x_{\mathrm{best}}$, ensuring the inspectors are uniformly distributed within a ball $B(x_{\mathrm{best}}, R)$ of radius $R$ centered at $x_{\mathrm{best}}$ (line 4).

Definition of the TR. The inspector population $\mathcal{T}$ is ranked as described in the previous step, but based on the surrogate models $\mathcal{M}$ and $\{\mathcal{C}_k, \forall k = 1, \ldots, K\}$ of the objective and of the $K$ constraints. We hence define the ranked list $\mathcal{R}$ over $\mathcal{T}$ as $\mathcal{R} = \mathrm{rank}(\mathcal{T}; \mathcal{M}, \mathcal{C}_k) = \mathrm{sorted}(\mathcal{T}, \text{by increasing } r(x))$ (line 5).
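The ranking metric $r$ just described (feasible points by objective value, infeasible points by maximum normalized violation) can be sketched as:

```python
import numpy as np

def ranking_order(f_vals, C_vals):
    """Indices of the samples sorted by the FuRBO ranking metric: feasible
    points first (ascending objective value), then infeasible points by
    ascending maximum normalized constraint violation.
    f_vals: (N,) objective values; C_vals: (N, K) constraint values."""
    feas = np.all(C_vals <= 0.0, axis=1)
    feas_idx = np.flatnonzero(feas)
    infeas_idx = np.flatnonzero(~feas)
    feas_order = feas_idx[np.argsort(f_vals[feas_idx])]
    if infeas_idx.size:
        # Normalize each constraint over the infeasible points only; dividing
        # by max |c_k| preserves the sign, and hence the feasibility boundary.
        C_inf = C_vals[infeas_idx]
        C_norm = C_inf / np.max(np.abs(C_inf), axis=0)
        v = np.max(C_norm, axis=1)            # max normalized violation per point
        infeas_idx = infeas_idx[np.argsort(v)]
    return np.concatenate([feas_order, infeas_idx])
```

For example, with objective values $[3, 1, 2, 0]$ and a single constraint taking values $[-1, -0.5, 0.2, 0.5]$, the resulting order is $[1, 0, 2, 3]$: point 3 has the best objective value but is infeasible, so it ranks last.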
Then, we select the top $P_{\%}$ inspectors: $\mathcal{T}_{\mathrm{best}} = \{x \in \mathcal{R} \mid \mathrm{rank}(x) \le \lceil P \cdot N \rceil\}$ (line 6) and define the

# Algorithm 1 FuRBO algorithm

Require: Success threshold $\tau_s$, failure threshold $\tau_f$, success counter $n_s$, failure counter $n_f$, batch size $q$, inspector percentage $P_{\%}$, sample set $\mathcal{S} = \emptyset$, surrogate model of the objective $\mathcal{M}$, surrogate models of the constraints $\mathcal{C}_k$, sampling radius $R$, search space $\Omega$, initial trust region $\mathrm{TR} = \Omega$, Thompson sampling AF TS, function to optimize $f$, constraint functions $c_k$, ranking metric $r$
1: Evaluate initial design, update $\mathcal{S}$ and train surrogate models $(\mathcal{M}, \mathcal{C}_k)$
2: $x_{\mathrm{best}} \gets \arg\min_{x \in \mathcal{S}} r(x; f, c_k)$ ⊲ Update best candidate solution
3: while Optimization Budget Not Exhausted do
4: $\mathcal{T} \gets$ UniformBallSamples$(x_{\mathrm{best}}, R)$ ⊲ Sample inspectors uniformly within $B(x_{\mathrm{best}}, R)$
5: $\mathcal{R} \gets \mathrm{rank}(\mathcal{T}; \mathcal{M}, \mathcal{C}_k)$ ⊲ Rank inspectors based on $(\mathcal{M}, \mathcal{C}_k)$
6: $\mathcal{T}_{\mathrm{best}} \gets$ Top $P_{\%}$ of sorted $\mathcal{T}$ ⊲ Select the best inspectors ranked according to $\mathcal{R}$
7: $\mathrm{TR} \gets$ define_TR$(\mathcal{T}_{\mathrm{best}})$ ⊲ Define TR as the smallest hyperrectangle containing $\mathcal{T}_{\mathrm{best}}$
8: $X_{\mathrm{next}} \gets \mathrm{TS}((\mathcal{M}, \mathcal{C}_k), \mathrm{TR}, q)$ ⊲ Propose next $q$ configurations to evaluate within the TR
9: $Y \gets f(X_{\mathrm{next}})$ ⊲ Evaluate objective function on the new points
10: $C_k \gets c_k(X_{\mathrm{next}})$ ⊲ Evaluate constraint functions on the new points
11: $\mathcal{S} \gets \mathcal{S} \cup \{(X_{\mathrm{next}}, Y, C_k)\}$ ⊲ Update sample set
12: Fit surrogate models $(\mathcal{M}, \mathcal{C}_k)$ over $\Omega$
13: Update $n_s$ and $n_f$
14: if $n_s = \tau_s$ or $n_f = \tau_f$ then ⊲ Check whether a threshold for the radius update is reached
15: $R \gets$ adjust$(R)$ ⊲ Double/halve the sampling radius
16: end if
17: $x_{\mathrm{best}} \gets \arg\min_{x \in \mathcal{S}} r(x; f, c_k)$ ⊲ Update best candidate solution
18: end while
19: Return $x_{\mathrm{best}}$ ⊲ Return best solution

TR as the smallest hyperrectangle that contains all points in $\mathcal{T}_{\mathrm{best}}$. Let $x_j^{\mathrm{min}} = \min_{x \in \mathcal{T}_{\mathrm{best}}} x_j$ and $x_j^{\mathrm{max}} = \max_{x \in \mathcal{T}_{\mathrm{best}}} x_j$, for $j = 1, \ldots, D$; then $\mathrm{TR} = \prod_{j=1}^{D} \left[x_j^{\mathrm{min}}, x_j^{\mathrm{max}}\right]$ (line 7).

Find new candidate solutions, update sample set and posterior distributions. Following the SCBO algorithm, we use TS over the surrogate models $(\mathcal{M}, \mathcal{C}_k)$, restricted to the current TR, to propose a batch of $q$ new points to evaluate (line 8). We then evaluate both the objective function $f$ and the constraint functions $c_k$ at the proposed batch points $X_{\mathrm{next}}$ (lines 9-10).
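The inspector sampling and trust-region construction (lines 4-7 of Algorithm 1) can be sketched as follows; the score used to pick the top inspectors is a toy stand-in for the ranking metric, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_inspectors(x_best, R, n):
    """Draw n inspectors within the ball B(x_best, R): sample Gaussian
    directions, project them to the unit hypersphere, scale by a random factor
    in [0, R], and translate by x_best (the procedure of Section 4.2)."""
    d = rng.standard_normal((n, len(x_best)))
    d /= np.linalg.norm(d, axis=1, keepdims=True)   # unit directions
    radii = rng.uniform(0.0, R, size=(n, 1))        # random radial scaling
    return x_best + radii * d

def trust_region(inspectors, scores, pct=0.10):
    """Smallest hyperrectangle containing the top pct fraction of inspectors,
    where lower score = better (e.g., the FuRBO ranking metric)."""
    k = max(1, int(np.ceil(pct * len(inspectors))))
    best = inspectors[np.argsort(scores)[:k]]
    return best.min(axis=0), best.max(axis=0)       # per-dimension TR bounds

x_best = np.array([0.5, 0.5])
insp = sample_inspectors(x_best, R=0.3, n=200)
lb, ub = trust_region(insp, scores=np.sum(insp**2, axis=1))  # toy score
```

Because the hyperrectangle is rebuilt from the top-ranked inspectors at every iteration, both its position and its extent can change sharply between steps, which is exactly the flexibility FuRBO exploits.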
We update the set of evaluated samples $\mathcal{S}$ (line 11) and refit the surrogate models $\mathcal{M}$ and $\mathcal{C}_k$ over the full search space $\Omega$ using the updated sample set $\mathcal{S}$ (line 12). We use the radius $R$ of the uniform distribution to dynamically adjust the scale of the search around $x_{\mathrm{best}}$. In the distribution $\mathcal{T} \sim \mathrm{Uniform}(x_{\mathrm{best}}, R)$, we initialize $R = 1$. Given that the domain $\Omega$ is normalized to $[0, 1]^D$ in our implementation, the initial distribution of samples covers the entire domain, regardless of the exact position of $x_{\mathrm{best}}$. Similarly to SCBO, two counters are maintained to track optimization progress (line 13): $n_s$, the number of successes (iterations where the best solution improves), and $n_f$, the number of failures (iterations without improvement). At each iteration, the radius is updated according to the following rules (lines 14-15): it is doubled if $n_s = \tau_s$, halved if $n_f = \tau_f$, and left unchanged otherwise. After each update, both counters are reset $(n_s \gets 0, n_f \gets 0)$. The thresholds $\tau_s$ and $\tau_f$ are user-defined hyperparameters controlling the frequency of zoom-in/zoom-out behavior. A very small radius indicates stagnation or convergence; optimization is stopped or restarted when $R \le \varepsilon$, with $\varepsilon$ chosen by the user.

# 5 Experiments

# 5.1 Experimental setup

We evaluate the performance of FuRBO against the following state-of-the-art methods: Scalable Constrained Bayesian Optimization (SCBO) by Eriksson and Poloczek [2021], constrained Expected Improvement (cEI) introduced by Schonlau et al. [1998], Constrained Optimization by Linear Approximation (COBYLA) from Powell [1994], constrained Covariance Matrix Adaptation Evolution Strategy (CMA-ES) by Hansen [2006], and Random Search (for the URLs of the implementations used, see the References; our code is available on GitHub). We compare these algorithms on the constrained black-box optimization benchmarking (BBOB-constrained) suite from the COCO package [Hansen et al., 2021]. The results of the constrained BBOB benchmark for FuRBO and SCBO are discussed in Sec. 5.2.1, and the comparison of FuRBO to multiple baselines in Sec. 5.2.2. In Appendix F, we extend our comparison between FuRBO and SCBO to the 30-dimensional Keane bump synthetic benchmark, as well as several physics-inspired problems with dimensionalities ranging from 3 to 60.

Constrained BBOB. We use the COCO/BBOB-constrained benchmark [Hansen et al., 2021], comprising 4,860 constrained black-box functions generated by combining 9 base functions, 6 constraint sets of increasing severity, 6 dimensions, and 15 instances [Dufossé and Atamna, 2022]. The functions are defined on a continuous search space and present different landscape characteristics (separable, ill-conditioned, and multi-modal functions). For our evaluation, we use the full suite to compare FuRBO with its closest relative, SCBO. We consider 3 instances and 10 repetitions with different random seeds per function-constraint combination in dimensions 2, 10, and 40. For comparisons against all the mentioned baselines, we select three representative functions from the suite: Sphere (separable), Bent Cigar (ill-conditioned), and Rotated Rastrigin (multimodal) in 10 dimensions, each with medium-complexity constraint structures. The same experimental setup (initial design size 3D for the BO-based algorithms and a total of 30D evaluations) is applied to all algorithms.

Baselines Setup.
FuRBO and SCBO are evaluated on these functions, each repeated 10 times with different initial designs and random seeds. The initial design size is 3D, the batch size is 3D, and the total evaluation budget is 30D, where D is the problem dimension. FuRBO uses a TR defined from the top $10\%$ of inspectors sampled around the current best solution. The sampling radius for the inspectors $R$ is initialized to 1, doubled when the success counter reaches $\tau_s = 2$, and halved when the failure counter reaches $\tau_f = 3$. Optimization restarts if $R$ reaches a minimum threshold $\varepsilon = 5 \times 10^{-8}$. CMA-ES and COBYLA are initialized using the default hyperparameter settings recommended by their respective implementations [Hansen et al., 2019, Virtanen et al., 2020]. For random sampling, we use a uniform distribution over the search space.

Performance metrics. We evaluate performance in terms of loss (simple regret), averaged over 30 runs with one standard error, and CPU time. Any feasible solution is preferred over infeasible ones, which are assigned the worst observed objective value for a given problem setting across all compared methods [Hernández-Lobato et al., 2017]. CMA-ES and COBYLA are initialized from the best point of the initial sample set generated for the BO methods.

Hardware and Runtime. All experiments are conducted on an Intel i9-12900K 3.20 GHz CPU. As an example of FuRBO's runtime, the compute time for the constrained BBOB 10D functions ranged from 5.2 s to 250 s on CPU, depending on the complexity of the function landscape and the severity of the constraints, for a total of 33 h on CPU.

# 5.2 Results

5.2.1 Constrained BBOB in 10D. Figure 2 presents the loss (simple regret) convergence curves for FuRBO and SCBO across the full constrained BBOB suite in 10 dimensions.
Both algorithms are run with a batch size $q = 3D$ and a total evaluation budget of $30D$ to mimic real-world scenarios where function evaluations rely on very expensive procedures that can be run on parallel nodes (in Appendix E.4 we provide an ablation study on the batch size $q$).

Figure 2: Loss convergence curves on the full constrained BBOB suite at 10D. Results are averaged across 3 instances with 10 repetitions each. The plot shows the mean loss with shaded areas indicating one standard error. FuRBO consistently outperforms SCBO on more severely constrained problems and performs comparably on easier ones.

Overall, FuRBO consistently outperforms SCBO on problems with a higher number of constraints and active constraints (rightmost columns), indicating its superior performance in severely constrained scenarios. Notably, for configurations with 17 or more constraints, FuRBO achieves faster convergence and lower final regret. For simpler problems (leftmost columns with 1-3 constraints), FuRBO performs comparably to SCBO, and in some cases the two methods are nearly indistinguishable in terms of convergence speed and final performance. The exact final performances of FuRBO and SCBO are reported in Table 2 in Appendix A, where we also assess statistical significance using the Wilcoxon rank-sum test. The analysis confirms that FuRBO achieves significantly better performance on the majority of the 10D problems. Figures similar to Figure 2, but for 2 and 40 dimensions, are available in Appendix C. While FuRBO and SCBO perform similarly in 2D, FuRBO shows clear superiority in 40D, where it succeeds in finding feasible solutions in cases where SCBO fails within the given evaluation budget. However, in the most strongly constrained scenarios, FuRBO also struggles to identify feasible regions, suggesting that the chosen hyperparameter settings may not be optimal for such cases; we will investigate this further in future work.
Figure 3: Convergence comparison of FuRBO against SCBO, CEI, COBYLA, CMA-ES, and random sampling on $f_{\mathrm{sphere}}$, $f_{\mathrm{bent\_cigar}}$, and $f_{\mathrm{rast\_rot}}$ in 10D. Curves show the mean loss over 10 repetitions of the same instance, with shaded regions indicating one standard error.

The improvement observed in highly constrained cases highlights the effectiveness of FuRBO's feasibility-aware TR strategy. Unlike SCBO, FuRBO defines its TR based on the area predicted to be most promising, considering both the feasibility and the optimality predicted by the surrogate models over the entire domain, rather than relying solely on the best evaluated sample. This allows the TR to shift more freely across the domain and adapt its size dynamically: it contracts or expands according to the predicted distribution of high-quality, feasible regions. As a result, FuRBO is better able to zoom in on narrow feasible areas and escape local minima, offering faster convergence and more robust performance in complex, constrained landscapes.

5.2.2 Comparison to SOTA baselines. Figure 3 shows the convergence of FuRBO compared to SCBO, CEI, COBYLA, CMA-ES, and random sampling on three representative BBOB-constrained functions in 10 dimensions: Sphere, Bent Cigar, and Rotated Rastrigin. To ensure a comparable setup for all methods, we use a batch size $q = 1$ for the BO methods (FuRBO, SCBO, and CEI), meaning that only one candidate solution is returned by the AF and evaluated at each iteration. FuRBO consistently achieves the lowest final loss and fastest convergence across all cases, closely followed by SCBO. This highlights that the new definition of the TR introduced in FuRBO is more beneficial when multiple solutions, potentially spread within the TR, are returned at each iteration.
On the ill-conditioned $f_{\mathrm{bent\_cigar}}$ problem, CEI converges to low-loss regions faster than all baselines; however, FuRBO demonstrates greater exploitation capability by converging to a solution with a statistically significantly lower objective value. For the multimodal $f_{\mathrm{rast\_rot}}$, FuRBO again outperforms the rest, followed directly by SCBO. CMA-ES and random sampling perform poorly across all functions, highlighting the benefit of surrogate models in severely constrained and expensive settings. These results are confirmed in Table 4 in Appendix A.
Bayesian optimization is a powerful tool for solving real-world optimization tasks under tight evaluation budgets, making it well-suited for applications involving costly simulations or experiments. However, many of these tasks are also characterized by the presence of expensive constraints whose analytical formulation is unknown and often defined in high-dimensional spaces where feasible regions are small, irregular, and difficult to identify. In such cases, a substantial portion of the optimization budget may be spent just trying to locate the first feasible solution, limiting the effectiveness of existing methods. In this work, we present a Feasibility-Driven Trust Region Bayesian Optimization (FuRBO) algorithm. FuRBO iteratively defines a trust region from which the next candidate solution is selected, using information from both the objective and constraint surrogate models. Our adaptive strategy allows the trust region to shift and resize significantly between iterations, enabling the optimizer to rapidly refocus its search and consistently accelerate the discovery of feasible and good-quality solutions. We empirically demonstrate the effectiveness of FuRBO through extensive testing on the full BBOB-constrained COCO benchmark suite and other physics-inspired benchmarks, comparing it against state-of-the-art baselines for constrained black-box optimization across varying levels of constraint severity and problem dimensionalities ranging from 2 to 60.
# 1 Introduction

LLMs have demonstrated remarkable prowess across various tasks [90, 7], yet their application to finance [40, 78, 75] requires continually pushing the limits of model capabilities [80]. Although several specialized benchmarks [76, 36, 5] already evaluate LLMs on core financial tasks, they suffer from two critical limitations. First, they are overwhelmingly monolingual and monomodal, whereas real-world financial applications involve mixed inputs that require multilingual semantic parsing, multimodal inputs of text, tables, charts, and audio, and cross-cultural contextual understanding. Second, existing benchmarks rely on simple aggregation without difficulty-aware selection, leading to over-weighting of easy and duplicated tasks. For example, in FinBen [76], 8 of 36 datasets are textual analysis tasks, 7 of which are simple enough for zero-shot LLMs to exceed $60\%$ accuracy, inflating overall scores and increasing evaluation costs while failing to expose model weaknesses or provide meaningful insights. To address these gaps, we introduce MULTIFINBEN, the first unified benchmark that spans three modalities (textual, visual, and audio), three linguistic settings (monolingual, bilingual, and multilingual), and seven task categories at three difficulty tiers, comprising 34 distinct datasets in five languages: English, Chinese, Japanese, Spanish, and Greek. In contrast to benchmarks built on monolingual or cross-lingual corpora, we introduce the first multilingual financial task, comprising PolyFiQA-Easy and PolyFiQA-Expert, curated from authentic financial reports and news to require joint reasoning over mixed-language inputs, with questions designed and validated by financial analysts through multi-stage reviews to ensure quality and domain fidelity. For multimodal understanding, we develop the first OCR-embedded visual-text dataset covering balance-sheet tables, chart-laden slides, and complex financial imagery.
Additionally, we propose a difficulty-aware selection strategy that highlights frontier challenges and avoids overrepresentation by retaining, per modality-language-task tier, the one dataset with the largest inter-model gap. We evaluate 22 frontier models on MULTIFINBEN. Overall, GPT-4o leads with $50.67\%$, while monomodal and monolingual models lag far behind. Unimodal models fail on unsupported inputs, underscoring the necessity of cross-modal reasoning. A 10.29 pp gap between multilingual $(7.50\%)$ and monolingual $(17.79\%)$ QA reveals that existing models still struggle with cross-lingual generalization in financial contexts. Our structured difficulty design exposes steep performance drops from $31.24\%$ (easy) to $6.63\%$ (hard), reflecting the gap between current model capabilities and real-world financial task complexity. Crucially, newly introduced datasets, such as PolyFiQA-Easy/Expert and the first financial OCR QA task, surface among the hardest challenges, addressing modality and linguistic gaps overlooked by prior benchmarks. These results demonstrate that MULTIFINBEN not only reveals current model limitations in cross-modal, cross-lingual, and domain-specific applications but also provides a systematic framework to guide future model improvements and the development of harder, more realistic financial datasets.

# 2 MULTIFINBEN Benchmark

Table 2: Overview of MULTIFINBEN.
1 https://www.sec.gov/
2 https://github.com/chakki-works/chABSA-dataset
3 https://www.athexgroup.gr/web/guest/company-fin.-statements/
4 Please see Appendix H.1.
5 https://www.bvl.com.pe/en/home-general
6 https://efpa-eu.org/
7 https://wp.lancs.ac.uk/cfie/fns2023/

# 2.1 Overview

In this section, we present MULTIFINBEN, the first unified benchmark for evaluating LLMs in the financial domain across diverse modalities, linguistic settings, and tasks (Table 2).
Structured difficulty-aware benchmarking Rather than aggregating all available datasets across modalities, languages, and tasks, MULTIFINBEN is organized hierarchically across modality, language, and task category, and for each configuration, we retain a single dataset per difficulty level. This selection policy keeps the benchmark compact and interpretable while enabling seamless integration of new tasks, languages, and modalities [67, 16]. To estimate dataset difficulty, we compute the average standardized performance of two reference models: GPT-4o [25] and LLaMA3.1-70B-Instruct [13]. These models are chosen to span the spectrum of current closed- and open-source capabilities, providing a practical and interpretable signal of how challenging a dataset is across the frontier. Based on this average score, we assign each dataset to one of three difficulty levels: easy (average $> 60$), medium (average 20–60), and hard (average $< 20$). The thresholds are calibrated to reflect distinct performance regimes: scores above 60 typically indicate tasks that both models have largely mastered, scores below 20 indicate consistent failure or poor generalization, and the 20–60 band captures the transitional region where models begin to show meaningful variation (Table 5, Figure 2). This stratification enables layered evaluation, from verifying basic functionality to identifying open challenges and tracking incremental gains. Once difficulty levels are established, we select one dataset per modality–language–task configuration within each tier. To prioritize datasets that are most informative for evaluation, we rank candidates by the absolute performance gap between the two models. Larger gaps indicate tasks that reveal capability boundaries and model-specific weaknesses, which are especially valuable for benchmarking progress.
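The tiering and per-configuration selection policy can be sketched in a few lines; dataset names and scores below are illustrative placeholders, not actual benchmark values:

```python
# Illustrative sketch of the difficulty-aware selection policy.
# Scores are hypothetical standardized results (0-100) of the two
# reference models (GPT-4o and LLaMA3.1-70B-Instruct).

def difficulty_tier(gpt4o_score: float, llama_score: float) -> str:
    """Assign a difficulty tier from the average reference-model score."""
    avg = (gpt4o_score + llama_score) / 2
    if avg > 60:
        return "easy"
    if avg < 20:
        return "hard"
    return "medium"

def select_per_config(candidates: list) -> dict:
    """Pick one dataset per modality-language-task configuration:
    largest inter-model gap wins; ties break toward lower overall
    performance to preserve headroom."""
    return max(
        candidates,
        key=lambda d: (abs(d["gpt4o"] - d["llama"]),
                       -(d["gpt4o"] + d["llama"])),
    )

candidates = [
    {"name": "ds_a", "gpt4o": 72.0, "llama": 55.0},  # gap 17
    {"name": "ds_b", "gpt4o": 70.0, "llama": 40.0},  # gap 30 -> chosen
]
chosen = select_per_config(candidates)
```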
In cases where multiple datasets exhibit similar contrast, we select the one with lower overall performance to preserve headroom for future improvement.

Figure 2: Structured difficulty-aware benchmarking of English datasets.

Linguistics In MULTIFINBEN, we are the first to evaluate financial LLMs across three linguistic settings: monolingual, bilingual, and multilingual. In the monolingual setting, models operate within a single language; the bilingual setting involves cross-lingual transfer; and the multilingual setting requires processing across multiple languages simultaneously. To support this, we include five typologically and economically diverse languages: English, Chinese, Japanese, Spanish, and Greek, balancing linguistic diversity, writing systems, and global financial regions across North America, East Asia, Southern Europe, and Latin America.

Modalities In addition to text, we are also the first to include both visual and audio modalities in a unified financial benchmark for evaluating LLMs. In the visual modality, we focus on charts, tabular data, and text embedded in images, reflecting formats commonly found in financial reports and regulatory documents. In the audio modality, we include spoken financial content such as earnings calls, which contain spontaneous speech, domain-specific terminology, and prosodic cues that are essential for understanding intent, sentiment, and narrative context [55, 4]. By evaluating models on both modalities, we enable a more rigorous and realistic assessment of their ability to perform in high-stakes, information-rich financial environments.

Task categories Inspired by FinBen [77], we organize candidate datasets under a unified taxonomy of seven core financial NLP tasks: Information Extraction (IE) focuses on converting unstructured financial text into structured outputs. Textual Analysis (TA) assesses a model's ability to interpret sentiment, topic, or tone in financial discourse.
Question Answering (QA) evaluates comprehension of financial content through question answering. Text Generation (TG) focuses on producing coherent, informative, and factually accurate financial text. Risk Management (RM) targets detection or analysis of risk-related signals. Forecasting (FO) measures a model's ability to predict market trends or investor behavior. Decision-Making (DM) simulates complex financial decision processes.

Datasets To support MULTIFINBEN, we aggregate datasets from two sources: newly introduced datasets and existing benchmarks. For new datasets, we introduce PolyFiQA-Easy and PolyFiQA-Expert to fill the gap in multilingual financial comprehension and reasoning, and two OCR-based datasets constructed from financial PDFs, the dominant format in global financial communication. For existing resources, we incorporate datasets from FinBen [77], AuditWen [23], FinanceIQ [70], chABSA 2, FLARE-ES [87], Plutus-Ben [54], OpenFinLLMs [79], and FinAudio [5], and apply our difficulty-aware mechanism to ensure balanced, multilingual, and multimodal evaluation.

# 2.2 Text Benchmark

# 2.2.1 Monolingual

For English, we include thirteen datasets from FinBen [77] covering all task categories. For IE, SC [43] detects causality in filings and FinRED [61] extracts relations in news and earnings calls, both evaluated by F1 [64], while FINER-ORD [60] performs named entity recognition using entity F1 [12]. For TA, Headlines [63] extracts actionable signals (avg F1) [64] and TSA [11] conducts sentiment analysis (accuracy [42]). For QA, FinQA [8] and TATQA [91] assess numerical reasoning in financial reports and tables, whereas XBRL-Math [9] focuses on equation inference (accuracy [42]). For TG, EDTSUM [78] and ECTSUM [45] summarize financial news and earnings calls, respectively, using ROUGE-1 [37]. For RM, CCF [14] targets fraud detection (Matthews Correlation Coefficient, MCC [44]).
For FO, BigData22 [65] addresses stock movement prediction (MCC [44]). For DM, MSFT [84] evaluates LLM trading strategies via Sharpe Ratio (SR) [62]. For Chinese, we include four datasets: RRE [23] for semantic relation extraction (IE); AIE and LNE [23] for audit classification of targets and legal references (TA); and FinanceIQ [88] for multiple-choice financial QA, all evaluated by classification accuracy [42]. For Japanese, we include only one public dataset, chABSA [33], from the Japanese financial benchmark [21], which performs sentiment classification in securities filings, evaluated by macro F1 [20]. For Spanish, we include four datasets from FLARE-ES [87]: MultiFin [30] and TSA [51] for sentiment and topic classification (TA, accuracy [42]); EFPA for multiple-choice QA (accuracy [42]); and FNS-2023 [85] for annual report summarization (TG, ROUGE-1 [37]). For Greek, we include four datasets from Plutus-Ben [54], spanning IE, TA, QA, and TG. In IE, GRFinNUM [54] performs NER for numeral categories (entity F1 [12]). In TA, GRMultiFin [31] classifies financial headlines (accuracy [42]). In QA, GRFinQA [54] handles domain-specific multiple-choice reasoning (accuracy [42]). In TG, GRFNS-2023 [85] performs financial report summarization (ROUGE-1 [37]). More details are in Appendix F.

# 2.2.2 Bilingual

Beyond monolingual benchmarks, we incorporate DOLFIN [47], a bilingual financial machine translation (MT) dataset under the TG task. To align with our language scope, we include the English–Spanish (En–Es) subset from DOLFIN of 1,932 document-level aligned segments. DOLFIN enables assessment of translation coherence and domain fidelity. However, as a parallel resource, it remains limited to single-direction translation between language pairs and does not involve simultaneous multilingual reasoning. Translation quality is measured using Comet-da-22 [57] (Unbabel/wmt22-comet-da), a reference-based metric optimized for document-level MT.
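Several of the generation tasks above are scored with ROUGE-1. As a rough illustration of the metric (actual evaluations use standard ROUGE implementations with proper tokenization and stemming), unigram-overlap F1 can be sketched as:

```python
from collections import Counter

def rouge1_f1(prediction: str, reference: str) -> float:
    """Simplified ROUGE-1 F1: clipped unigram overlap between a
    prediction and a single reference (whitespace tokenization only)."""
    pred = Counter(prediction.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((pred & ref).values())  # clipped match counts
    if overlap == 0:
        return 0.0
    precision = overlap / sum(pred.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```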
# 2.2.3 Multilingual

To address the scarcity of multilingual datasets aligned with our modality–language–task framework, we introduce the first datasets explicitly designed for multilingual financial reasoning grounded in native-language sources, spanning five economically significant languages (English, Chinese, Japanese, Spanish, and Greek). Our datasets move beyond existing resources by directly sourcing native-language financial disclosures and news. This approach preserves real-world complexity and ensures linguistic authenticity, enabling evaluation of LLMs in real-world multilingual financial scenarios. As financial decision-making becomes increasingly cross-border, our datasets offer a critical resource for building globally competent language models.

Task definition We define a multilingual financial task that requires models to reason over heterogeneous, real-world financial disclosures. Each instance consists of a context $C = \{R, N_{\mathrm{en}}, N_{\mathrm{zh}}, N_{\mathrm{ja}}, N_{\mathrm{es}}, N_{\mathrm{el}}\}$, where $R$ denotes a financial report (10-K or 10-Q) and $N_{\mathrm{lang}}$ represents contemporaneous news articles in English, Chinese, Japanese, Spanish, and Greek. These news articles are published near the financial report's release date and thematically relate to the corresponding report. Given a carefully designed natural language question $q$ and the associated multilingual context $C$, the model must generate an answer $a$ grounded in integrated multilingual information. This task challenges models to perform multilingual cross-document reasoning, integrating disparate pieces of financial signals expressed across diverse languages, as real-world financial analysts must do under time-sensitive, high-stakes conditions.
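Concretely, one instance of this task can be represented as a report plus per-language news bundles that are flattened into a single prompt. The field names, placeholder strings, and `build_prompt` helper below are our own illustration, not the benchmark's actual data schema:

```python
# Illustrative structure of one PolyFiQA-style instance, following the
# task definition above. All names and placeholder values are hypothetical.
instance = {
    "context": {
        "report": "10-K excerpt: Consolidated Balance Sheets (placeholder)",
        "news": {
            "en": ["placeholder EN article"],
            "zh": ["placeholder ZH article"],
            "ja": ["placeholder JA article"],
            "es": ["placeholder ES article"],
            "el": ["placeholder EL article"],
        },
    },
    "question": "How did the quarter's cash position evolve? (placeholder)",
    "answer": "Expert-written, grounded free-text answer (placeholder)",
}

def build_prompt(inst: dict) -> str:
    """Concatenate the report and all multilingual news, then the question."""
    parts = [inst["context"]["report"]]
    for lang, articles in inst["context"]["news"].items():
        parts.extend(f"[{lang}] {article}" for article in articles)
    return "\n\n".join(parts) + "\n\nQuestion: " + inst["question"]
```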
Data source We introduce two novel datasets: PolyFiQA-Easy and PolyFiQA-Expert, targeting distinct levels of reasoning complexity. Financial reports are sourced from the SEC, covering 10-K and 10-Q filings 3. To maintain focus and reduce noise, we extract three core statements from lengthy financial reports: Comprehensive Income, Consolidated Balance Sheets, and Cash Flows 4. Native-language news articles are collected based on publication date proximity and topic alignment (Appendix H.1). For low-resource languages with sparse coverage, expert-generated news-style texts are validated by native speakers and financial professionals (Appendix H.2).

Expert-in-the-loop data construction To ensure benchmark fidelity and domain rigor, we adopt an expert-in-the-loop pipeline. Three financial professionals (Appendix H.5) with expertise in economics, business, and accounting oversaw all phases of construction, including news selection, question authoring, guideline development, annotation, and quality control. News articles were meticulously screened for strong alignment with financial reports. Questions were crafted to anchor in real analytical tasks and span two difficulty tiers: easier questions in PolyFiQA-Easy and more complex ones in PolyFiQA-Expert (Appendix H.3). This structure supports fine-grained model assessment across reasoning levels. Rigorous tier-specific annotation guidelines (Appendix H.4) were refined through iterative pilot annotation rounds, including tier-specific structures and formatting protocols, to promote inter-annotator consistency. In total, 57 expert hours were logged in Label Studio (Appendix H.6), establishing a high-quality, reproducible, auditable, and streamlined workflow aligned with best practices in benchmark creation. The raw datasets were further converted into structured instruction datasets with task-specific prompts thoughtfully crafted by financial professionals (Appendix H.9).
Quality validation Evaluating the quality of free-text generation datasets remains a persistent challenge in both QA and abstractive summarization. To ensure annotation reliability, we adopt a structured scoring framework inspired by prior summarization benchmarks [68, 17], evaluating each instance along three key dimensions: Relevance (scored 1–4) captures whether the response includes the key information required to answer the question; Consistency (scored 1–3) measures factual accuracy, especially numerical values. Each question is initially annotated by one expert and independently scored by two additional reviewers using detailed, pilot-refined guidelines (Appendix H.7). Only responses with cumulative scores above 8 are retained in the final dataset. To validate scoring reliability, we report inter-annotator agreement as a normalized difference percentage across dimensions. PolyFiQA-Easy and PolyFiQA-Expert achieved average inter-annotator agreements of $89.38\%$ and $91.21\%$, respectively, demonstrating the benchmark's high quality and scoring consistency.

Evaluation metric We adopt ROUGE-1 [37] to measure unigram overlap between model predictions and references, offering a proxy for content coverage and factual alignment in multilingual QA tasks.

# 2.3 Vision Benchmark

For the vision modality in the financial domain, we introduce two novel datasets, EnglishOCR and SpanishOCR, under the IE task, and incorporate an existing dataset, TableBench [79], under the QA task from Open-FinLLMs [79]. TableBench [79] evaluates multimodal reasoning over tabular images with 450 questions covering comparison and data retrieval, reflecting realistic financial decision-making scenarios.

# 2.3.1 Optical Character Recognition (OCR)

PDF is the predominant format for disseminating financial content such as reports, regulatory filings, and legal documents.
However, current financial vision tasks [79] largely focus on image understanding rather than the complex structure of document-based financial information. To address this gap, we introduce the first financial OCR task aimed at converting scanned financial PDF files (in image format) into structured HTML text. We further construct two novel datasets in English and Spanish, the two most widely used languages in financial communication within the United States.

Task definition The OCR task is defined as a structured information extraction problem from document images. Each financial PDF document is segmented into a set of page-level images $\{I_1, I_2, \ldots, I_n\}$, where each image $I_i$ corresponds to a single page. The model processes each image individually and generates a corresponding HTML-formatted text sequence $T_i$, such that $T_i = \mathrm{OCR}(I_i)$. The objective is not only to recognize the textual content but also to recover the structural and semantic layout embedded in the document, including headings, tables, figures, and paragraph segmentation.

Data source Following the task design, we construct two novel datasets: EnglishOCR and SpanishOCR. The EnglishOCR dataset was built using U.S. SEC EDGAR filings, which are primarily available in HTML format. For documents without a native PDF version, we used wkhtmltopdf to render PDF files from the corresponding HTML. Each document was then parsed into page-level PNG images along with its corresponding HTML content. To align each image with the most relevant HTML snippet, we applied cosine similarity using a Sentence-BERT (SBERT) model on OCR-extracted text and HTML sentences. The SpanishOCR dataset was constructed using source PDFs obtained from Peruvian public regulatory documents.
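The page-to-snippet alignment used for EnglishOCR can be sketched as follows. As a simplification, we substitute a bag-of-words cosine similarity for the Sentence-BERT embeddings used in the actual pipeline; the snippets below are illustrative:

```python
import math
from collections import Counter
from typing import List

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words count vectors."""
    dot = sum(a[token] * b[token] for token in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def align_page(ocr_text: str, html_snippets: List[str]) -> str:
    """Return the candidate HTML snippet most similar to the OCR'd page
    text. (The real pipeline embeds both sides with SBERT instead of
    bag-of-words vectors.)"""
    page = Counter(ocr_text.lower().split())
    return max(html_snippets,
               key=lambda s: cosine(page, Counter(s.lower().split())))

snippets = [
    "<table> total assets 500 </table>",
    "<p> forward-looking statements </p>",
]
best = align_page("Total assets 500", snippets)
```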
Each document was parsed into page-level PNG images, and the corresponding HTML content was generated by applying OCR to extract text and then wrapping it in appropriate HTML tags to preserve basic document structure.

Evaluation metric We evaluate model outputs using ROUGE-1 [37], a unigram overlap metric that measures lexical similarity between model predictions and references.

# 2.4 Audio Benchmark

To evaluate the performance of LLMs and AudioLLMs in financial audio scenarios, we include two English audio datasets from FinAudio [5] under the TG task, covering Automatic Speech Recognition (ASR) and speech summarization tasks. For ASR, the MDRM [55] dataset consists of short financial audio clips from earnings conference call recordings, comprising 22,208 clips (87 hours). This dataset assesses transcription accuracy using Word Error Rate (WER) [53]. For Speech Summarization, the FinAudioSum [5] dataset includes 64 long financial audio recordings (55 hours) paired with abstractive summaries. Performance is evaluated using ROUGE-L [37]. More details are in Appendix G.

# 3 Experiments

Table 3: Overview of evaluated models with access type, modality, language scope, and MOF class. 1 The model is fine-tuned from other organizations' models, and its MOF class is evaluated only on the fine-tuned portion.

Models We include 22 models (Table 3) across text, vision, audio, and multimodal tasks, considering their openness under the MOF framework $[74]^{5}$. For comparison, we include closed-source models, GPT-4o [25] and GPT-o3-mini [50], which do not meet MOF Class III. The open-source models mostly fall under Class III, including multilingual multimodal models (Llama-4 [1], Gemma-3 series [71], Qwen2.5-Omni [81]), text-only multilingual models (Llama-3.1-70B [13], Deepseek-V3 [38], Qwen2.5 [83]), and monolingual financial models (FinMA [78], XuanYuan [70], DeepSeek-R1-Distill-Japanese [26], FinMA-ES [87], Plutus [54]).
For vision-language, we assess Qwen-VL-Max [2], DeepSeek-VL [41], and LLaVA-v1.6-Vicuna-13B [39], the only model partially meeting Class II. Audio-language models include Whisper-V3 [56], Qwen2-Audio-7B, Qwen2-Audio-7B-Instruct [10], and SALMONN-7B/13B [69].

Implementation details To ensure evaluation integrity, we customize our evaluation pipeline based on the LM Evaluation Harness [15]. GPT and Together-hosted models (DeepSeek, Llama-4, Gemma-3) are queried via official APIs with temperature set to 0. All other open-source models are deployed and evaluated locally using vLLM [34] on GPUs. Including the OpenAI and Together AI API costs, the total expenditure amounts to approximately \$80,000.

# 4 Results

Table 4: Standardized performance of evaluated LLMs on the MULTIFINBEN.

Table $4^{6}$ presents model performance on the MULTIFINBEN benchmark. For fair comparison, we additionally report modality-balanced overall scores for each model. We highlight several key findings below: MULTIFINBEN presents a substantial challenge for state-of-the-art LLMs. Despite its state-of-the-art performance, GPT-4o achieves an overall score of only $50.67\%$ on the proposed MULTIFINBEN benchmark, revealing significant limitations even for leading models. The runner-up, Qwen-2.5-Omni-7B, follows with a score of $35.39\%$, highlighting the steep challenge posed by this comprehensive evaluation. Notably, both models are multimodal and multilingual, underscoring the necessity of such capabilities for competitive performance. In stark contrast, monomodal and monolingual models exhibit markedly inferior results. The best-performing text-only model, Llama-3.1-70B, achieves merely $14.07\%$. Modality-specific models such as Whisper-V3 (audio-only, $17.19\%$), Deepseek-VL-7B-Chat (vision-only, $6.37\%$), and LLaVA-v1.6-Vicuna-13B (English-only, $6.59\%$) further emphasize the substantial performance gap.
These disparities highlight the inherent limitations of models lacking robust cross-modal and cross-lingual reasoning, underscoring the critical need for integrating multilingual and multimodal insights in financial decision-making, as rigorously stress-tested by our MULTIFINBEN. Multimodal models show trade-offs across modalities. While GPT-4o and Qwen-2.5-Omni-7B lead overall on MULTIFINBEN, their rankings in text-only tasks drop to 2nd and 6th, behind the text-specialized Llama-3.1-70B. This trade-off becomes more pronounced in mid-tier models like gemma-3-4b (11th overall, 13th in text) and gemma-3-27b (7th overall, 12th in text), reflecting the difficulty of maintaining text-specific optimization when broadening to multiple modalities. In contrast, multimodal models decisively outperform unimodal baselines in vision and audio tasks, e.g., GPT-4o achieves $55.54\%$ in vision and $55.56\%$ in audio, far surpassing modality-specific counterparts. The asymmetric performance reveals that while text tasks benefit from data maturity, vision and audio gain more from multimodal integration, making unified models essential for complex real-world financial applications.

Figure 3: Radar charts comparing model performance across (a) modalities (Audio, Vision, Text), (b) languages (EN, ZH, JA, ES, EL, BI, MU), and (c) difficulty levels (Easy, Medium, Hard). The figures demonstrate diverse strengths and limitations of models in various dimensions.

Monolingual models underperform in multilingual settings. English-only models, such as finma-7b-full ($8.91\%$, 15th), consistently trail bilingual/multilingual models like Llama3.1-XuanYuan ($10.42\%$, 12th), FinMA-ES ($9.65\%$, 14th), and plutus ($11.82\%$, 10th). The performance gap between FinMA-7b-full and plutus widens further in bilingual ($69.24\%$ vs. $91.59\%$) and multilingual ($3.10\%$ vs. $7.24\%$) evaluation scenarios, reflecting the critical impact of cross-lingual capacity. Moreover, language-specific performance aligns closely with training data coverage. High-resource languages (e.g., English, Chinese, Japanese, Spanish) are dominated by generalist frontier models such as GPT-4o and Llama-4, while domain-specialized models like plutus achieve superior results in low-resource languages, as evidenced in Greek ($60.19\%$ vs. Llama-4's $48.95\%$). These findings establish that multilinguality is not merely an auxiliary feature but a foundational requirement for LLMs in finance, positioning MULTIFINBEN as the first comprehensive benchmark to diagnose and drive progress in multilingual financial language understanding. Structured difficulty-aware benchmarking facilitates dynamic evaluation. While early models such as finma-7b-full perform competitively on easy tasks ($49.48\%$), performance declines sharply on medium ($22.01\%$) and hard tasks ($9.49\%$), with newer models like GPT-4o showing improvements yet continuing to struggle. To address the lack of coverage in existing benchmarks, we introduce PolyFiQA-Easy and PolyFiQA-Expert, which are the first multilingual financial datasets, and the first financial OCR task, explicitly designed to target underrepresented linguistic and modality challenges. These datasets, initially created to fill specific gaps, emerged as some of the most difficult evaluation tasks, with Deepseek-V3 achieving only $42.58\%$ and $31.40\%$ on PolyFiQA-Easy and PolyFiQA-Expert, and overall model averages of $7.50\%$ and $5.61\%$, respectively.
This finding highlights how MULTIFINBEN’s structured design not only reveals current model limitations but also serves as a practical framework to systematically guide the development of new datasets and scenarios, ensuring benchmarks evolve in step with model capabilities and real-world financial demands.
Recent advances in large language models (LLMs) have accelerated progress in financial NLP and applications, yet existing benchmarks remain limited to monolingual and unimodal settings, often over-relying on simple tasks and failing to reflect the complexity of real-world financial communication. We introduce MultiFinBen, the first multilingual and multimodal benchmark tailored to the global financial domain, evaluating LLMs across modalities (text, vision, audio) and linguistic settings (monolingual, bilingual, multilingual) on domain-specific tasks. We introduce two novel tasks: PolyFiQA-Easy and PolyFiQA-Expert, the first multilingual financial benchmarks requiring models to perform complex reasoning over mixed-language inputs; and EnglishOCR and SpanishOCR, the first OCR-embedded financial QA tasks challenging models to extract and reason over information from visual-text financial documents. Moreover, we propose a dynamic, difficulty-aware selection mechanism and curate a compact, balanced benchmark rather than simply aggregating existing datasets. Extensive evaluation of 22 state-of-the-art models reveals that even the strongest models, despite their general multimodal and multilingual capabilities, struggle dramatically when faced with complex cross-lingual and multimodal tasks in the financial domain. MultiFinBen is publicly released to foster transparent, reproducible, and inclusive progress in financial studies and applications.
# I. INTRODUCTION

Swarm intelligence continues to inspire researchers and engineers today, offering a framework for understanding how simple, decentralized agents can collectively produce complex, emergent behaviors. Traditional swarms, as observed in nature—such as flocks of birds, schools of fish, or colonies of ants—are characterized by local interactions among agents following simple rules. These interactions give rise to global patterns and adaptive behaviors that are greater than the sum of their parts. However, the term "swarm" has recently been appropriated in novel contexts, such as OpenAI's Swarm (OAS) framework, where the dynamics and mechanisms differ significantly from their traditional counterparts [1]. This paper seeks to explore these disparities, examining how the principles of traditional swarms contrast with the modern usage of the term in artificial intelligence (AI) systems like OAS. OAS leverages large language models (LLMs), such as ChatGPT [2] or local LLMs like Meta's LLaMa [3]. The integration of LLMs into OAS introduces new considerations, particularly in terms of computational power, cost, and scalability. Although cloud-based models like ChatGPT offer extensive capabilities, they often come with higher operational costs and latency compared to local LLMs. At the heart of traditional swarms is the idea of decentralization [4], [5]: each agent operates independently, responding only to its immediate environment and neighbors, yet collectively achieving robust and scalable outcomes. In contrast, systems like OAS often involve agents that act on the entire system sequentially, passing tasks or information between one another in a more structured, interdependent manner. Our goal is not to reduce the comparison to sequential versus parallel processing, but rather to examine whether and how LLM-driven agents can embody the principles of decentralization that define traditional swarm systems.
By comparing these two paradigms, we aim to highlight the unique strengths and limitations of each approach. Specifically, we will discuss where LLMs excel—such as in handling complex reasoning and contextual awareness—and where traditional swarm intelligence continues to thrive through simplicity, robustness, and truly emergent behavior. Ultimately, this exploration will shed light on the evolving meaning of "swarm" in the context of modern AI and its implications for future research and applications. We begin by describing the foundations of swarm intelligence and the OAS framework. This is followed by a description of the system architecture and implementation of both classical and LLM-driven swarms. Cloud-based and locally hosted LLM platforms are then evaluated to identify the most effective deployment strategy for swarm simulation. The selected approach is then used in a comparative analysis against classical swarm algorithms, highlighting differences in performance, behavior, and system demands. We finally conclude with a discussion on the broader implications of integrating LLMs into swarm-based systems.

# II. BACKGROUND

Inspired by natural phenomena such as bird flocks and fish schools, swarm behavior offers a decentralized, adaptive, and scalable approach to problem solving. This section provides an overview of traditional swarm intelligence principles, including emergence, local interactions, and decentralization. It also examines the OAS framework, which streamlines multi-agent collaboration and coordination. Finally, the section explores the choice between local and cloud-based LLMs when working with the OAS framework, highlighting their respective advantages and limitations.

# A. Traditional Swarm Behavior

Swarming, in its true sense, refers to the collective behavior of multiple agents that interact based on simple local rules, leading to emergent, global patterns. These systems draw inspiration from biological collectives such as insect swarms.
The fundamental principles of swarming include [5]:

1) Reliance on Simple Rules: Swarm intelligence is driven by a small set of simple rules that collectively generate sophisticated behaviors. Examples include ant colony optimization (ACO) [6], where ants use pheromone trails to find optimal paths, and Reynolds' Boids model [7], which simulates flocking behavior with three basic rules (alignment, separation, and cohesion).
2) Local Interactions: Agents in a swarm operate based on local interactions with their immediate neighbors rather than possessing a global view of the entire system.
3) Decentralization: Swarm intelligence relies on fully decentralized coordination. There is no single leader directing the swarm's decisions; instead, behaviors emerge autonomously through distributed interactions. This decentralized structure enhances scalability, robustness, and fault tolerance, making swarm-based systems highly resilient to individual failures.
4) Emergence: Swarming is an emergent phenomenon, meaning that coordinated behaviors arise naturally from local interactions. Individual agents follow simple interaction rules, leading to collective intelligence as seen in nature, where collectives exhibit dynamic behavior without a designated leader.
5) Adaptive and Dynamic Behavior: Swarm systems are inherently adaptive, allowing them to respond dynamically to changing environments. Unlike traditional automation systems that require predefined instructions, swarms adjust their behavior in real time based on environmental stimuli. This makes them particularly useful in applications such as search and rescue, exploration, and robotic automation, where flexibility and real-time decision-making are crucial [8].

# B. OpenAI's Swarm Framework

OAS is an experimental framework designed to simplify the creation and orchestration of multi-agent systems [1]. The goal of the OAS framework is to provide easily testable agent coordination and execution.
This makes it an ideal entry point for integrating AI into swarm-based environments, enabling experimentation with AI systems. The key features of OAS are as follows.

1) Multi-Agent Coordination: Swarm allows for the development of interconnected AI agents, each capable of handling specific tasks or functions. These agents can transfer control to each other sequentially.
2) Client-Side Operation: Swarm operates entirely on the client side, leveraging OpenAI's API to interact with its language models, such as GPT-4o. Additionally, the API supports integration with local LLMs like LLaMa [3] and Qwen [9]. This design choice makes the framework stateless between calls, offering greater control.
3) Function Calling and Handoffs: The framework supports function calling and agent handoffs, allowing agents to delegate tasks and manage workflows. This feature is particularly useful for creating systems that require specialized knowledge or actions.

# C. Local LLMs and Cloud-Based GPT

Local LLMs provide a cost-free alternative to cloud-based models such as OpenAI's GPT. Although a ChatGPT subscription provides unlimited prompt queries and access to the latest models, it does not include API access, which is billed separately. In contrast, local LLMs can be deployed and used for free, offering unrestricted access and flexibility without recurring expenses.

1) Local LLMs: Numerous local LLMs are actively being developed and utilized across various applications. These models come in different sizes, indicated by the number of parameters they contain, with versions ranging from 1B (1 billion parameters) to larger models like 7B, 13B, 60B, and so on [10]. The number of parameters in a model directly influences its performance, with larger models typically offering better task accuracy and handling of complex language tasks, but at the cost of increased memory and computational requirements.
The model size depends on whether single-precision floating point (FP32) or quantization is used [11]. FP32 provides higher precision but requires more memory, and hence more powerful GPUs. In contrast, quantization reduces the model size by using fewer bits per parameter at the cost of precision. Common precision-reduction methods include FP16 (2 bytes), while quantization methods include Q8 (1 byte) and Q4 (0.5 bytes). The memory requirement $M$ (in GB) for a model with $N$ parameters is given by:
$$
M = \frac{N \times b}{8 \times 1024^3}
$$
where $b$ is the number of bits per parameter and $1024^3$ represents the number of bytes per GB. Using Equation 1, for a model with 1 billion parameters, the memory requirement without quantization (FP32) would be $\approx 3.7$ GB, while the highest quantization (Q4) would result in $\approx 0.47$ GB. This means that even when the highest quantization is applied, bigger models like LLaMa 90B would require $\sim 40$ GB of memory. As of February 2025, NVIDIA's latest consumer GPU, the GeForce RTX 5090, features 32 GB of VRAM [12], which is insufficient for handling larger models even when the highest quantization is applied. Running such models requires data center GPUs like the A100 [13] and H100 [14], which offer higher VRAM capacities and support NVLink—a technology that enables multiple GPUs to work together, a feature unavailable in consumer-grade GPUs. Some examples of local LLMs are Meta's LLaMa, Alibaba's Qwen, Google's Gemma [15], and Mistral AI's Mistral [16]. This section provides an overview of two of these models: LLaMa and Qwen.

a) Meta's LLaMa: LLaMa is a family of LLMs developed by Meta. These models are designed for a wide range of natural language processing (NLP) tasks. Since its initial release in February 2023, LLaMA has evolved through multiple iterations, improving both performance and versatility.
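Equation 1 is straightforward to evaluate programmatically. The sketch below reproduces the FP32 and Q4 figures quoted above; the function and variable names are illustrative, not part of any library:

```python
def model_memory_gb(n_params, bits_per_param):
    """Memory in GB: parameters times bits per parameter, divided by
    8 bits/byte and 1024^3 bytes/GB (Equation 1)."""
    return n_params * bits_per_param / (8 * 1024**3)

# 1B-parameter model: FP32 (32 bits) vs. Q4 quantization (4 bits)
fp32_gb = model_memory_gb(1_000_000_000, 32)  # ~3.73 GB
q4_gb = model_memory_gb(1_000_000_000, 4)     # ~0.47 GB

# LLaMa 90B at Q4: ~41.9 GB, consistent with the ~40 GB noted above,
# exceeding the 32 GB VRAM of an RTX 5090.
llama90b_q4_gb = model_memory_gb(90_000_000_000, 4)
```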
LLaMA 3.2 introduced multimodal capabilities, allowing it to process both text and images. Table I presents several variants of LLaMA 3.2 along with their corresponding sizes in FP16 precision.

TABLE I: Selected LLaMA 3.2 Model Variants and Their Approximate Sizes (FP16 precision)

b) Alibaba's Qwen: While LLaMA is primarily research-focused, Qwen is better tailored for commercial applications, particularly in Chinese-speaking markets. Qwen excels in both Chinese and English, with additional support for other languages. However, since LLaMA is primarily focused on English, its performance in English might be stronger. Since its initial release in July 2023, Qwen has also undergone multiple iterations, with Qwen 3 offering multimodal capabilities. The model is available in various sizes, ranging from 0.6B to 235B parameters [17].
2) Cloud-Based GPT: Cloud-based GPT (generative pre-trained transformer) models, such as OpenAI's GPT-4, are hosted on powerful servers and offer flexible, scalable access via APIs. Since its release in June 2018, OpenAI's GPT has undergone significant advancements. Initially introduced as GPT-1 [18], it demonstrated the potential of large-scale unsupervised pretraining. Over time, it evolved with the release of GPT-2 [19] in 2019, which impressed the community with its ability to generate coherent and contextually relevant text. GPT-3 [20], released in 2020, further elevated the model with 175 billion parameters, making it one of the most powerful language models at the time. The latest version, GPT-4, was released in 2023. It improved significantly in accuracy, reasoning, adaptability, and multimodal capabilities, enhancing performance on various tasks [21]. Cloud-based GPT-4 differs from local LLMs in several key aspects, with both advantages and disadvantages depending on the use case. The key differences are highlighted in Table II.

TABLE II: Cloud-Based vs. Local LLMs

# III.
SYSTEM DESCRIPTION
This section provides a detailed overview of the system architecture, hardware and software configurations, the LLMs employed, and the implementation details of both traditional and LLM-driven swarm algorithms. It outlines the steps taken from early mathematical evaluations to full algorithmic simulations, highlighting how LLMs were integrated into the OAS framework to emulate decentralized agent behavior.

# A. Hardware and Software
To begin experimentation, local LLMs were used, as they come with no additional costs beyond existing hardware. This allowed for familiarization with the OAS framework. The most critical hardware component was the GPU—an NVIDIA 4070 with 8 GB of VRAM. This meant that models exceeding 7 billion parameters would suffer from poor performance unless heavily quantized. While memory offloading to system RAM was an option, it significantly degraded performance in terms of computational time. LM Studio [22] was used to load the models, and OAS was modified slightly to integrate local LLMs instead of the ChatGPT API. Table III summarizes the system specifications used for the experiments.

TABLE III: System Specifications

# B. Utilized LLMs
The models tested included LLaMA 1B, 3B, 7B, and a quantized 14B, along with similarly sized Qwen models. After obtaining initial results and developing familiarity with OAS, the experimentation transitioned to ChatGPT. The ChatGPT API was utilized to access the latest models, and results were subsequently collected and analyzed.

# C. Swarm Algorithms
The Boids algorithm and the ACO algorithm were selected for the comparative analysis. These two were chosen because they clearly demonstrate how swarm systems work. Boids is a rule-based algorithm that simulates flocking behavior, while ACO is a heuristic inspired by how ants find optimal paths. Using these two, we can easily highlight the differences between traditional swarm methods and approaches based on LLMs.
a) The Boids Model: simulates emergent flocking behavior using three simple rules that each agent (or "boid") follows:
1) Separation: Each boid steers away from nearby neighbors to avoid overcrowding. This is done by checking the positions of surrounding boids within a certain radius and applying a repulsive force.
2) Cohesion: Each boid moves towards the average position of its local neighbors. This encourages group cohesion and helps maintain the structure of the flock.
3) Alignment: Each boid adjusts its velocity to match the average velocity of its nearby neighbors, resulting in coordinated and aligned movement within the group.
To implement this model, a Python-based simulation was developed mimicking the traditional swarming approach, where each boid independently applied these three rules at each time step to update its position and velocity. To explore the LLM-based alternative, the same behavioral logic was replicated using LLM prompts, where each boid is treated as an independent reasoning unit. Each boid issues three separate prompts, one for each rule, to update its state based on nearby agents. This setup aims to simulate a fully decentralized system, similar to the traditional implementation, but driven by natural language reasoning instead of hard-coded logic.
b) The ACO Model: is inspired by the emergent foraging behavior of real ants, particularly how they find the shortest paths between their nest and food sources. The model relies on agents (ants) that collectively discover optimal solutions through indirect communication using pheromones. Each ant follows three key rules:
1) Path Selection: Each ant chooses a path based on a probabilistic decision rule that considers the amount of pheromones on each possible path and the distance to the destination. Paths with higher pheromone concentration and shorter distances are more likely to be selected.
2) Pheromone Update: After completing a path, ants deposit pheromones on the edges they traveled.
This reinforces the chosen path, making it more attractive to other ants in future iterations.
3) Pheromone Evaporation: Over time, pheromone levels on all paths decrease due to evaporation. This prevents the algorithm from converging too early on suboptimal paths.
Following the same approach as in the Boids model, a baseline implementation was developed using hard-coded rules in Python for comparison. Each ant followed the above logic to simulate decentralized swarm behavior. In parallel, an LLM-based version was constructed in which each agent performed the same three operations: path selection, pheromone update, and evaporation, through separate LLM prompts.

# IV. LLM PLATFORM EVALUATION
To effectively evaluate how LLM-based swarms compare to traditional rule-based swarms, it was first necessary to identify the most suitable LLM deployment strategy. This involved comparing locally hosted models with cloud-based ChatGPT to determine which provided the best performance. A key part of this process was developing an effective prompt design strategy to ensure consistent and reliable responses across models. The platforms were then evaluated based on response latency and system resource utilization. The approach that demonstrated the best overall performance was selected for the final comparison against traditional swarm implementations.

# A. Prompt Design Strategy
Prompt engineering played a crucial role in obtaining the expected results. During implementation, it was essential that the model output follow a specific format. This ensured that each agent correctly processed the data and passed it to the next agent in the loop. Any deviation in formatting led to execution issues, necessitating precise prompt instructions. After encountering some inaccuracies, an interesting observation was made: the models could generate effective prompts for themselves when provided with a detailed explanation of the requirements.
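As a concrete reference for the rule-based baselines described in Section III-C, the following is a minimal Python sketch of the Boids velocity update and the ACO path-selection and pheromone rules. The weights and parameter values are illustrative assumptions, not the exact implementation used in the experiments:

```python
import random

def dist(p, q):
    return ((p[0] - q[0])**2 + (p[1] - q[1])**2) ** 0.5

def boids_step(boids, radius=50.0, w_sep=0.05, w_coh=0.01, w_ali=0.125):
    """One synchronous update applying separation, cohesion, and alignment.
    Each boid is a dict with 'pos' and 'vel' as (x, y) tuples."""
    updated = []
    for b in boids:
        nbrs = [o for o in boids if o is not b and dist(b['pos'], o['pos']) < radius]
        sep = coh = ali = (0.0, 0.0)
        if nbrs:
            n = len(nbrs)
            # Separation: steer away from nearby neighbors
            sep = (sum(b['pos'][0] - o['pos'][0] for o in nbrs),
                   sum(b['pos'][1] - o['pos'][1] for o in nbrs))
            # Cohesion: steer toward the neighbors' centroid
            coh = (sum(o['pos'][0] for o in nbrs) / n - b['pos'][0],
                   sum(o['pos'][1] for o in nbrs) / n - b['pos'][1])
            # Alignment: match the neighbors' average velocity
            ali = (sum(o['vel'][0] for o in nbrs) / n - b['vel'][0],
                   sum(o['vel'][1] for o in nbrs) / n - b['vel'][1])
        vx = b['vel'][0] + w_sep * sep[0] + w_coh * coh[0] + w_ali * ali[0]
        vy = b['vel'][1] + w_sep * sep[1] + w_coh * coh[1] + w_ali * ali[1]
        updated.append({'pos': (b['pos'][0] + vx, b['pos'][1] + vy), 'vel': (vx, vy)})
    return updated

def aco_choose(paths, alpha=1.0, beta=2.0):
    """Pick a path with probability proportional to tau^alpha * (1/length)^beta."""
    weights = [p['tau']**alpha * (1.0 / p['length'])**beta for p in paths]
    r = random.uniform(0, sum(weights))
    for p, w in zip(paths, weights):
        r -= w
        if r <= 0:
            return p
    return paths[-1]

def aco_update(paths, chosen, deposit=1.0, rho=0.1):
    """Deposit pheromone on the traveled path, then evaporate everywhere."""
    chosen['tau'] += deposit / chosen['length']
    for p in paths:
        p['tau'] *= (1.0 - rho)
```

In the LLM-driven variants, each of these functions is replaced by a natural-language prompt issued per agent per time step, with the numeric state serialized into the prompt and parsed back from the reply.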
For example, a math-tuned 1B LLaMA model successfully generated a suitable prompt after being instructed that the coordinate value for $x$ should be set to 0 only when its value in the given coordinates exceeded 300; otherwise, it should remain unchanged. Additionally, it was specified that the output should mirror the input format for seamless code integration. The generated prompt was as follows: "Check if $x$ in $(x, y)$ is $> 300$. If not, keep it unchanged. Otherwise, set $x$ to 0. Output $(x, y)$ without extra text. My pair is {coordinates}."

# B. Local LLMs vs ChatGPT
The smaller models with 1B parameters were initially tested for both Qwen and LLaMA. It was observed that these models performed poorly, often miscalculating increments and struggling with large numbers. For example, when asked to increment the coordinates (12454, 213332) by (20, 20), the model returned (12674, 13332). This issue was partially addressed using the math-tuned LLaMA 1B model. As expected, larger models exhibited better comprehension and accuracy, although at the cost of increased computational time. The best performance among the locally hosted models was achieved by the Qwen 2.5 Instruct model with 14 billion parameters. Although it exceeded available VRAM and required offloading to system RAM, it delivered consistent results at the cost of increased computational time. The model showed strong performance on basic mathematical operations and demonstrated better language comprehension compared to similar and smaller models. It was therefore selected as the representative local LLM for comparison with ChatGPT (GPT-4o-mini) in subsequent evaluations. To evaluate performance, the Path Selection rule from the ACO model was selected. For simplicity, only two path options were used: a short path and a long one.
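Because the coordinate rule in this example is deterministic, a model's reply can be checked against a plain-Python reference. The prompt template below comes from the text; the regex-based parser and the reference function are illustrative assumptions about how replies were validated:

```python
import re

PROMPT = ("Check if x in (x, y) is > 300. If not, keep it unchanged. "
          "Otherwise, set x to 0. Output (x, y) without extra text. "
          "My pair is {coordinates}.")

def reference_rule(x, y):
    """Ground truth for the check the model is asked to perform."""
    return (0 if x > 300 else x, y)

def parse_pair(reply):
    """Extract '(x, y)' from a model reply; None if the format drifts."""
    m = re.search(r"\(\s*(-?\d+)\s*,\s*(-?\d+)\s*\)", reply)
    return (int(m.group(1)), int(m.group(2))) if m else None

prompt = PROMPT.format(coordinates="(350, 120)")
# A well-behaved model should answer "(0, 120)" for this prompt.
assert parse_pair("(0, 120)") == reference_rule(350, 120)
```

Returning `None` on format drift is what made strict output instructions necessary: any extra text around the pair would otherwise break the agent loop described above.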
In language models, token length refers to the number of text units—such as words or word fragments—that the model processes, with longer prompts typically leading to higher latency. Using Qwen's tokenization scheme, three prompts of varying lengths (approximately 75, 330, and 582 tokens) were crafted using the prompt design strategy described earlier. While differing in verbosity and structure, each prompt produced the same expected output (short or long), enabling a controlled analysis of how prompt length impacts inference time. During testing, CPU, RAM, GPU utilization, and latency were measured for both the local LLM (Qwen 2.5 Instruct, 14B) and ChatGPT. The results are shown in Table IV, while the latency comparison is illustrated in Fig. 1.

Fig. 1: Token Length vs. Response Time Latency: Qwen 2.5 vs. ChatGPT

TABLE IV: System Resource Usage Comparison: Qwen 2.5 vs. ChatGPT

These results show that using ChatGPT significantly reduces local resource consumption. CPU usage was approximately $59.5\%$ lower, and RAM usage dropped by $38.2\%$ compared to the local Qwen model. Since all computations were handled by OpenAI's servers, there was no GPU or GPU memory usage on the client side. ChatGPT also achieved lower average latency across all three prompt variations, making it both more resource-efficient and faster for our evaluation against traditional rule-based swarms.

# V. COMPARATIVE ANALYSIS
To facilitate a comparative analysis between traditional rule-based and LLM-driven swarms, we implemented both the Boids and ACO algorithms. The source code is publicly available1. Each implementation was evaluated under controlled conditions to examine trade-offs in performance, resource consumption, and behavioral results.

# A. The Boids Model
As discussed in Section III-C, the Boids model was implemented using both traditional rule-based methods and an LLM-driven approach to mimic decentralized swarm behavior.
In this system, each boid was responsible for executing three core behaviors: separation, cohesion, and alignment. For the LLM-based implementation, this translated into issuing three separate prompts per boid per time step (see Appendix A). The output vectors from each prompt were parsed and combined to update each boid's velocity. To ensure a fair comparison, both the classical and LLM-based Boids systems were executed over 10 time steps, using a fixed population of 4 boids. The performance metrics—execution time, CPU usage, RAM usage, and GPU activity—were recorded and are summarized in Table V. The classical Boids model completed all 10 steps in just 0.0019 seconds with minimal resource consumption. In contrast, the LLM-based system, powered by ChatGPT, required 68.61 seconds to complete the same simulation due to the overhead introduced by repeated prompt processing. Since each of the four boids processed three prompts per time step (i.e., 120 prompts in total), this translates to approximately 1.7 seconds per boid to process all three prompts in a single iteration. Since each boid processes prompts independently, total runtime scales roughly linearly with the number of boids.

TABLE V: Performance Comparison of Classical vs. LLM-Based Boids (10 Time Steps)

Figure 2 illustrates the temporal evolution of CPU and GPU usage during the execution of both implementations. These visualizations reinforce the numerical findings, showing that while the classical model incurs a small, sharp spike in CPU usage, the LLM-based version maintains prolonged CPU and memory engagement due to the repeated inference tasks.

Fig. 2: CPU and GPU Usage Over Time for Boids Implementations

# B. ACO Model
Following the Boids implementation, the ACO model was developed using the same methodology outlined in Section III-C. The individual prompts for the LLM-driven ACO model are specified in Appendix B.
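The roughly linear scaling noted for the LLM-based Boids runtime can be made explicit with a simple cost model, sketched here from the Table V measurements (the per-prompt figure is derived from 68.61 s over 120 prompts; the function name is illustrative):

```python
SEC_PER_PROMPT = 68.61 / 120  # measured: 120 sequential prompts in 68.61 s

def llm_boids_runtime(n_boids, n_steps, prompts_per_boid=3,
                      sec_per_prompt=SEC_PER_PROMPT):
    """Estimated wall-clock time when every rule is a separate, sequential prompt."""
    return n_boids * n_steps * prompts_per_boid * sec_per_prompt

# Reproduces the measured configuration: 4 boids over 10 steps -> 68.61 s.
baseline = llm_boids_runtime(4, 10)
# Doubling the swarm doubles the estimate: 8 boids -> ~137 s.
doubled = llm_boids_runtime(8, 10)
```

This model ignores any batching or parallel inference; with concurrent API calls the scaling would be less severe, at the cost of the strictly sequential agent loop used here.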
This model was run for 50 time steps to observe convergence behavior, which was not feasible in the Boids model due to the excessive runtime of the LLM-based approach. Unlike the Boids simulation, ACO provided a clear convergence point, identifiable by the pheromone concentration on the chosen paths. This allowed evaluation of optimization performance between the classical and LLM-based swarm systems. Table VI summarizes the execution time and pheromone updates. The LLM-based version took significantly longer to complete (135.76 seconds) compared to the classical rule-based approach (14.03 seconds). However, by the end of 50 steps, the LLM-based swarm had reinforced the optimal (short) path with a stronger pheromone level than the classical counterpart. Notably, to achieve a comparable short-to-long pheromone ratio, the classical system required 179 iterations—over three times as many. However, despite the increased iteration count, the total execution time remained lower at 49.74 seconds.

TABLE VI: Performance Comparison: Classical vs. LLM-Based ACO Over 50 Steps

Figure 3 illustrates the CPU and GPU usage during the 50 time steps for both implementations. As before, the classical model exhibited brief, sharp spikes in CPU usage, while the LLM-based version maintained a smoother and more consistent CPU and memory usage over a longer duration.

Fig. 3: CPU and GPU Usage Over Time for ACO Implementations

# VI. DISCUSSION
This section discusses the broader implications of using LLMs in swarm systems, drawing on the results from both Boids and ACO examples. These representative algorithms are used to analyze the strengths, limitations, and practical implications of replacing traditional rule-based approaches with LLM-driven behavior.

# A. Fundamental Differences Between Traditional and LLM-Based Swarms
Traditional swarm algorithms are rule-based and typically rely on clearly defined mathematical models to drive agent behavior.
In contrast, LLM-based swarms use natural language prompts to guide decisions, introducing a layer of abstraction that replaces precise coding with human-readable instructions. While this abstraction provides flexibility and simplicity, it comes at a computational cost.

# B. Case Study: The Boids Model
In both classical and LLM implementations of the Boids model, agents followed three core rules—separation, cohesion, and alignment. However, in the LLM-based version, each rule was executed using a prompt, requiring serialization of inputs, inference processing, and response parsing for each boid. This increased latency significantly. For example, if each of the three rules was executed as a separate prompt for every boid, the overhead grew quickly with swarm size. As a result, scalability—a fundamental characteristic of swarm systems—was not feasible. Despite this, the LLM approach allowed for high-level, abstract instructions like "adjust the position to avoid crowding," eliminating the need for explicit mathematical formulations. While this abstraction occasionally led to inaccuracies—such as one of the three rules being misapplied—it also demonstrated the potential for more intuitive programming models.

# C. Case Study: The ACO Model
Similarly, the ACO implementation using LLMs followed the standard steps: path selection, pheromone update, and evaporation. Here too, the LLM-based method lagged in execution speed due to prompt processing. However, after 50 iterations, the LLM-driven system produced a more optimized path distribution: by the 50th iteration, the short path had accumulated a higher probability of being chosen in future iterations as compared to the classic ACO implementation. This suggests that the reasoning capabilities of LLMs can, in some cases, outperform strict algorithmic adherence. Unlike the logic-driven framework of traditional ACO, LLM-based ACO introduces heuristics grounded in reasoning to approximate solutions.
While this can be advantageous, it also increases the risk of misinterpretation if the prompt is ambiguous or overly abstract.

# D. Limitations and Trade-offs
While improved prompt design strategies may yield more efficient interactions, the primary limitation of LLM-based swarm systems lies in latency and resource demand. Prompt processing involves input/output serialization, context encoding, and model inference—all contributing to computational overhead. These delays render real-time systems or high-frequency decision loops impractical. However, the primary advantage of LLM-based swarms lies in their flexibility and intuitive abstraction. Instead of specifying how to calculate position updates or pheromone decay, the behavior can be described in natural language. This lowers the barrier for implementing complex behaviors, especially in rapid prototyping and human-in-the-loop systems.

# E. Implications and Future Directions
The experiments with Boids and ACO highlight a central theme: LLM-based swarms trade execution speed for flexibility and higher-level reasoning. While they are not yet suitable for real-time autonomous systems at scale, they show promise for hybrid systems, where LLMs handle strategic-level reasoning and classical algorithms manage low-level control. Future research should explore techniques for compressing prompt logic, reducing inference costs, and using smaller distilled models fine-tuned for swarm behavior. Combining both paradigms—rule-based precision and language-driven flexibility—may lead to a new generation of adaptive, explainable swarm systems.
Swarm intelligence traditionally refers to systems of simple, decentralized agents whose local interactions lead to emergent, collective behavior. Recently, the term 'swarm' has been extended to describe AI systems like OpenAI's Swarm, where large language models (LLMs) act as collaborative agents. This paper contrasts traditional swarm algorithms with LLM-driven swarms, exploring how decentralization, scalability, and emergence are redefined in modern artificial intelligence (AI). We implement and compare both paradigms using Boids and Ant Colony Optimization (ACO), evaluating latency, resource usage, and behavioral accuracy. The suitability of both cloud-based and local LLMs is assessed for agent-based use in swarms. Although LLMs offer powerful reasoning and abstraction capabilities, they introduce new constraints in computation and coordination that challenge traditional notions of swarm design. This study highlights the opportunities and limitations of integrating LLMs into swarm systems and discusses the evolving definition of 'swarm' in modern AI research.
[ "cs.AI" ]
# I. INTRODUCTION Smartphones and scanners are used to capture images of documents containing tables with vital information. Manual extraction is time-consuming. However, a shortage of annotated datasets has made developing effective deep learning models for table detection challenging. Data augmentation has been recognized as a critical technique in deep learning research, with surveys highlighting its effectiveness in addressing training data limitations and enhancing model generalization capabilities [1], [2]. While these studies explore numerous augmentation strategies across various domains, tabular text data in document images presents unique challenges not fully addressed in existing literature. Our synthetic data generation approach draws inspiration from generative modeling principles, similar to those in our work on process mining [3], but is tailored for the structural and visual complexities of tabular document images. To address this challenge, we propose creating a specialized dataset for table detection. Our method automates the annotation process, reducing manual effort and ensuring a diverse dataset for robust model training. By employing deep learning, we aim to develop a system capable of autonomously identifying and extracting text from tabular regions in document images. We explore an end-to-end deep learning model that integrates table detection and structure identification, streamlining the process and improving efficiency. This model treats table detection and structure identification as interconnected tasks, enhancing accuracy while reducing computational demands, making it suitable for real-world applications with resource constraints. Our approach improves table detection by combining a newly generated dataset with a resolution-aware TableNet baseline. The resulting models facilitate downstream tasks such as large-scale document processing and information retrieval. 
Contributions: The main contributions of this paper are:
• an automated LaTeX pipeline that renders page-level synthetic documents with diverse table styles and aligned ground-truth masks;
• a public corpus and code base that enable reproducible training and evaluation;
• an extensive comparison of TableNet at two input resolutions ($256 \times 256$ vs. $1024 \times 1024$) to analyze performance trade-offs.
As for the structure of this paper, Section II introduces the technologies used, Section III surveys prior work, Section IV details the dataset and evaluation protocol, Section V describes the implementation, Section VI presents the results and their discussion, and Section VII concludes the paper and outlines future directions.

# II. BACKGROUND
This section presents the background of the used networks, VGG-19 and TableNet.

# A. VGG-19
VGG-19 [4] is a 19-layer network that stacks $3 \times 3$ convolutions and $2 \times 2$ max-pooling operations. Despite its age, its simple design and publicly available ImageNet weights make VGG-19 a convenient backbone for feature extraction; we therefore adopt it as the shared encoder in our TableNet baselines.

# B. Image Segmentation
Semantic segmentation [5] assigns a semantic label to every pixel instead of producing coarse bounding boxes. State-of-the-art methods are fully convolutional and benefit from encoder weights pre-trained on large classification corpora, enabling accurate dense predictions even when task-specific training data are limited.

# C. TableNet
Earlier methods in deep learning treated table and column detection as separate issues. However, searching for columns independently often led to false positives due to their vertical alignment, complicating accurate detection. Leveraging knowledge of the tabular region improved column detection by using convolution filters to detect both tables and columns.
TableNet [6], utilizing an encoder-decoder model for semantic segmentation, capitalized on this by employing the same encoder for both tasks and separate decoders for each. The architecture incorporates pre-trained VGG-19 features, shared encoder layers, and specific decoder branches for table and column segmentation. Each branch uses convolutional layers to reduce feature-map depth and transposed convolution layers to upsample the maps back to pixel-level class predictions.

# III. RELATED WORK
Various studies and surveys in table comprehension [7]–[10] have explored table identification and data extraction, often reporting results separately [11]. Prior to deep learning, table identification relied on heuristics or metadata. Early efforts, such as TINTIN [12] and the work by Cesarini et al. [13], involved structural information and machine learning techniques for table identification, using MXY trees and hierarchical representations with Tabfinder. Other approaches, like that of T. Kasar et al. [14], focused on detecting intersecting lines and employing SVM classifiers, but were limited by visible guidelines. Probabilistic models, such as those by Silva et al. [15], emphasized combining multiple approaches using joint probability distribution and hidden row states. While table detection received significant attention, table structure identification garnered less focus. Notable works include the T-RECS method by Kieninger and Dengel [16], Wang et al.'s seven-step process for table structure understanding [17], and Shigarov et al.'s system that incorporated extensive configuration options and PDF metadata [18]. Recent advancements have integrated deep learning into table detection and structure recognition, as seen in methods by Hao et al. [19] and Tran et al. [20], both achieving competitive performance. DeepDeSRT [21], [22] and works by Aswin et al. [23] and Singh et al.
[24] have leveraged deep learning and object detection techniques to improve table detection and structure recognition, showing advancements in performance and data extraction from different document formats. Recent large-scale synthetic corpora such as SynthTabNet [25], PubTables-1M [26], and DocBank [27] have accelerated research on table understanding. A common trait of these resources is that released images typically contain an isolated table crop rather than the entire document page. While this facilitates fast training for cell-structure recognition, it provides limited context for joint layout analysis. Our automatically rendered dataset, in contrast, embeds tables within two-column scientific pages so models must reason about neighbouring text, captions, and figure regions in addition to the table area itself. Alongside new datasets, recent architectures have improved performance. Transformer-based detectors such as TableTransformer [28] and encoder–decoder approaches like TableFormer [29] achieve state-of-the-art results on PubTabNet, whereas two-stage detectors such as Cascade Mask RCNN [30] remain strong contenders in dense-layout benchmarks. Rather than benchmarking every available method, we deliberately focus on a representative encoder–decoder baseline (TableNet) to isolate the effect of data augmentation and input resolution; integrating complementary detectors is left for subsequent studies. # IV. METHODOLOGY This section presents the dataset, our generation approach, and the evaluation methods. # A. Used Datasets: Marmot The Marmot dataset [31] is a large-scale collection of document images commonly used for table detection. It serves as a tool for assessing table detection algorithms in realworld scenarios. The dataset maintains a near 1:1 positiveto-negative ratio, covering various document types such as research papers and forms. Its authenticity underscores its relevance for training models on real-world challenges. 
The Marmot dataset suffers from labour-intensive manual annotations that are time-consuming and costly.

# B. The Proposed Generation Approach: Novel Training Data
We propose an approach to automatically generate a wide range of training data. Our evaluation comprises both synthetic-to-synthetic and synthetic-to-real testing scenarios: models are validated on the held-out portion of our generated corpus and, independently, on the external Marmot benchmark. This dual protocol enables us to disentangle domain-specific effects from purely architectural factors. The proposed approach takes as input 4 parameters: number of rows, number of columns, datatypes, and table style. These are saved as ground-truth for the evaluation. Employing LaTeX and Lorem Ipsum in a Python script, the output is an image used in training table detection models. The generated tables fall into five different styles (Fig. 1). Each auto-generated table is populated with statistically valid dummy data matching column datatypes: numerical columns contain Gaussian-distributed values ($\mu = 0$, $\sigma = 1$), text columns use Markov chain-generated Lorem Ipsum, and date fields follow a uniform distribution across 2000–2023. The generated images are of two types: a table surrounded by text, or only the table by itself on the page (Fig. 2 (a) and (b)).

# C. Bias Mitigation
To address synthetic data limitations, we implement:
• Style randomization: $15\%$ probability of applying non-Western table features (right-to-left text, vertical headers)

Figure 1. Styles of Generated Tables: dashed lines, without vertical borders, colored headers, devoid of top/mid/bottom rules, and booktabs.

Figure 2.
Types of Generated Image: (a) with text; (b) without text.

Figure 3. Example Result after Performing Bitwise XOR.

• Noise injection: Scanning artifacts simulated through Poisson noise ($\lambda = 0.1$) and random perspective transforms
• Color variation: $10\%$ of tables use light blue/red backgrounds instead of standard white

# D. Evaluation Methods
1) Metric: Bitwise XOR: The primary evaluation metric is the pixel-wise XOR error rate. Alternative region metrics such as Intersection-over-Union (IoU), F1 score, or mean Average Precision (mAP) are widespread in object detection; however, they summarize performance at the bounding-box level and are less sensitive to the thin borders that delineate table cells. Preliminary experiments showed that small border shifts can leave IoU nearly unchanged while noticeably increasing visual error. We therefore focus on XOR, which counts every misclassified pixel and is thus more discriminative for table boundaries. Let $P$ be the binary prediction mask produced by the network and $G$ the corresponding ground-truth mask; both have the same spatial resolution $H \times W$.
$$
\mathrm{XOR\ Error\ Rate} = \frac{|P \oplus G|}{H \times W}
$$

# V. IMPLEMENTATION

# A. TableNet Module
The implementation of TableNet [6] is used in this work. TableNet is a TensorFlow-based model [32] for table and column segmentation in grayscale images, trained on the Marmot dataset. To ensure a uniform input size of $256 \times 256$ pixels for images and $1024 \times 1024$ pixels for masks, the images are preprocessed through resizing and normalization. TableNet's architecture utilizes a pre-trained VGG-19 base for feature extraction from RGB images of resolutions $1024 \times 1024$ or $256 \times 256$.
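The pixel-wise XOR error rate from Section IV-D reduces to a few lines of NumPy. This is a sketch assuming binary masks of equal shape, not the repository's exact evaluation code:

```python
import numpy as np

def xor_error_rate(pred, gt):
    """Fraction of pixels where the binary prediction and ground-truth masks disagree."""
    if pred.shape != gt.shape:
        raise ValueError("masks must share the same spatial resolution")
    return np.count_nonzero(pred != gt) / pred.size

# Toy 2x2 example: the masks disagree on 2 of 4 pixels.
pred = np.array([[1, 0], [1, 1]])
gt = np.array([[1, 1], [1, 0]])
rate = xor_error_rate(pred, gt)  # 0.5
```

Because every misclassified pixel counts equally, a one-pixel border shift along a long table edge moves this metric noticeably, which is exactly the sensitivity argued for above.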
For each corpus—both the synthetic images introduced in this work and the original Marmot pages—we randomly split the data, using $90\%$ of the images for training and reserving the remaining $10\%$ for testing. Two decoder paths, for table and column segmentation, are incorporated with convolutional and upsampling layers, along with concatenation of feature maps from VGG-19. Training minimizes segmentation error using Sparse Categorical Crossentropy with L2 regularization ($\lambda = 0.001$) to prevent overfitting. We employed gradient clipping (threshold $= 1.0$) to stabilize training. The Adam optimizer with a learning rate of 0.0001 and an epsilon value of $10^{-8}$ is used for weight adjustment. Training progress is monitored with DisplayCallback for visualization, ModelCheckpoint, and EarlyStopping for model saving and early stopping based on validation loss. In summary, this module implements TableNet with TensorFlow, utilizing TensorFlow functions for preprocessing, a VGG-19 base for feature extraction, and the Adam optimizer for training. Callbacks enable monitoring of training progress for optimal segmentation results.

# B. Code and Data Availability

Our complete implementation and synthetic dataset are publicly available at our code repository [33]. The repository contains all data generation scripts, preprocessing utilities, trained models, and evaluation code used in this work. The dataset includes samples of all table styles discussed in this paper.

# C. Predict Module

This module implements a TensorFlow prediction pipeline for table and column mask prediction on PNG images. PNG format is chosen over JPEG and BMP due to its lossless compression, preserving image quality without introducing artifacts. The pipeline utilizes a custom-trained TableNet model to segment tables and columns.
By providing a sample folder path and image dimensions as inputs, users can utilize the pipeline for predicting masks and visualizing results. This end-to-end solution enables segmentation of tables and columns in image data, facilitating tasks like document processing and data extraction.

Figure 4. Sample Image from Marmot Dataset and Predicted Mask with $256 \times 256$ Model

Figure 5. Sample Image from Marmot Dataset and Predicted Mask with $1024 \times 1024$ Model

# VI. RESULTS AND DISCUSSION

The epoch ranges were determined through progressive validation: although the $256 \times 256$ models could be trained with a larger batch size (256 instead of 64 for $1024 \times 1024$) thanks to their lower memory footprint, they still required more training epochs to converge. We attribute this to the reduced spatial detail available at lower resolutions, which slows down learning. Conversely, the higher-resolution models benefited from early stopping to prevent overfitting. This section presents an analysis of the predicted images generated by the TableNet model at two resolutions: $256 \times 256$ and $1024 \times 1024$. The goal is to evaluate the accuracy and effectiveness of these two models in table detection. Both models were trained for several sets of epochs to avoid overfitting. The performance of the $256 \times 256$ model was evaluated at several epoch intervals. The lowest XOR error rate on the Marmot dataset was $9.18\%$ at 500 epochs. On our synthetic dataset, the error rate continued to decrease with additional training, reaching $4.04\%$ at 1540 epochs, suggesting the model benefits from extended training on synthetic data (Table I). A sample result is shown in Fig. 4.

Figure 6. Sample Image from Novel Dataset and Predicted Mask with $1024 \times 1024$ Model

Increasing the input resolution to $1024 \times 1024$ impacted performance.
The model achieved its best result on the Marmot dataset at 150 epochs with a $13.83\%$ error rate, while performance degraded at higher epochs. On our synthetic dataset, the $1024 \times 1024$ model achieved its lowest error rate of $4.33\%$ at 760 epochs (Table I). Sample predictions are shown in Fig. 5 and Fig. 6.

Table I. COMPARATIVE ANALYSIS OF XOR ERROR RATES (%) ACROSS MODEL CONFIGURATIONS AND DATASETS. THE LOWEST ERROR FOR EACH MODEL RESOLUTION AND DATASET COMBINATION IS SHOWN IN BOLD.

The results reveal a mixed influence of input resolution. On the synthetic corpus, the lower $256 \times 256$ resolution yields a slightly better result ($4.04\%$) than the $1024 \times 1024$ model ($4.33\%$). Likewise, the Marmot benchmark performs better at the lower resolution, with the best $256 \times 256$ model reaching $9.18\%$ versus $13.83\%$ for its $1024 \times 1024$ counterpart. Hence, higher resolution does not universally improve performance on either synthetic or real-world data in our experiments. The findings also indicate that optimal training epochs vary between datasets and resolutions. While the synthetic dataset benefits from extended training (up to 1540 epochs for $256 \times 256$), the Marmot dataset shows signs of overfitting beyond 500 epochs for lower-resolution models. The TableNet model, trained with a $256 \times 256$ input size, exhibits limited accuracy in predicting table regions. In some instances, it fails to detect table regions entirely. Reducing the training epochs improves detection, albeit with noisy borders and white space in the predicted masks. Conversely, the $1024 \times 1024$ model delivers crisper predictions on the synthetic corpus—reflected in its lowest $4.33\%$ XOR error—yet it underperforms on the Marmot benchmark when compared with the $256 \times 256$ counterpart.
Post-processing (noise removal and small-object filtering) further refines the high-resolution masks but does not overturn this overall trend. These mixed results highlight that higher input resolution alone is not a universal solution and that domain differences between synthetic and real data play a decisive role. Bitwise XOR error rate emerges as a crucial metric, quantifying pixel-wise disparities between ground truth and predicted masks, thus assessing misclassified pixels.
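The XOR error rate just discussed is straightforward to compute. Below is a minimal pure-Python sketch; the function name and the list-of-lists mask representation are ours, not the paper's implementation, which operates on model output tensors.

```python
def xor_error_rate(pred, gt):
    # Pixel-wise XOR error rate between two binary masks of identical
    # spatial resolution: the fraction of pixels where prediction and
    # ground truth disagree, i.e. |P XOR G| / (H * W).
    if len(pred) != len(gt) or len(pred[0]) != len(gt[0]):
        raise ValueError("masks must have the same spatial resolution")
    h, w = len(pred), len(pred[0])
    disagreements = sum(p != g
                        for prow, grow in zip(pred, gt)
                        for p, g in zip(prow, grow))
    return disagreements / (h * w)
```

For example, two 2x2 masks differing in exactly one pixel yield an error rate of 0.25; unlike IoU, every misplaced border pixel contributes directly to the score.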
Document pages captured by smartphones or scanners often contain tables, yet manual extraction is slow and error-prone. We introduce an automated LaTeX-based pipeline that synthesizes realistic two-column pages with visually diverse table layouts and aligned ground-truth masks. The generated corpus augments the real-world Marmot benchmark and enables a systematic resolution study of TableNet. Training TableNet on our synthetic data achieves a pixel-wise XOR error of 4.04% on our synthetic test set with a 256x256 input resolution, and 4.33% with 1024x1024. The best performance on the Marmot benchmark is 9.18% (at 256x256), while cutting manual annotation effort through automation.
# 1 Introduction Concurrent accesses to databases are typically grouped in transactions which define units of work that should be isolated from other concurrent computations and resilient to failures. Modern databases provide different levels of isolation for transactions with different trade-offs between consistency and throughput. The strongest isolation level, Serializability [21], provides the illusion that transactions are executed atomically one after another in a serial order. Serializability incurs a high cost in throughput. For performance, databases provide weaker isolation levels, e.g., Snapshot Isolation [6] or Read Committed [6]. The concurrency control protocols used in large-scale databases to implement isolation levels are difficult to build and test. For instance, the black-box testing framework Jepsen [19] found a remarkably large number of subtle problems in many production databases. In this work, we focus on testing the isolation level implementations in databases, and more precisely, on the problem of checking whether a given execution adheres to the prescribed isolation level semantics. Inspired by scenarios that arise in commercial software [22], we consider a quite generic version of the problem where transactions are formed of SQL queries and multiple isolation levels are used at the same time, i.e., each transaction is assigned a possibly different isolation level (the survey in [22] found that $3 2 \%$ of the respondents use such “heterogeneous” configurations). Previous work [21,7] studied the complexity of the problem when transactions are formed of reads and writes on a static set of keys (variables), and all transactions have the same isolation level. As a first contribution, we introduce a formal semantics for executions with SQL transactions and a range of isolation levels, including serializability, snapshot isolation, prefix consistency, and read committed. 
Dealing with SQL queries is more challenging than classic reads and writes of a static set of keys (as assumed in previous formalizations [11,7]). SQL insert and delete queries change the set of locations at runtime, and the set of locations returned by an SQL query depends on the values stored at those locations (the values are restricted to satisfy WHERE clauses). We define an abstract model for executions, called history, where every SQL query that inspects the database (has a WHERE clause) is associated with a set of SQL queries that wrote the inspected values. This relation is called a write-read relation (also known as read-from). This is similar to associating reads to writes in defining memory models. We consider two classes of histories depending on the “completeness” of the write-read relation. To define a formal semantics of isolation levels, we need a complete write-read relation in the sense that for instance, an SQL select is associated with a write for every possible row (identified by its primary key) in the database, even if that row is not returned by the select because it does not satisfy the WHERE clause. Not returning a row is an observable effect that needs to be justified by the semantics. Such full histories cannot be constructed by interacting with the database in a black-box manner (a desirable condition in testing) when only the outputs returned by queries can be observed. Therefore, we introduce the class of client histories where the write-read concerns only rows that are returned by a query. The consistency of a client history is defined as the existence of an extension of the write-read to a full history which satisfies the semantics. The semantics on full histories combines axioms from previous work [7] in a way that is directed by SQL queries that inspect the database and the isolation level of the transaction they belong to.
This axiomatic semantics is validated by showing that it is satisfied by a standard operational semantics inspired by real implementations. We study the complexity of checking if a full or client history is consistent, i.e., whether it satisfies the prescribed isolation levels. This problem is more complex for client histories, which record fewer dependencies and need to be extended to full ones. For full histories, we show that the complexity of consistency checking matches previous results in the reads and writes model when all transactions have the same isolation level [7]: polynomial time for the so-called saturable isolation levels, and NP-complete for stronger levels like Snapshot Isolation or Serializability. The former is a new result that generalizes the work of [7] and exposes the key ideas for achieving polynomial-time complexity, while the latter is a consequence of the previous results. We show that consistency checking becomes NP-complete for client histories even for saturable isolation levels. It remains NP-complete regardless of the expressiveness of WHERE clauses (for this stronger result we define another class of histories called partial-observation). The problem is NP-complete even if we bound the number of sessions. In general, transactions are organized in sessions [23], an abstraction of the sequence of transactions performed during the execution of an application (the counterpart of threads in shared memory). This case is interesting because it is polynomial-time in the read/write model [7]. As a counterpart to these negative results, we introduce an algorithm for checking consistency of client histories which is exponential-time in the worst case, but polynomial time in relevant cases. Given a client history as input, this algorithm combines an enumeration of extensions towards a full history with a search for a total commit order that satisfies the required axioms.
The commit order represents the order in which transactions are committed in the database and it is an essential artifact for defining isolation levels. For efficiency, the algorithm uses a non-trivial enumeration of extensions that are not necessarily full but contain enough information to validate consistency. The search for a commit order is a non-trivial generalization of an algorithm by Biswas et al. [7] which concerned only serializability. This generalization applies to all practical isolation levels and combinations thereof. We evaluate an implementation of this algorithm on histories generated by PostgreSQL with a number of applications from BenchBase [12], e.g., the TPC-C model of a store and a model of Twitter. This evaluation shows that the algorithm is quite efficient in practice and scales well to typical workloads used in testing databases. To summarize, we provide the first results concerning the complexity of checking the correctness of mixed isolation level implementations for SQL transactions. We introduce a formal specification for such implementations, and a first tool that can be used in testing their correctness.

# 2 Histories

# 2.1 Transactions

We model the database as a set of rows from an unbounded domain Rows. Each row is associated to a unique (primary) key from a domain Keys, given by the function $\mathsf{key} : \mathsf{Rows} \to \mathsf{Keys}$.
We consider client programs accessing the database from a number of parallel sessions, each session being a sequence of transactions defined by the following grammar:

$$
\iota \in \mathsf{Iso} \quad a \in \mathsf{LVars} \quad \mathsf{R} \in 2^{\mathsf{Rows}} \quad \mathsf{p} \in \mathsf{Rows} \to \{0, 1\} \quad \mathsf{U} \in \mathsf{Keys} \to \mathsf{Rows}
$$

Transaction ::= begin($\iota$); Body; commit
Body ::= Instr | Instr; Body
Instr ::= InstrDB | a := LExpr | if(LCond){Instr}
InstrDB ::= a := SELECT(p) | INSERT(R) | DELETE(p) | UPDATE(p, U) | abort

Each transaction is delimited by begin and commit instructions. The begin instruction defines an isolation level $\iota$ for the current transaction. The set of isolation levels Iso we consider in this work will be defined later. The body contains standard SQL-like statements for accessing the database and standard assignments and conditionals for local computation. Local computation uses (transaction-)local variables from a set LVars. We use $a$, $b$, ... to denote local variables. Expressions and Boolean conditions over local variables are denoted with LExpr and LCond, respectively. Concerning database accesses (sometimes called queries), we consider a simplified but representative subset of SQL: SELECT(p) returns the set of rows satisfying the predicate $\mathsf{p}$ and the result is stored in a local variable $a$. INSERT(R) inserts the set of rows R or updates them in case they already exist (this corresponds to INSERT ON CONFLICT DO UPDATE in PostgreSQL), and DELETE(p) deletes all the rows that satisfy p. Then, UPDATE(p, U) updates the rows satisfying p with values given by the map U, i.e., every row r in the database that satisfies $\mathsf{p}$ is replaced with $\mathsf{U}(\mathsf{key}(\mathsf{r}))$, and abort aborts the current transaction.
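As an informal illustration of these four database accesses, one can model the database as an in-memory map from primary keys to rows; the following sketch is ours (names, representation, and example values are illustrative, not part of the paper's formalism), and it ignores transactions and isolation entirely.

```python
# Minimal in-memory model of the SQL subset above: the database is a map
# from primary keys to rows. Here a "row" is just an integer value.
db = {}

def select(p):
    # SELECT(p): return the rows satisfying predicate p.
    return {k: r for k, r in db.items() if p(r)}

def insert(rows):
    # INSERT(R): insert rows, or update them if the key already exists
    # (INSERT ... ON CONFLICT DO UPDATE in PostgreSQL).
    db.update(rows)

def delete(p):
    # DELETE(p): remove every row satisfying p.
    for k in [k for k, r in db.items() if p(r)]:
        del db[k]

def update(p, u):
    # UPDATE(p, U): replace every row r satisfying p with U(key(r)).
    for k, r in list(db.items()):
        if p(r):
            db[k] = u(k)
```

Predicates such as `lambda r: r <= 0` play the role of WHERE clauses: the set of rows a query touches depends on the values currently stored, which is exactly what makes these queries harder to reason about than reads and writes of a static key set.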
The predicate $\mathsf{p}$ corresponds to a WHERE clause in standard SQL.

# 2.2 Histories

We define a model of the interaction between a program and a database called history which abstracts away the local computation in the program and the internal behavior of the database. A history is a set of events representing the database accesses in the execution grouped by transaction, along with some relations between these events which explain the output of SELECT instructions. An event is a tuple $\langle e, type \rangle$ where $e$ is an identifier and type is one of begin, commit, abort, SELECT, INSERT, DELETE and UPDATE. $\mathcal{E}$ denotes the set of events. For an event $e$ of type SELECT, DELETE, or UPDATE, we use $\mathtt{WHERE}(e)$ to denote the predicate $\mathsf{p}$, and for an UPDATE event $e$, we use $\mathtt{SET}(e)$ to denote the map U. We call read events the SELECT events that read the database to return a set of rows, and the DELETE and UPDATE events that read the database checking satisfaction of some predicate p. Similarly, we call write events the INSERT, DELETE and UPDATE events that modify the database. We also say that an event is of type end if it is either a commit or an abort event. A transaction log $(t, \iota_t, E, \mathsf{po}_t)$ is an identifier $t$, an isolation level identifier $\iota_t$, and a finite set of events $E$ along with a strict total order $\mathsf{po}_t$ on $E$, called program order (representing the order between instructions in the body of a transaction). The set $E$ of events in a transaction log $t$ is denoted by events$(t)$. For simplicity, we may use the term transaction instead of transaction log. Isolation levels differ in the values returned by read events which are not preceded by a write on the same variable in the same transaction. We denote by reads$(t)$ the set of read events contained in $t$.
Also, if $t$ does not contain an abort event, the set of write events in $t$ is denoted by writes$(t)$. If $t$ contains an abort event, then we define writes$(t)$ to be empty. This is because the effect of aborted transactions (its set of writes) should not be visible to other transactions. The extension to sets of transaction logs is defined as usual. To simplify the exposition we assume that for any given key $x \in \mathsf{Keys}$, a transaction does not modify (insert/delete/update) a row with key $x$ more than once. Otherwise, under all isolation levels, only the last among multiple updates is observable in other transactions. As expected, we assume that the minimal element of $\mathsf{po}_t$ is a begin event, if a commit or an abort event occurs, then it is maximal in $\mathsf{po}_t$, and a log cannot contain both commit and abort. A transaction log without commit or abort is called pending. Otherwise, it is complete. A complete transaction log with a commit is committed, and aborted otherwise. A history contains a set of transaction logs (with distinct identifiers) ordered by a (partial) session order so, which represents the order between transactions in the same session. It also includes a write-read relation wr which associates write events with read events. The write events associated to a read implicitly define the values observed (returned) by the read (read events do not include explicit values). Let $T$ be a set of transaction logs. For every key $x \in \mathsf{Keys}$ we consider a write-read relation $\mathsf{wr}_x \subseteq \mathsf{writes}(T) \times \mathsf{reads}(T)$. The union of $\mathsf{wr}_x$ for every $x \in \mathsf{Keys}$ is denoted by wr.
We extend the relations wr and ${ \mathsf { w r } } _ { x }$ to pairs of transactions by $( t _ { 1 } , t _ { 2 } ) \in \mathsf { w r }$ , resp., $( t _ { 1 } , t _ { 2 } ) \in \mathsf { w r } _ { x }$ , iff there exist events $w$ in $t _ { 1 }$ and $r$ in $t _ { 2 } , t _ { 2 } \neq t _ { 1 }$ s.t. $( w , r ) \in \mathsf { w r }$ , resp., $( w , r ) \in \mathsf { w r } _ { x }$ . Analogously, we extend wr and $\mathsf { w r } _ { x }$ to tuples formed of a transaction (containing a write) and a read event. We say that the transaction $t _ { 1 }$ is read by the transaction $t _ { 2 }$ when $( t _ { 1 } , t _ { 2 } ) \in$ wr. The inverse of $\mathsf { w r } _ { x }$ is defined as usual and denoted by $\mathsf { w r } _ { x } ^ { - 1 }$ . We assume that $\mathsf { w r } _ { x } ^ { - 1 }$ is a partial function and thus, use ${ \mathsf { w r } } _ { x } ^ { - 1 } ( e )$ to denote the write event $w$ such that $( w , e ) \in \mathsf { w r } _ { x }$ . We also use ${ \mathsf { w r } } _ { x } ^ { - 1 } ( e ) \downarrow$ and ${ \mathsf { w r } } _ { x } ^ { - 1 } ( e ) \uparrow$ to say that there exists a write $w$ such that $( w , e ) \in \mathsf { w r } _ { x }$ (resp. such write $w$ does not exist). To simplify the exposition, every history includes a distinguished transaction init preceding all the other transactions in so and inserting a row for every $x$ . It represents the initial state and it is the only transaction that may insert as value $\dagger _ { x }$ (indicating that initially, no row with key $x$ is present). Definition 1. $A$ history $( T , \mathsf { s o } , \mathsf { w r } )$ is a set of transaction logs $T$ along with a strict partial session order so, and $a$ write-read relation $\mathsf { w r } _ { x } \subseteq$ writes $( T ) \times$ reads $( T )$ for each $x \in \mathsf { K e y s } \ s . t$ . 
– the inverse of $\mathsf{wr}_x$ is a partial function,
– $\mathsf{so} \cup \mathsf{wr}$ is acyclic (here we use the extension of wr to pairs of transactions),
– if $(w, r) \in \mathsf{wr}_x$, then $\mathrm{value}_{\mathsf{wr}}(w, x) \neq \bot$, where

$$
\mathrm{value}_{\mathsf{wr}}(w, x) = \begin{cases}
\mathsf{r} & \text{if } w = \mathtt{INSERT}(\mathsf{R}) \wedge \mathsf{r} \in \mathsf{R} \wedge \mathsf{key}(\mathsf{r}) = x \\
\dagger_x & \text{if } w = \mathtt{DELETE}(\mathsf{p}) \wedge \mathsf{wr}_x^{-1}(w){\downarrow} \wedge \mathsf{p}(\mathrm{value}_{\mathsf{wr}}(\mathsf{wr}_x^{-1}(w), x)) = 1 \\
\mathsf{U}(x) & \text{if } w = \mathtt{UPDATE}(\mathsf{p}, \mathsf{U}) \wedge \mathsf{wr}_x^{-1}(w){\downarrow} \wedge \mathsf{p}(\mathrm{value}_{\mathsf{wr}}(\mathsf{wr}_x^{-1}(w), x)) = 1 \\
\bot & \text{otherwise}
\end{cases}
$$

The function $\mathsf{wr}_x^{-1}$ may be partial because some query may not read a key $x$, e.g., if the corresponding row does not satisfy the query predicate. The function $\mathrm{value}_{\mathsf{wr}}(w, x)$ returns the row with key $x$ written by the write event $w$. If $w$ is an INSERT, it returns the inserted row with key $x$. If $w$ is an UPDATE(p, U) event, it returns the value of $\mathsf{U}$ on key $x$ if $w$ reads a value for key $x$ that satisfies predicate p. If $w$ is a DELETE(p), it returns the special value $\dagger_x$ if $w$ reads a value for key $x$ that satisfies p. This special value indicates that the database does not contain a row with key $x$. In case no condition is satisfied, $\mathrm{value}_{\mathsf{wr}}(w, x)$ returns an undefined value $\bot$. We assume that the special values $\dagger_x$ or $\bot$ do not satisfy any predicate. Note that the recursion in the definition of $\mathrm{value}_{\mathsf{wr}}(w, x)$ terminates because wr is an acyclic relation.

Fig. 1: An example of a history (isolation levels omitted for legibility). Arrows represent so and wr relations. Transaction init defines the initial state: row 0 with key $x_1$ and row 1 with key $x_2$. Transaction $t_2$ reads $x_1$ and $x_2$ from init and deletes the row with key $x_1$ (the only row satisfying predicate $\lambda r : r \leq 0$ corresponds to key $x_1$). Transaction $t_1$ reads $x_1$ from $t_2$ and $x_2$ from init, and updates only the row with key $x_2$ as this is the only row satisfying predicate $\lambda r : r \geq 1$.

Figure 1 shows an example of a history. For the UPDATE event $w$ in $t_1$, $\mathrm{value}_{\mathsf{wr}}(w, x_1) = \bot$ because this event reads $x_1$ from the DELETE event in $t_2$; while $\mathrm{value}_{\mathsf{wr}}(w, x_2) = -2$ as it reads $x_2$ from the INSERT event in init. The set of transaction logs $T$ in a history $h = (T, \mathsf{so}, \mathsf{wr})$ is denoted by $\mathsf{tr}(h)$ and events$(h)$ is the union of events$(t)$ for every $t \in T$. For a history $h$ and an event $e$ in $h$, $\mathsf{tr}(e)$ is the transaction $t$ in $h$ that contains $e$. We assume that each event belongs to only one transaction.
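The recursive case analysis defining $\mathrm{value}_{\mathsf{wr}}$ can be sketched in a few lines of Python. This is our illustrative encoding (events as dicts, a `wr_inv` map for $\mathsf{wr}_x^{-1}$), not the paper's; the test values reproduce the Figure 1 scenario.

```python
BOT = object()     # the undefined value (bottom)
DAGGER = object()  # the deleted-row marker; satisfies no predicate

def value_wr(w, x, wr_inv):
    # Row with key x written by event w, following the case analysis in
    # Definition 1. Events are plain dicts; wr_inv maps (event id, key)
    # to the write event the query read that key from.
    if w["type"] == "INSERT":
        return w["rows"].get(x, BOT)
    if w["type"] in ("DELETE", "UPDATE"):
        prev = wr_inv.get((w["id"], x))
        if prev is not None:
            v = value_wr(prev, x, wr_inv)  # terminates: wr is acyclic
            # DAGGER and BOT satisfy no predicate, by convention.
            if v is not BOT and v is not DAGGER and w["where"](v):
                return DAGGER if w["type"] == "DELETE" else w["set"](x)
    return BOT
```

On the Figure 1 history, the UPDATE in $t_1$ yields the undefined value on $x_1$ (it reads $x_1$ from a DELETE, and the deleted-row marker falsifies its predicate) and $-2$ on $x_2$, matching the values worked out in the text.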
Also, $\mathsf{writes}(h) = \bigcup_{t \in \mathsf{tr}(h)} \mathsf{writes}(t)$ and $\mathsf{reads}(h) = \bigcup_{t \in \mathsf{tr}(h)} \mathsf{reads}(t)$. We extend so to pairs of events by $(e_1, e_2) \in \mathsf{so}$ if $(\mathsf{tr}(e_1), \mathsf{tr}(e_2)) \in \mathsf{so}$. Also, $\mathsf{po} = \bigcup_{t \in T} \mathsf{po}_t$. We use $h, h_1, h_2, \ldots$ to range over histories. For a history $h$, we say that an event $r$ reads $x$ in $h$ whenever $\mathsf{wr}_x^{-1}(r){\downarrow}$. Also, we say that an event $w$ writes $x$ in $h$, denoted by $w$ writes $x$, whenever $\mathrm{value}_{\mathsf{wr}}(w, x) \neq \bot$ and the transaction of $w$ is not aborted. We extend the function value to transactions: $\mathrm{value}_{\mathsf{wr}}(t, x)$ equals $\mathrm{value}_{\mathsf{wr}}(w, x)$, where $w$ is the maximal event in $\mathsf{po}_t$ that writes $x$.

# 2.3 Classes of histories

We define two classes of histories: (1) full histories which are required to define the semantics of isolation levels and (2) client histories which model what is observable from interacting with a database as a black-box. Full histories model the fact that every read query “inspects” an entire snapshot of the database in order to, for instance, select rows satisfying some predicate. Roughly, full histories contain a write-read dependency for every read and key. There is an exception which concerns “local” reads. If a transaction modifies a row with key $x$ and then reads the same row, then it must always return the value written in the transaction. This holds under all isolation levels.
In such a case, there would be no write-read dependency because these dependencies model interference across different transactions. We say that a read $r$ reads a key $x$ locally if it is preceded in the same transaction by a write $w$ that writes $x$.

Fig. 2: Examples of a client history $h$ and two possible extensions. The dashed edge belongs only to the extensions. The first extension is not a witness of $h$ as $t_1$ writes $-2$ on $x_2$ and $\mathtt{WHERE}(t_2)(-2) = 1$.

Definition 2. A full history $(T, \mathsf{so}, \mathsf{wr})$ is a history where $\mathsf{wr}_x^{-1}(r)$ is defined for all $x$ and $r$, unless $r$ reads $x$ locally.

Client histories record fewer write-read dependencies compared to full histories, which is formalized by the extends relation.

Definition 3. A history $\overline{h} = (T, \mathsf{so}, \overline{\mathsf{wr}})$ extends another history $h = (T, \mathsf{so}, \mathsf{wr})$ if $\mathsf{wr} \subseteq \overline{\mathsf{wr}}$. We denote it by $h \subseteq \overline{h}$.

Definition 4. A client history $h = (T, \mathsf{so}, \mathsf{wr})$ is a history s.t. there is a full history $\overline{h} = (T, \mathsf{so}, \overline{\mathsf{wr}})$ with $h \subseteq \overline{h}$, and s.t. for every $x$, if $(w, r) \in \overline{\mathsf{wr}}_x \setminus \mathsf{wr}_x$ then $\mathtt{WHERE}(r)(\mathrm{value}_{\mathsf{wr}}(w, x)) = 0$. The history $\overline{h}$ is called a witness of $h$.

Compared to a witness full history, a client history may omit write-read dependencies if the written values do not satisfy the predicate of the read query. These values would not be observable when interacting with the database as a black-box.
This includes the case when the write is a DELETE (recall that the special value $\dagger_x$ indicating deleted rows falsifies every predicate by convention). Figure 1 shows a full history as every query reads both $x_1$ and $x_2$. Figure 2a shows a client history: transactions $t_1$ and $t_2$ do not read $x_2$ and $x_1$, respectively. Figure 2b is an extension but not a witness, while Figure 2c is indeed a witness of it.

# 3 Axiomatic Semantics With Different Isolation Levels

We define an axiomatic semantics on histories where transactions can be assigned different isolation levels, which builds on the work of Biswas et al. [7].

# 3.1 Executions

An execution of a program is represented using a history with a set of transactions $T$ along with a total order $\mathsf{co} \subseteq T \times T$ called commit order. Intuitively, the commit order represents the order in which transactions are committed in the database.

Definition 5. An execution $\xi = (h, \mathsf{co})$ is a history $h = (T, \mathsf{so}, \mathsf{wr})$ along with a commit order $\mathsf{co} \subseteq T \times T$, such that transactions in the same session, or related by the write-read relation, are necessarily committed in that order: $\mathsf{so} \cup \mathsf{wr} \subseteq \mathsf{co}$. $\xi$ is called an execution of $h$.

For a transaction $t$, we use $t \in \xi$ to denote the fact that $t \in T$. Analogously, for an event $e$, we use $e \in \xi$ to denote that $e \in t$ and $t \in \xi$. The extension of a commit order to pairs of events or pairs of transactions and events is done in the obvious way.

# 3.2 Isolation Levels

Isolation levels enforce restrictions on the commit order in an execution that depend on the session order so and the write-read relation wr. An isolation level $\iota$ for a transaction $t$ is a set of constraints called axioms.
Intuitively, an axiom states that a read event $r \in t$ reads key $x$ from transaction $t_1$ if $t_1$ is the latest transaction that writes $x$ which is “visible” to $r$ – latest refers to the commit order co. Formally, an axiom $a$ is a predicate of the following form:

$$
a(r) := \forall x, t_1, t_2.\ t_1 \neq t_2 \wedge (t_1, r) \in \mathsf{wr}_x \wedge t_2 \text{ writes } x \wedge \mathsf{vis}_a(t_2, r, x) \Rightarrow (t_2, t_1) \in \mathsf{co} \quad (1)
$$

where $r$ is a read event from $t$. The visibility relation $\mathsf{vis}_a$ of $a$ is described by a formula of the form:

$$
\mathsf{vis}_a(\tau_0, \tau_{k+1}, x) : \exists \tau_1, \ldots, \tau_k.\ \bigwedge_{i=1}^{k+1} (\tau_{i-1}, \tau_i) \in \mathsf{Rel}_i \wedge \mathsf{WrCons}_a(\tau_0, \ldots, \tau_{k+1}, x)
$$

where each $\mathsf{Rel}_i$ is defined by the grammar:

$$
\mathsf{Rel} ::= \mathsf{po} \mid \mathsf{so} \mid \mathsf{wr} \mid \mathsf{co} \mid \mathsf{Rel} \cup \mathsf{Rel} \mid \mathsf{Rel};\mathsf{Rel} \mid \mathsf{Rel}^+ \mid \mathsf{Rel}^*
$$

This formula states that $\tau_0$ (which is $t_2$ in Eq. 1) is connected to $\tau_{k+1}$ (which is $r$ in Eq. 1) by a path of dependencies that go through some intermediate transactions or events $\tau_1, \ldots, \tau_k$. Every relation used in such a path is described based on po, so, wr and co using union $\cup$, composition of relations ;, and transitive closure operators. Finally, extra requirements on the intermediate transactions, e.g., writing a different key $y \neq x$, are encapsulated in the predicate $\mathsf{WrCons}_a(\tau_0, \ldots, \tau_k, x)$.
Each axiom $a$ uses a specific visibility relation denoted by $\mathsf{vis}_a$. $\mathsf{vis}(\iota)$ denotes the set of visibility relations used in the axioms defining an isolation level $\iota$. Figure 3 shows two axioms which correspond to their homonymous isolation levels [7]: Read Committed (RC) and Serializability (SER). SER states that $t_2$ is visible to $r$ if $t_2$ commits before $r$, while RC states that $t_2$ is visible to $r$ if either $(t_2, r) \in \mathsf{so}$ or there exists a previous event $r'$ in $\mathsf{tr}(r)$ that reads $x$ from $t_2$. Similarly, Read Atomic (RA) and Prefix Consistency (PC) are defined using their homonymous axioms, while Snapshot Isolation (SI) is defined as a conjunction of both Prefix and Conflict.

Fig. 3: Axioms defining the RC, RA, SER, PC and SI isolation levels respectively. Visibility relations are "inlined" to match the definitions in [7].

The isolation configuration of a history is a mapping $\mathsf{iso}(h) : T \to \mathsf{Iso}$ associating to each transaction an isolation level identifier from a set Iso. Whenever every transaction in a history has the same isolation level $\iota$, the isolation configuration of that history is denoted simply by $\iota$. Note that SER is stronger than RC: every transaction visible to a read $r$ according to RC is also visible to $r$ according to SER. This means SER imposes more commit-order constraints than RC on a transaction $t_1$ read by $r$. In general, for two isolation configurations $I_1$ and $I_2$, $I_1$ is stronger than $I_2$ when for every transaction $t$, $I_1(t)$ is stronger than $I_2(t)$ (i.e., whenever $I_1(t)$ holds in an execution $\xi$, $I_2(t)$ also holds in $\xi$). The weaker-than relationship is defined similarly.
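The containment behind the stronger-than relation can be made concrete: under SER every transaction committed before $\mathsf{tr}(r)$ is visible to $r$, while under RC only session predecessors and transactions read by earlier events of $\mathsf{tr}(r)$ are. A minimal sketch (hypothetical names) computing both visibility sets:

```python
# Sketch comparing SER and RC visibility for a read r (hypothetical names).
#   co : commit-order pairs, so : session-order pairs
#   tr : maps read events to their transaction
#   earlier_reads[r] : set of (r', x, t2) — events r' before r in tr(r)
#                      that read key x from transaction t2

def visible_ser(r, txs, co, tr):
    # SER: t2 is visible to r iff t2 commits before tr(r)
    return {t2 for t2 in txs if (t2, tr[r]) in co}

def visible_rc(r, x, txs, so, tr, earlier_reads):
    # RC: t2 is visible to r iff (t2, tr(r)) in so, or some earlier event
    # r' in tr(r) reads x from t2
    vis = {t2 for t2 in txs if (t2, tr[r]) in so}
    vis |= {t2 for (_, key, t2) in earlier_reads.get(r, set()) if key == x}
    return vis
```

Since $\mathsf{so} \cup \mathsf{wr} \subseteq \mathsf{co}$ in any execution, the RC-visible set is always contained in the SER-visible one, which is exactly the containment stated above.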
Given a history $h$ with isolation configuration $\mathsf{iso}(h)$, $h$ is called consistent when there exists an execution $\xi$ of $h$ such that for all transactions $t$ in $\xi$, the axioms in $\mathsf{iso}(h)(t)$ are satisfied in $\xi$ (the interpretation of an axiom over an execution is defined as expected). For example, let $h$ be the full history in Figure 2c. If both $t_1$ and $t_2$'s isolation levels are SER, then $h$ is not consistent, i.e., every execution $\xi = (h, \mathsf{co})$ violates the corresponding axioms. Assume for instance that $(t_1, t_2) \in \mathsf{co}$. Then, by axiom SER, as $(\mathrm{init}, t_2) \in \mathsf{wr}_{x_1}$ and $t_1$ writes $x_1$, we get that $(t_1, \mathrm{init}) \in \mathsf{co}$, which is impossible as $(\mathrm{init}, t_1) \in \mathsf{so} \subseteq \mathsf{co}$. However, if the isolation configuration is weaker (for example $\mathsf{iso}(h)(t_2) = \mathsf{RC}$), then the history is consistent using $\mathrm{init} <_{\mathsf{co}} t_1 <_{\mathsf{co}} t_2$ as commit order.

Definition 6. A full history $h = (T, \mathsf{so}, \mathsf{wr})$ with isolation configuration $\mathsf{iso}(h)$ is consistent iff there is an execution $\xi$ of $h$ s.t. $\bigwedge_{t \in T,\, r \in \mathsf{reads}(t),\, a \in \mathsf{iso}(h)(t)} a(r)$ holds in $\xi$; $\xi$ is called a consistent execution of $h$.

The notion of consistency on full histories is extended to client histories. Definition 7.
A client history $h = (T, \mathsf{so}, \mathsf{wr})$ with isolation configuration $\mathsf{iso}(h)$ is consistent iff there is a full history $\overline{h}$ with the same isolation configuration which is a witness of $h$ and consistent; $\overline{h}$ is called a consistent witness of $h$. In general, a witness of a client history may not be consistent. In particular, there may exist several witnesses but no consistent witness.

# 3.3 Validation of the semantics

To justify the axiomatic semantics defined above, we define an operational semantics inspired by real implementations and prove that every run of a program can be translated into a consistent history. Every instruction is associated with an increasing timestamp and reads from a snapshot of the database defined according to the isolation level of the enclosing transaction. At the end of the transaction we evaluate whether the transaction can be committed. We assume that a transaction can abort only if explicitly stated in the program. We model an optimistic approach where, if a transaction cannot commit, the run blocks (modelling unexpected aborts). We focus on three of the most used isolation levels: SER, SI and RC. Other isolation levels can be handled in a similar manner. For each run $\rho$ we extract a full history $\mathsf{history}(\rho)$. We show by induction that $\mathsf{history}(\rho)$ is consistent at every step. The formal description of the semantics and its correctness can be found in Appendix A.

Theorem 1. For every run $\rho$, $\mathsf{history}(\rho)$ is consistent.

# 4 Complexity of Checking Consistency

# 4.1 Saturation and Boundedness

We investigate the complexity of checking if a history is consistent. Our axiomatic framework characterizes isolation levels as conjunctions of axioms as in Equation (1). However, some isolation levels impose stronger constraints than others.
For studying the complexity of checking consistency, we classify them into two categories: saturable and non-saturable. An isolation level is saturable if its visibility relations are defined without using the co relation (i.e., the grammar in Equation (3) omits the co relation). Otherwise, we say that the isolation level is non-saturable. For example, RC and RA are saturable while PC, SI and SER are not.

Algorithm 1 Extending an initial pco relation with necessary ordering constraints

Definition 8. An isolation configuration $\mathsf{iso}(h)$ is saturable if for every transaction $t$, $\mathsf{iso}(h)(t)$ is a saturable isolation level. Otherwise, $\mathsf{iso}(h)$ is non-saturable.

We say an isolation configuration $\mathsf{iso}(h)$ is bounded if there exists a fixed $k \in \mathbb{N}$ s.t. for every transaction $t$, $\mathsf{iso}(h)(t)$ is defined as a conjunction of at most $k$ axioms that contain at most $k$ quantifiers. For example, SER employs one axiom and four quantifiers, while SI employs two axioms, Prefix and Conflict, with four and five quantifiers respectively. Any isolation configuration composed of the SER, SI, PC, RA and RC isolation levels is bounded. We assume in the following that isolation configurations are bounded. Checking consistency requires computing the $\mathsf{value}_{\mathsf{wr}}$ function and thus evaluating WHERE predicates. In the following, we assume that evaluating WHERE predicates on a single row requires constant time.

# 4.2 Checking Consistency of Full Histories

Algorithm 2 computes necessary and sufficient conditions for the existence of a consistent execution $\xi = (h, \mathsf{co})$ for a history $h$ with a saturable isolation configuration.
It calls saturate, defined in Algorithm 1, to compute a "partial" commit order relation pco that includes $(\mathsf{so} \cup \mathsf{wr})^+$ and any other dependency between transactions that can be deduced from the isolation configuration. A consistent execution exists iff this partial commit order is acyclic. Algorithm 2 generalizes the results in [7] to full histories with heterogeneous saturable isolation configurations. The correctness and complexity analysis of Algorithms 1 and 2 can be found in Appendix B.1.

Theorem 2. Checking consistency of full histories with bounded saturable isolation configurations can be done in polynomial time.

For bounded non-saturable isolation configurations, checking if a history is consistent is NP-complete as an immediate consequence of the results in [7]. These previous results apply to the particular case of transactions having the same isolation level and being formed of classic read and write instructions on a fixed set of variables. The latter can be simulated by SQL queries using WHERE predicates that select rows based on their key being equal to some particular value. For instance, $\mathtt{SELECT}(\lambda r : \mathsf{key}(r) = x)$ simulates a read of a "variable" $x$.

# 4.3 Checking Consistency of Client Histories

We show that going from full histories to client histories, the consistency checking problem becomes NP-complete, independently of the isolation configurations. Intuitively, NP-hardness comes from keys that are not included in outputs of SQL queries. The justification for the consistency of omitting such rows can be ambiguous, e.g., multiple values written to a row may not satisfy the predicate of the WHERE clause, or multiple deletes can justify the absence of a row. The width of a history, $\mathsf{width}(h)$, is the maximum number of transactions which are pairwise incomparable w.r.t. so.
In a different context, previous work [7] showed that bounding the width of a history (considering it to be a constant) is a sufficient condition for obtaining polynomial-time consistency checking algorithms. This is not true for client histories.

Theorem 3. Checking consistency of bounded-width client histories with a bounded isolation configuration stronger than RC and $\mathsf{width}(h) \geq 3$ is NP-complete.

The proof of NP-hardness uses a reduction from 1-in-3 SAT which is inspired by the work of Gibbons and Korach [16] (Theorem 2.7) concerning sequential consistency for shared memory implementations. Our reduction is a non-trivial extension because it has to deal with any weak isolation configuration stronger than RC. A detailed proof of the result can be found in Appendix B.2. The proof of Theorem 3 relies on using non-trivial predicates in WHERE clauses. We also prove that checking consistency of client histories is NP-complete irrespective of the complexity of these predicates. This result uses another class of histories, called partial-observation histories. These histories are a particular class of client histories where events read all inserted keys, irrespective of their WHERE clauses (as if these clauses were true).

Definition 9. A partial observation history $h = (T, \mathsf{so}, \mathsf{wr})$ is a client history for which there is a witness $\overline{h} = (T, \mathsf{so}, \overline{\mathsf{wr}})$ of $h$ s.t. for every $x$, if $(w, r) \in \overline{\mathsf{wr}}_x \setminus \mathsf{wr}_x$, then $w$ deletes $x$.

Theorem 4. Checking consistency of partial observation histories with bounded isolation configurations stronger than RC is NP-complete.

The proof of NP-hardness (see Appendix B.3) uses a novel reduction from 3-SAT. The main difficulty for obtaining consistent witnesses of partial observation histories is the ambiguity of which delete event is responsible for each absent row.
Algorithm 3 Checking consistency of client histories

# 5 Effectively Checking Consistency of Client Histories

The result of Theorem 3 implicitly asks whether there exist conditions on histories under which checking consistency remains polynomial as in [7]. We describe an algorithm for checking consistency of client histories and identify cases in which it runs in polynomial time. Consider a client history $h = (T, \mathsf{so}, \mathsf{wr})$ which is consistent. For every consistent witness $\overline{h} = (T, \mathsf{so}, \overline{\mathsf{wr}})$ of $h$ there exists a consistent execution $\xi = (\overline{h}, \mathsf{co})$ of $\overline{h}$. The commit order co contains $(\mathsf{so} \cup \mathsf{wr})^+$ and any other ordering constraint derived from the axioms by observing that $(\mathsf{so} \cup \mathsf{wr})^+ \subseteq \mathsf{co}$. More generally, co includes all constraints generated by the least fixpoint of the function saturate defined in Algorithm 1 when starting from $(\mathsf{so} \cup \mathsf{wr})^+$ as partial commit order. This least fixpoint exists because saturate is monotonic. It is computed as usual by iterating saturate until the output does not change. We use $\mathrm{FIX}(\lambda R : \mathrm{SATURATE}(h, R))((\mathsf{so} \cup \mathsf{wr})^+)$ to denote this least fixpoint. In general, such a fixpoint computation is just an under-approximation of co, and it is not enough for determining $h$'s consistency. The algorithm we propose, described in Algorithm 3, exploits the partial commit order pco obtained by such a fixpoint computation (line 2) for determining $h$'s consistency.
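Since saturate is monotonic over a finite set of pairs, the least fixpoint at line 2 can be computed by plain iteration. A minimal sketch (hypothetical names; the saturate step is passed in as a parameter, and the closure computation is deliberately naive):

```python
# Sketch of the least fixpoint FIX(λR. SATURATE(h, R))((so ∪ wr)+).
# saturate_step(h, pco) is assumed monotonic: it returns pco plus any
# commit-order pairs forced by the (saturable) axioms.

def transitive_closure(pairs):
    closure = set(pairs)
    changed = True
    while changed:                      # naive O(n^2) pass per round
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

def saturation_fixpoint(h, so, wr, saturate_step):
    pco = transitive_closure(so | wr)   # start from (so ∪ wr)+
    while True:
        nxt = transitive_closure(saturate_step(h, pco))
        if nxt == pco:                  # monotonicity => iteration converges
            return pco
        pco = nxt

def is_acyclic(pairs):
    # for saturable configurations (§4.2), a consistent execution exists
    # iff the saturated partial commit order is acyclic
    return all(a != b for (a, b) in transitive_closure(pairs))
```

For non-saturable configurations the resulting pco is only an under-approximation of co, which is why Algorithm 3 combines it with an explicit search.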
For a read $r$ and key $x$, we define $1_x^r(\mathsf{pco})$, resp., $0_x^r(\mathsf{pco})$, to be the set of transactions that are not committed after $\mathsf{tr}(r)$ and which write a value that satisfies, resp., does not satisfy, the predicate $\mathtt{WHERE}(r)$. The formal description of both sets is given in Equation 4.

$$ \begin{array}{r} { 1_x^r(\mathsf{pco}) = \{ t \in T \mid (\mathsf{tr}(r), t) \notin \mathsf{pco} \ \wedge \ \mathtt{WHERE}(r)(\mathsf{value}_{\mathsf{wr}}(t, x)) = 1 \} } \\ { 0_x^r(\mathsf{pco}) = \{ t \in T \mid (\mathsf{tr}(r), t) \notin \mathsf{pco} \ \wedge \ \mathtt{WHERE}(r)(\mathsf{value}_{\mathsf{wr}}(t, x)) = 0 \} } \end{array} \quad (4) $$

The set $0_x^r(\mathsf{pco})$ can be used to identify extensions that are not witnesses of a history. Consider the client history $h$ depicted in Figure 4a. Observe that $t_3$ does not read $x_1$ and $t_5$ does not read $x_2$. Table 4b describes all possible full extensions $\overline{h}$ of $h$. An execution $\xi = (\overline{h}, \mathsf{co})$ is consistent only if $(t, r) \in \overline{\mathsf{wr}}_x \setminus \mathsf{wr}_x$ implies $\mathtt{WHERE}(r)(\mathsf{value}_{\mathsf{wr}}(t, x)) = 0$. This implies that extensions $h_1$, $h_4$ and $h_7$, where $(\mathrm{init}, t_5) \in \overline{\mathsf{wr}}_{x_2}$, are not witnesses of $h$, as $\mathtt{WHERE}(t_5)(\mathtt{value}_{\mathsf{wr}}(\mathrm{init}, x_2)) = 1$. We note that $\mathrm{init} \notin 0_{x_2}^{t_5}(\mathsf{pco}) = \{t_1\}$. Also, observe that $(t_5, t_3) \in \mathsf{wr}$; so extensions $h_3$, $h_6$ and $h_9$, where $(t_3, t_5) \in \overline{\mathsf{wr}}_{x_2}$, are not witnesses of $h$. Once again, $t_3 \notin 0_{x_2}^{t_5}(\mathsf{pco})$. In general, for every read event $r$ and key $x$ s.t. $\mathsf{wr}_x^{-1}(r) \uparrow$, the extension of $h$ where $(t, r) \in \overline{\mathsf{wr}}_x$ with $t \notin 0_x^r(\mathsf{pco})$ is not a witness of $h$.

Fig. 4: Comparison between conflict-free extensions and full extensions of the history $h$ in Figure 4a. (a) A history where $t_3$, $t_5$ have PC and SER as isolation levels respectively; the isolation levels of the other transactions are unspecified. (b) Table describing all possible full extensions of the history in Figure 4a. (c) Table describing the only conflict-free extension of Figure 4a. In $h$, $\mathsf{wr}^{-1}$ is not defined for two pairs: $(t_3, x_1)$ and $(t_5, x_2)$; we identify the single SELECT event in a transaction with its transaction. Table 4b describes all possible full extensions of $h$. For example, the first extension, $h_1$, states that $(\mathrm{init}, t_3) \in \mathsf{wr}_{x_1}$ and $(\mathrm{init}, t_5) \in \mathsf{wr}_{x_2}$. Algorithm 3 only explores the single extension $h_{258}$ described in Table 4c, where $\mathsf{wr}_{x_1}^{-1}(t_3) \uparrow$ and $(t_1, t_5) \in \mathsf{wr}_{x_2}$. The history $h_{258}$ can be extended to histories $h_2$, $h_5$ and $h_8$.
In particular, if $\mathsf{wr}_x^{-1}(r) \uparrow$ but $0_x^r(\mathsf{pco}) = \emptyset$, then no witness of $h$ can exist. The sets $0_x^r(\mathsf{pco})$ are not sufficient to determine whether a witness is a consistent witness, as our previous example shows: $0_{x_1}^{t_3}(\mathsf{pco}) = \{\mathrm{init}, t_2, t_5\}$, but $h_2$ is not consistent. Algorithm 3 combines an enumeration of history extensions with a search for a consistent execution of each extension. The extensions are not necessarily full. In case $\mathsf{wr}_x^{-1}(r)$ is undefined, we use the sets $1_x^r(\mathsf{pco})$ to decide whether the extension of $h$ requires specifying $\mathsf{wr}_x^{-1}(r)$ for determining $h$'s consistency. Algorithm 3 specifies $\mathsf{wr}_x^{-1}(r)$ only if $(r, x)$ is a so-called conflict, i.e., $\mathsf{wr}_x^{-1}(r)$ is undefined and $1_x^r(\mathsf{pco}) \neq \emptyset$. Following the example of Figure 4, we observe that $1_{x_1}^{t_3}(\mathsf{pco}) = \emptyset$ (all transactions that write on $x_1$ write non-negative values), but $1_{x_2}^{t_5}(\mathsf{pco}) = \{\mathrm{init}\}$. Intuitively, this means that if some extension $h'$ that does not specify $\mathsf{wr}_{x_1}^{-1}(t_3)$ does not violate any axiom when using some commit order co, then we can extend $h'$, defining $\mathsf{wr}_{x_1}^{-1}(t_3)$ as some adequate transaction, and obtain a full history $\overline{h}$ s.t. the execution $\xi = (\overline{h}, \mathsf{co})$ is consistent. On the other hand, specifying the write-read dependency of $t_5$ on $x_2$ matters.
To avoid contradicting any axiom using co, we may require $(\mathrm{init}, t_5) \in \overline{\mathsf{wr}}_{x_2}$. However, such an extension is not even a witness of $h$, as $\mathtt{WHERE}(t_5)(\mathtt{value}_{\mathsf{wr}}(\mathrm{init}, x_2)) = 1$. This intuition holds for the particular definitions of the isolation levels that Algorithm 3 considers. A history is conflict-free if it does not have conflicts. Our previous discussion reduces the problem of checking consistency of a history to checking consistency of its conflict-free extensions. For example, the history $h$ in Figure 4a is not conflict-free, but the extension $h_{258}$ defined in Table 4c is. Instead of checking consistency of the nine possible extensions, we only check consistency of $h_{258}$. Algorithm 3 starts by checking if there is at least one conflict-free extension of $h$ (line 6). If $h$ is conflict-free, it directly calls Algorithm 4 (line 7); otherwise, it iterates over the conflict-free extensions of $h$, calling Algorithm 4 on each of them (line 11). Algorithm 4 describes the search for the commit order of a conflict-free history $h$. This is a recursive enumeration of consistent prefixes of histories that backtracks when detecting inconsistency (it generalizes Algorithm 2 in [7]). A prefix of a history $h = (T, \mathsf{so}, \mathsf{wr})$ is a tuple $P = (T_P, M_P)$ where $T_P \subseteq T$ is a set of transactions and $M_P : \mathsf{Keys} \to T_P$ is a mapping s.t. (1) so-predecessors of transactions in $T_P$ are also in $T_P$, i.e., $\forall t \in T_P.\ \mathsf{so}^{-1}(t) \in T_P$, and (2) for every $x$, $M_P(x)$ is a so-maximal transaction in $T_P$ that writes $x$ ($M_P$ records a last write for every key).
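The prefix structure just defined can be captured by a small data check. A minimal sketch (hypothetical names) validating conditions (1) and (2) and performing the extension operation; condition (2) is read here as: no writer of $x$ in $T_P$ is so-after $M_P(x)$:

```python
# Sketch of the prefix structure P = (T_P, M_P) (hypothetical names).
#   so     : session-order pairs
#   writes : writes[t] = set of keys written by transaction t

def is_prefix(T_P, M_P, T, so, writes):
    if not T_P <= T:
        return False
    # (1) so-predecessors of transactions in T_P are also in T_P
    for t in T_P:
        for (a, b) in so:
            if b == t and a not in T_P:
                return False
    # (2) M_P(x) is in T_P, writes x, and no so-later writer of x is in T_P
    for x, t in M_P.items():
        if t not in T_P or x not in writes[t]:
            return False
        if any((t, t2) in so and x in writes[t2] for t2 in T_P):
            return False
    return True

def extend(T_P, M_P, t, writes):
    # extension P ∪ {t}: t becomes the last write for every key it writes
    M2 = dict(M_P)
    for x in writes[t]:
        M2[x] = t
    return T_P | {t}, M2
```

Here `extend` mirrors the extensions used by the search: the added transaction becomes the recorded last write for all keys it writes.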
For every prefix $P = (T_P, M_P)$ of a history $h$ and a transaction $t \in T \setminus T_P$, we say a prefix $P' = (T_{P'}, M_{P'})$ of $h$ is an extension of $P$ using $t$ if $T_{P'} = T_P \cup \{t\}$ and, for every key $x$, $M_{P'}(x)$ is $t$ or $M_P(x)$. The extensions performed by Algorithm 4, denoted $P \cup \{t\}$, guarantee that for every key $x$, if $t$ writes $x$, then $M_{P'}(x) = t$. Extending the prefix $P$ using $t$ means that any transaction $t' \in T_P$ is committed before $t$. Algorithm 4 focuses on special extensions that lead to commit orders of consistent executions.

Table 1: Predicates relating prefixes and visibility relations, where $\overline{\mathsf{pco}_t^P}$ is defined as $\mathsf{pco} \cup \{(t', t) \mid t' \in T_P\} \cup \{(t, t'') \mid t'' \in T \setminus (T_P \cup \{t\})\}$.

Definition 10. Let $h$ be a history, $P = (T_P, M_P)$ be a prefix of $h$, $t$ a transaction that is not in $T_P$ and $P' = (T_{P'}, M_{P'})$ be an extension of $P$ using $t$. The prefix $P'$ is a consistent extension of $P$ with $t$, denoted by $P \triangleright P'$, if 1. $P$ is pco-closed: for every transaction $t' \in T$, if $(t', t) \in \mathsf{pco}$ then $t' \in T_P$; 2.
t does not overwrite other transactions in $P$: for every read event $r$ outside of the prefix, i.e., $\mathsf{tr}(r) \in T \setminus T_{P'}$, and every visibility relation $v \in \mathsf{vis}(\mathsf{iso}(h))(\mathsf{tr}(r))$, the predicate $\mathsf{vp}_v^P(t, r)$ defined in Table 1 holds in $h$. We say that a prefix is consistent if it is either the empty prefix or a consistent extension of a consistent prefix.

Fig. 5: Applying Algorithm 4 on the conflict-free consistent history $h_{258}$ on the left. (a) Conflict-free history corresponding to the extension $h_{258}$ (Table 4c) of the history in Figure 4a. (b) Execution of Algorithm 3 on the history in Figure 5a. The right part pictures a search for valid extensions of consistent prefixes of $h_{258}$. Prefixes are represented by their so-maximal transactions, e.g., $\langle t_2 \rangle$ contains all transactions which are before $t_2$ in so, i.e., $\{\mathrm{init}, t_1, t_2\}$. A red arrow means that the search is blocked (the prefix at the target is not a consistent extension), while a blue arrow means that the search continues.

Figure 5b depicts the execution of Algorithm 4 on the conflict-free history of Figure 5a (history $h_{258}$ from Table 4c). Blocked and successful calls are represented by red and blue arrows respectively. The red arrow $a$ is due to condition 1 in Definition 10: as $t_3$ enforces PC, reads $x_4$ from $t_2$, and $t_4$ is visible to it $(\mathsf{vis}_{\mathsf{Prefix}}(t_4, t_3, x_4))$, we have $(t_4, t_2) \in \mathsf{pco}$; so consistent prefixes cannot contain $t_2$ if they do not contain $t_4$.
The red arrow $b$ is due to condition 2: as $t_5$ enforces SER and it reads $x_1$ from $t_4$, consistent prefixes cannot contain $t_2$ unless $t_5$ is included. When reaching the prefix $\langle t_3, t_5 \rangle$, the search terminates and deduces that $h$ is consistent. From the commit order induced by the search tree we can construct the extension of $h$ where missing write-read dependencies are obtained by applying the axioms on such a commit order. In our case, from $\mathrm{init} <_{\mathsf{co}} t_1 <_{\mathsf{co}} t_4 <_{\mathsf{co}} t_5 <_{\mathsf{co}} t_2 <_{\mathsf{co}} t_3$, we deduce that the execution $\xi = (h_5, \mathsf{co})$ is a consistent execution of $h_{258}$, and hence of $h$; where $h_5$ is the history described in Table 4b. For complexity optimizations (see Appendix B.5), Algorithm 4 requires an isolation level-dependent equivalence relation between consistent prefixes. If there is a transaction $t \in T$ s.t. $\mathsf{iso}(h)(t) = \mathsf{SI}$, prefixes $P = (T_P, M_P)$ and $P' = (T_{P'}, M_{P'})$ are equivalent iff they are equal (i.e., $T_P = T_{P'}$ and $M_P = M_{P'}$). Otherwise, they are equivalent iff $T_P = T_{P'}$. The proof of Algorithm 3's correctness can be found in Appendix B.4.

Theorem 5. Let $h$ be a client history whose isolation configuration is defined using {SER, SI, PC, RA, RC}. Algorithm 3 returns true if and only if $h$ is consistent.

In general, Algorithm 3 is exponential in the number of conflicts in $h$. The number of conflicts is denoted by $\#\mathsf{conf}(h)$.
The exponent in the number of conflicts comes from the number of mappings in $X_h$ explored by Algorithm 3 ($E_h$ is the set of conflicts in $h$). The exponents in the history width and size come from the number of prefixes explored by Algorithm 4, which is $|h|^{\mathsf{width}(h)} \cdot \mathsf{width}(h)^{|\mathsf{Keys}|}$ in the worst case (prefixes can be equivalently described by a set of so-maximal transactions and a mapping associating keys to sessions). The full details of the complexity proof of Algorithm 3 can be found in Appendix B.5.

Theorem 6. For every client history $h$ whose isolation configuration is composed of {SER, SI, PC, RA, RC} isolation levels, Algorithm 3 runs in $\mathcal{O}(|h|^{\#\mathsf{conf}(h) + \mathsf{width}(h) + 9} \cdot \mathsf{width}(h)^{|\mathsf{Keys}|})$. Moreover, if no transaction employs the SI isolation level, Algorithm 3 runs in $\mathcal{O}(|h|^{\#\mathsf{conf}(h) + \mathsf{width}(h) + 8})$.

On bounded-width, conflict-free histories only using SER, PC, RA and RC as isolation levels, Algorithm 3 runs in polynomial time. For instance, standard reads and writes can be simulated using INSERT and SELECT with WHERE clauses that select rows based on their key being equal to some particular value. In this case, histories are conflict-free (wr would be defined for the particular key required by the clause, and writes on other keys would not satisfy the clause). A more general setting, where WHERE clauses restrict only values that are immutable during the execution (e.g., primary keys) and deletes only affect non-read rows, also falls in this category.

# 6 Experimental evaluation

We evaluate an implementation of checkConsistency in the context of the Benchbase [12] database benchmarking framework. We apply this algorithm on histories extracted from randomly generated client programs of a number of database-backed applications. We use PostgreSQL 14.10 as a database.
The experiments were performed on an Apple M1 with 8 cores and 16 GB of RAM.

Implementation. We extend the Benchbase framework with an additional package for generating histories and checking consistency. Applications from Benchbase are instrumented in order to extract histories, the wr relation in particular. Our implementation is publicly available [5]. Our tool takes as input a configuration file specifying the name of the application and the isolation level of each transaction in that application. For computing the wr relation and generating client histories, we extend the database tables with an extra column WRITEID which is updated by every write instruction with a unique value. SQL queries are also modified to return whole rows instead of selected columns. To extract the wr relation for UPDATE and DELETE we add RETURNING clauses. Complex operators such as INNER JOIN are substituted by simple juxtaposed SQL queries (similarly to [8]). We map the result of each query to local structures for generating the corresponding history. Transactions aborted by the database (and not explicitly by the application) are discarded.

Fig. 6: Running time of Algorithm 3 on the Twitter, TPC-C and TPC-C PC benchmarks with 10 transactions per session, for the SER, SI, RC, SER+RC and SI+RC isolation configurations, as the number of sessions increases.

Benchmark. We analyze a set of benchmarks inspired by real-world applications and evaluate them under different types of clients and isolation configurations. We focus on isolation configurations implemented in PostgreSQL, i.e., compositions of the SER, SI and RC isolation levels. On average, the ratio of SER/SI transactions is 11% for Twitter and 88% for TPC-C and TPC-C PC. These distributions are obtained via the random generation of client programs implemented in BenchBase.
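The WRITEID-based extraction of the wr relation described in the Implementation paragraph above can be sketched with an in-memory SQLite database. The schema and helper names below are hypothetical; the actual tool instruments Benchbase applications and additionally rewrites UPDATE/DELETE queries with RETURNING clauses:

```python
import sqlite3

# Sketch of wr-relation extraction via an extra WRITEID column
# (hypothetical schema; not the actual Benchbase instrumentation).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v INTEGER, writeid TEXT)")

writer_of = {}  # maps each unique write id to the writing transaction

def write(tx, k, v):
    wid = f"{tx}:{k}"                   # fresh unique id per write
    writer_of[wid] = tx
    conn.execute(
        "INSERT OR REPLACE INTO kv (k, v, writeid) VALUES (?, ?, ?)",
        (k, v, wid))

def read(tx, where_sql):
    # queries return whole rows, so the writeid column is always available
    wr = set()
    for k, v, wid in conn.execute(
            f"SELECT k, v, writeid FROM kv WHERE {where_sql}"):
        wr.add((writer_of[wid], tx))    # (writer, reader) pair of wr
    return wr

write("t1", "x", 1)
write("t2", "y", 2)
assert read("t3", "v > 0") == {("t1", "t3"), ("t2", "t3")}
```

Because every returned row carries the id of its last write, each SELECT directly yields its incoming wr edges without any guessing.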
Twitter [12] models a social network that allows users to publish tweets and retrieve their followers, their tweets, and tweets published by users they follow. We consider five isolation configurations: the homogeneous SER, SI and RC, and the heterogeneous SER+RC and SI+RC, where publishing a tweet is SER (resp., SI) and the rest are RC. The ratio of SER (resp., SI) transactions w.r.t. RC is 11% on average. TPC-C [24] models an online shopping application with five types of transactions: reading the stock, creating a new order, getting its status, paying it and delivering it. We consider five isolation configurations: the homogeneous SER, SI and RC, and the combinations SER+RC and SI+RC, where creating a new order and paying it have SER (respectively SI) as isolation level while the rest have RC. The ratio of SER (resp., SI) transactions w.r.t. RC is 88% on average. TPC-C PC is a variant of the TPC-C benchmark whose histories are always conflict-free. DELETE queries are replaced by UPDATE with the aid of extra columns simulating the absence of a row. Queries whose WHERE clauses query mutable values are replaced by multiple simple instructions querying only immutable values such as unique ids and primary keys.

Experimental Results. We designed two experiments to evaluate checkConsistency's performance under different isolation configurations: one increasing the number of sessions (with the number of transactions per session fixed), and one increasing the number of transactions per session (with the number of sessions fixed).
We use a timeout of 60 seconds per history.

Fig. 7: Running time of Algorithm 3 when increasing the number of transactions per session. We plot the average running time of 5 random clients of each size.

The first experiment investigates the scalability of Algorithm 3 when increasing the number of sessions. For each benchmark and isolation configuration, we consider 5 histories of random clients (each history is for a different client) with an increasing number of sessions and 10 transactions per session (around 400 histories across all benchmarks). No timeouts appear with fewer than 4 sessions. Figure 6 shows the running time of this experiment. The second experiment investigates the scalability of Algorithm 3 when increasing the number of transactions. For each benchmark and isolation configuration, we consider 5 histories of random clients, each having 3 sessions and an increasing number of transactions per session (around 1900 histories across all benchmarks). Figure 7 shows its running time. The similar runtimes of isolation configurations containing SI and those without it show that, in practice, the bottleneck of Algorithm 3 is the number of possible history extensions enumerated at line 11 of Algorithm 3, i.e., the number of conflicts in a history. This number is influenced by the distribution of types of transactions, e.g., for TPC-C, a bigger number of transactions creating new orders increases the number of possible full history extensions. Other isolation levels not implemented by PostgreSQL, e.g., prefix consistency (PC), are expected to produce similar results. Both experiments show that Algorithm 3 scales well for histories with a small number of writes (like Twitter) or conflicts (like TPC-C PC). In particular, Algorithm 3 is quite efficient for the typical workloads needed to expose bugs in production databases, which contain fewer than 10 transactions [7,20,18].
A third experiment compares Algorithm 3 with a baseline consisting of a naive approach that enumerates witnesses and executions of such witnesses until consistency is determined. We consider Twitter and TPC-C as benchmarks and execute 5 histories of random clients, each having 3 sessions and an increasing number of transactions per session (around 100 histories across all benchmarks). We execute each client under RC and check the obtained histories for consistency with respect to SER. The naive approach either times out for 35.5%, resp., 95.5% of the histories of Twitter, resp., TPC-C, or finishes in 5s on average (max 25s). In comparison, Algorithm 3 has no timeouts for Twitter and times out for 5.5% of the TPC-C histories, finishing in 1.5s on average (max 12s). Averages are computed w.r.t. non-timeout instances. The total number of executed clients is around 100. Only one TPC-C history was detected as inconsistent, which shows that the naive approach does not time out only in the worst case (inconsistency is a worst case because all extensions and commit orders must be proved invalid). A similar analysis on the TPC-C PC benchmark is omitted: TPC-C PC is a conflict-free variation of TPC-C with more operations per transaction, so the rate of timeouts of the naive approach increases w.r.t. TPC-C, while the rate of timeouts of Algorithm 3 decreases. Comparisons with prior work [7,4,18,20] are not possible as those tools do not apply to SQL (see Section 7 for more details). This evaluation demonstrates that our algorithm scales well to practical testing workloads and that it outperforms brute-force search.

# 7 Related work

The formalization of database isolation levels has been considered in previous work. Adya [2] has proposed axiomatic specifications for isolation levels, which however do not cover more modern isolation levels like PC or SI and are based on a low-level modeling of database snapshots.
We follow the more modern approach in [11,7], which however addresses the restricted case where transactions are formed of reads and writes on a static set of keys (variables), not generic SQL queries, and all the transactions in a given execution have the same isolation level. Our axiomatic model builds on axioms defined by Biswas et al. [7], which are however applied on a new model of executions that is specific to SQL queries. The complexity of checking consistency w.r.t. isolation levels has been studied in [21,7]. The work of Papadimitriou [21] shows that checking serializability is NP-complete, while the work of Biswas et al. [7] provides results for the same isolation levels as in our work, but in the restricted case mentioned above. Checking consistency in a non-transactional case, shared-memory or distributed systems, has been investigated in a number of works, e.g., [9,16,13,10,17,14,1,15,3]. Transactions introduce additional challenges that make these results not applicable. Existing tools for checking consistency in the transactional case of distributed databases, e.g., [7,4,18,20], cannot handle SQL-like semantics, offering guarantees modulo their transformations to reads and writes on static sets of keys. Our results show that handling the SQL-like semantics is strictly more complex (NP-hard in most cases).

# References

1. Parosh Aziz Abdulla, Mohamed Faouzi Atig, Bengt Jonsson, and Tuan Phong Ngo. Optimal stateless model checking under the release-acquire semantics. Proc. ACM Program. Lang., 2(OOPSLA):135:1–135:29, 2018. doi:10.1145/3276505. 2. A. Adya. Weak consistency: A generalized theory and optimistic implementations for distributed transactions. Technical report, USA, 1999. 3. Pratyush Agarwal, Krishnendu Chatterjee, Shreya Pathak, Andreas Pavlogiannis, and Viktor Toman. Stateless model checking under a reads-value-from equivalence. In Alexandra Silva and K. Rustan M.
Leino, editors, Computer Aided Verification - 33rd International Conference, CAV 2021, Virtual Event, July 20-23, 2021, Proceedings, Part I, volume 12759 of Lecture Notes in Computer Science, pages 341–366. Springer, 2021. doi:10.1007/978-3-030-81685-8_16. 4. Peter Alvaro and Kyle Kingsbury. Elle: Inferring isolation anomalies from experimental observations. Proc. VLDB Endow., 14(3):268–280, 2020. URL: http://www.vldb.org/pvldb/vol14/p268-alvaro.pdf, doi:10.5555/3430915.3442427. 5. anonymous authors. Benchbase-evaluation, October 2024. URL: omittedforanonymity. 6. Hal Berenson, Philip A. Bernstein, Jim Gray, Jim Melton, Elizabeth J. O'Neil, and Patrick E. O'Neil. A critique of ANSI SQL isolation levels. In Michael J. Carey and Donovan A. Schneider, editors, Proceedings of the 1995 ACM SIGMOD International Conference on Management of Data, San Jose, California, USA, May 22-25, 1995, pages 1–10. ACM Press, 1995. doi:10.1145/223784.223785. 7. Ranadeep Biswas and Constantin Enea. On the complexity of checking transactional consistency. Proc. ACM Program. Lang., 3(OOPSLA):165:1–165:28, 2019. doi:10.1145/3360591. 8. Ranadeep Biswas, Diptanshu Kakwani, Jyothi Vedurada, Constantin Enea, and Akash Lal. Monkeydb: effectively testing correctness under weak isolation levels. Proc. ACM Program. Lang., 5(OOPSLA):1–27, 2021. doi:10.1145/3485546. 9. Ahmed Bouajjani, Constantin Enea, Rachid Guerraoui, and Jad Hamza. On verifying causal consistency. In Giuseppe Castagna and Andrew D. Gordon, editors, Proceedings of the 44th ACM SIGPLAN Symposium on Principles of Programming Languages, POPL 2017, Paris, France, January 18-20, 2017, pages 626–638. ACM, 2017. doi:10.1145/3009837.3009888. 10. Jason F. Cantin, Mikko H. Lipasti, and James E. Smith. The complexity of verifying memory coherence and consistency. IEEE Trans. Parallel Distributed Syst., 16(7):663–671, 2005. doi:10.1109/TPDS.2005.86. 11. Andrea Cerone, Giovanni Bernardi, and Alexey Gotsman.
A framework for transactional consistency models with atomic visibility. In Luca Aceto and David de Frutos-Escrig, editors, 26th International Conference on Concurrency Theory, CONCUR 2015, Madrid, Spain, September 1-4, 2015, volume 42 of LIPIcs, pages 58–71. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2015. doi:10.4230/LIPIcs.CONCUR.2015.58. 12. Djellel Eddine Difallah, Andrew Pavlo, Carlo Curino, and Philippe Cudré-Mauroux. Oltp-bench: An extensible testbed for benchmarking relational databases. Proc. VLDB Endow., 7(4):277–288, 2013. URL: http://www.vldb.org/pvldb/vol7/p277-difallah.pdf, doi:10.14778/2732240.2732246. 13. Michael Emmi and Constantin Enea. Sound, complete, and tractable linearizability monitoring for concurrent collections. Proc. ACM Program. Lang., 2(POPL):25:1–25:27, 2018. doi:10.1145/3158113. 14. Florian Furbach, Roland Meyer, Klaus Schneider, and Maximilian Senftleben. Memory-model-aware testing: A unified complexity analysis. ACM Trans. Embed. Comput. Syst., 14(4):63:1–63:25, 2015. doi:10.1145/2753761. 15. Phillip B. Gibbons and Ephraim Korach. On testing cache-coherent shared memories. In Lawrence Snyder and Charles E. Leiserson, editors, Proceedings of the 6th Annual ACM Symposium on Parallel Algorithms and Architectures, SPAA '94, Cape May, New Jersey, USA, June 27-29, 1994, pages 177–188. ACM, 1994. doi:10.1145/181014.181328. 16. Phillip B. Gibbons and Ephraim Korach. Testing shared memories. SIAM J. Comput., 26(4):1208–1244, 1997. doi:10.1137/S0097539794279614. 17. Alex Gontmakher, Sergey V. Polyakov, and Assaf Schuster. Complexity of verifying java shared memory execution. Parallel Process. Lett., 13(4):721–733, 2003. doi:10.1142/S0129626403001628. 18. Kaile Huang, Si Liu, Zhenge Chen, Hengfeng Wei, David A. Basin, Haixiang Li, and Anqun Pan. Efficient black-box checking of snapshot isolation in databases. Proc. VLDB Endow., 16(6):1264–1276, 2023.
URL: https://www.vldb.org/pvldb/vol16/p1264-wei.pdf, doi:10.14778/3583140.3583145. 19. Jepsen. Distributed systems testing, 2020. https://jepsen.io/. 20. Si Liu, Long Gu, Hengfeng Wei, and David A. Basin. Plume: Efficient and complete black-box checking of weak isolation levels. Proc. ACM Program. Lang., 8(OOPSLA2):876–904, 2024. doi:10.1145/3689742. 21. Christos H. Papadimitriou. The serializability of concurrent database updates. J. ACM, 26(4):631–653, 1979. doi:10.1145/322154.322158. 22. Andrew Pavlo. What are we doing with our lives?: Nobody cares about our concurrency control research. In Semih Salihoglu, Wenchao Zhou, Rada Chirkova, Jun Yang, and Dan Suciu, editors, Proceedings of the 2017 ACM International Conference on Management of Data, SIGMOD Conference 2017, Chicago, IL, USA, May 14-19, 2017, page 3. ACM, 2017. doi:10.1145/3035918.3056096. 23. Douglas B. Terry, Alan J. Demers, Karin Petersen, Mike Spreitzer, Marvin Theimer, and Brent B. Welch. Session guarantees for weakly consistent replicated data. In Proceedings of the Third International Conference on Parallel and Distributed Information Systems (PDIS 94), Austin, Texas, USA, September 28-30, 1994, pages 140–149. IEEE Computer Society, 1994. doi:10.1109/PDIS.1994.331722. 24. TPC. Technical report, Transaction Processing Performance Council, February 2010. URL: http://www.tpc.org/tpc_documents_current_versions/pdf/tpc-c_v5.11.0.pdf.

# A An operational semantics for SQL-like distributed databases (Section 3.3)

Fig. A.1: An operational semantics for transactional programs. Above, $\mathtt{last}(h, j)$ denotes the last transaction log in the session order $\mathsf{so}(j)$ of $h$, while $\mathsf{snapshot}_\iota$ and $\mathsf{readFrom}$ denote the snapshot visible to an instruction and the writes it reads from, respectively. The predicate $\mathsf{validate}_\iota$ checks whether a transaction can be committed. They are defined in Figure A.3.
Formally, the operational semantics is defined as a transition relation $\Rightarrow$ between configurations. A configuration is a tuple containing the following:

– a history $h$ recording the instructions executed in the past,
– a valuation map $\gamma$ that records local variable values in the current transaction of each session ($\gamma$ associates identifiers of sessions that have live transactions with valuations of local variables),
– a map $\mathsf{B}$ that stores the code of each live transaction (associating session identifiers with code),
– a map $\mathsf{I}$ that tracks the isolation level of each executed transaction,
– a map $\mathsf{T}$ that associates events in the history with unique timestamps,
– a map $\mathsf{S}$ that associates events in the history with snapshots of the database,
– sessions/transactions $\mathsf{P}$ that remain to be executed from the original program.

The database-access rules for insert, select, update and delete have the following shape:

$$\textsc{insert}\ \frac{\begin{array}{c} e \text{ fresh} \quad t = \mathtt{last}(h,j) \quad \iota = \mathsf{iso}(h)(t) \quad \mathsf{B}(j) = \mathrm{INSERT}(\mathsf{R});\mathsf{B} \\ \tau = 1 + \max\{\mathsf{T}(e') \mid e' \in \mathsf{events}(h)\} \quad \mathsf{T}' = \mathsf{T}[e \mapsto \tau] \\ \delta = \mathsf{snapshot}_\iota(h,\mathsf{S},\mathsf{T}',e,\mathrm{INSERT}) \quad h' = h \oplus_j (e, \mathrm{INSERT}(\mathsf{R})) \end{array}}{h,\gamma,\mathsf{B},\mathsf{I},\mathsf{T},\mathsf{S},\mathsf{P} \Rightarrow h',\gamma,\mathsf{B}[j \mapsto \mathsf{B}],\mathsf{I},\mathsf{T}',\mathsf{S}[e \mapsto \delta],\mathsf{P}}$$

$$\textsc{select}\ \frac{\begin{array}{c} e \text{ fresh} \quad t = \mathtt{last}(h,j) \quad \iota = \mathsf{iso}(h)(t) \quad \mathsf{B}(j) = a := \mathrm{SELECT}(\mathsf{p});\mathsf{B} \\ \tau = 1 + \max\{\mathsf{T}(e') \mid e' \in \mathsf{events}(h)\} \quad \mathsf{T}' = \mathsf{T}[e \mapsto \tau] \\ \delta = \mathsf{snapshot}_\iota(h,\mathsf{S},\mathsf{T}',e,\mathrm{SELECT}) \quad \mathsf{w} = \mathsf{readFrom}(h,\mathsf{T},t,\delta) \\ h' = (h \oplus_j (e, \mathrm{SELECT}(\mathsf{p}))) \bigoplus_{x \in \mathsf{Keys},\, \mathsf{w}[x] \neq \bot} \mathsf{wr}(\mathsf{w}[x], e) \end{array}}{h,\gamma,\mathsf{B},\mathsf{I},\mathsf{T},\mathsf{S},\mathsf{P} \Rightarrow h',\gamma[(j,a) \mapsto \{r \in \delta : \mathsf{p}(r)\}],\mathsf{B}[j \mapsto \mathsf{B}],\mathsf{I},\mathsf{T}',\mathsf{S}[e \mapsto \delta],\mathsf{P}}$$

$$\textsc{update}\ \frac{\begin{array}{c} e \text{ fresh} \quad t = \mathtt{last}(h,j) \quad \iota = \mathsf{iso}(h)(t) \quad \mathsf{B}(j) = \mathrm{UPDATE}(\mathsf{p},\mathsf{U});\mathsf{B} \\ \tau = 1 + \max\{\mathsf{T}(e') \mid e' \in \mathsf{events}(h)\} \quad \mathsf{T}' = \mathsf{T}[e \mapsto \tau] \\ \delta = \mathsf{snapshot}_\iota(h,\mathsf{S},\mathsf{T}',e,\mathrm{UPDATE}) \quad \mathsf{w} = \mathsf{readFrom}(h,\mathsf{T},t,\delta) \\ h' = (h \oplus_j (e, \mathrm{UPDATE}(\mathsf{p},\mathsf{U}))) \bigoplus_{x \in \mathsf{Keys},\, \mathsf{w}[x] \neq \bot} \mathsf{wr}(\mathsf{w}[x], e) \end{array}}{h,\gamma,\mathsf{B},\mathsf{I},\mathsf{T},\mathsf{S},\mathsf{P} \Rightarrow h',\gamma,\mathsf{B}[j \mapsto \mathsf{B}],\mathsf{I},\mathsf{T}',\mathsf{S}[e \mapsto \delta],\mathsf{P}}$$

$$\textsc{delete}\ \frac{\begin{array}{c} e \text{ fresh} \quad t = \mathtt{last}(h,j) \quad \iota = \mathsf{iso}(h)(t) \quad \mathsf{B}(j) = \mathrm{DELETE}(\mathsf{p});\mathsf{B} \\ \tau = 1 + \max\{\mathsf{T}(e') \mid e' \in \mathsf{events}(h)\} \quad \mathsf{T}' = \mathsf{T}[e \mapsto \tau] \\ \delta = \mathsf{snapshot}_\iota(h,\mathsf{S},\mathsf{T}',e,\mathrm{DELETE}) \quad \mathsf{w} = \mathsf{readFrom}(h,\mathsf{T},t,\delta) \\ h' = (h \oplus_j (e, \mathrm{DELETE}(\mathsf{p}))) \bigoplus_{x \in \mathsf{Keys},\, \mathsf{w}[x] \neq \bot} \mathsf{wr}(\mathsf{w}[x], e) \end{array}}{h,\gamma,\mathsf{B},\mathsf{I},\mathsf{T},\mathsf{S},\mathsf{P} \Rightarrow h',\gamma,\mathsf{B}[j \mapsto \mathsf{B}],\mathsf{I},\mathsf{T}',\mathsf{S}[e \mapsto \delta],\mathsf{P}}$$

For readability, we define a program as a partial function $\mathsf{P} : \mathsf{SessId} \rightharpoonup \mathsf{Sess}$ that associates session identifiers in $\mathsf{SessId}$ with sequences of transactions as defined in Section 2.1.
Similarly, the session order so in a history is defined as a partial function $\mathsf{so} : \mathsf{SessId} \rightharpoonup \mathsf{Tlogs}^*$ that associates session identifiers with sequences of transaction logs. Two transaction logs are ordered by so if one occurs before the other in some sequence $\mathsf{so}(j)$ with $j \in \mathsf{SessId}$. Before presenting the definition of $\Rightarrow$, we introduce some notation. Let $h$ be a history that contains a representation of so as above. We use $h \oplus_j (t, \iota_t, E, \mathsf{po}_t)$ to denote the history where $(t, \iota_t, E, \mathsf{po}_t)$ is appended to $\mathsf{so}(j)$. Also, for an event $e$, $h \oplus_j e$ is the history obtained from $h$ by adding $e$ to the last transaction log in $\mathsf{so}(j)$, as the last event in the program order of this log (i.e., if $\mathsf{so}(j) = \sigma;(t, \iota_t, E, \mathsf{po}_t)$, then the session order $\mathsf{so}'$ of $h \oplus_j e$ is defined by $\mathsf{so}'(k) = \mathsf{so}(k)$ for all $k \neq j$ and $\mathsf{so}'(j) = \sigma;(t, \iota_t, E \cup \{e\}, \mathsf{po}_t \cup \{(e', e) : e' \in E\})$). Finally, for a history $h = (T, \mathsf{so}, \mathsf{wr})$, $h \oplus \mathsf{wr}(t, e)$ is the history obtained from $h$ by adding $(t, e)$ to the write-read relation. Figures A.1 and A.2 list the rules defining $\Rightarrow$. We distinguish between local computation rules (if-true, if-false and local) and database-access rules (begin, insert, select, update, delete, commit and abort), each associated with its homonymous instruction.
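The history-extension operators $\oplus_j$ introduced above can be read as purely functional map updates; a minimal sketch, with a hypothetical record layout for transaction logs:

```python
# Minimal sketch of the history-extension operators: so maps session ids
# to sequences of transaction logs, each log holding its events in program
# order. The concrete record layout here is an illustrative assumption.

def append_txn(so, j, tlog):
    """h (+)_j (t, ...): append a fresh transaction log to session j."""
    new = dict(so)
    new[j] = so.get(j, []) + [tlog]
    return new

def append_event(so, j, event):
    """h (+)_j e: add e as the last event of the last log of session j."""
    new = dict(so)
    *prefix, last = so[j]
    new[j] = prefix + [{**last, "events": last["events"] + [event]}]
    return new

so = append_txn({}, "s1", {"txn": "t1", "events": []})
so = append_event(so, "s1", "begin")
print(so["s1"][0]["events"])  # ['begin']
```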
Database accesses get an increasing timestamp $\tau$ as well as an isolation-dependent snapshot of the database computed with the predicate $\mathsf{snapshot}_\iota$; the timestamp and snapshot maps ($\mathsf{T}$ and $\mathsf{S}$ respectively) are updated accordingly. Timestamps are used for validating the writes of a transaction and blocking inconsistent runs, as well as for defining the set of possible snapshots any event can get. We use the predicate readFrom to determine the values read by an event. Those reads depend on both the event's snapshot and the timestamp of every previously executed event. Their formal definitions are given in Figure A.3. The begin rule starts a new transaction, provided that there is no other live transaction ($\mathsf{B} = \epsilon$) in the same session. It adds an empty transaction log to the history and schedules the body of the transaction. if-true and if-false check the truth value of the Boolean condition of an if conditional. local handles the case where some local computation is required. insert, select, update and delete handle the database accesses. insert adds some rows $\mathsf{R}$ to the history. select, update and delete read every key from a combination of the event's snapshot and the local writes determined by the readFrom function. The writes predicate implicitly uses the information previously stored in the history via the function $\mathsf{value}_{\mathsf{wr}}$. Finally, commit and abort validate that the run of the transaction corresponds to the isolation level specification. These rules may block in case the validation is not satisfied, as the validation predicate does not change with the application of posterior rules.
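The read resolution just described (local writes take precedence, otherwise the latest write committed no later than the event's snapshot) can be sketched as follows; the flat list representation of committed writes is an illustrative assumption:

```python
# Sketch of the readFrom logic: a read returns the last local write of the
# current transaction if one exists, and otherwise the latest write whose
# commit timestamp is at most the event's snapshot timestamp. The data
# layout (tuples (commit_ts, key, value)) is hypothetical.

def read_from(local_writes, committed, key, snapshot_ts):
    if key in local_writes:                          # localWr[x] defined
        return local_writes[key]
    visible = [(ts, v) for ts, k, v in committed
               if k == key and ts <= snapshot_ts]
    return max(visible)[1] if visible else None      # latest visible write

committed = [(1, "x", 10), (3, "x", 30)]
print(read_from({}, committed, "x", 2))         # 10: ts=3 is past snapshot
print(read_from({"x": 99}, committed, "x", 2))  # 99: local write wins
```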
An initial configuration for a program $\mathsf{P}$ contains the program $\mathsf{P}$ along with a history $h = (\{t_0\}, \emptyset, \emptyset)$, where $t_0$ is a transaction log containing only writes that write the initial values of all keys and whose timestamp and snapshot are 0 ($\mathsf{S}, \mathsf{T} = [t_0 \mapsto 0]$), and it contains neither transaction code nor local variable valuations ($\gamma, \mathsf{B} = \varnothing$). A run $\rho$ of a program $\mathsf{P}$ is a sequence of configurations $c_0 c_1 \ldots c_n$ where $c_0$ is an initial configuration for $\mathsf{P}$, and $c_m \Rightarrow c_{m+1}$, for every $0 \leq m < n$. We say that $c_n$ is reachable from $c_0$. The history of such a run, $\mathsf{history}(\rho)$, is the history $h_n$ in the last configuration $c_n$. A configuration is called final if it contains the empty program ($\mathsf{P} = \emptyset$). Let $\mathsf{hist}(\mathsf{P})$ denote the set of all histories of runs of $\mathsf{P}$ that end in a final configuration.
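The commit-time validation for snapshot isolation (a first-committer-wins check on transactions writing common keys) can be sketched as follows; the timestamp and key-set representation is a hypothetical simplification:

```python
# Sketch of the SI commit validation: a transaction must abort if another
# transaction writing a common key committed within its lifetime
# (first-committer-wins). Timestamps are hypothetical integers.

def validate_si(txn, others):
    """txn has begin/end timestamps and a written key set; others have
    commit timestamps and written key sets."""
    return not any(
        txn["keys"] & other["keys"]
        and txn["begin"] < other["commit"] < txn["end"]
        for other in others
    )

t = {"begin": 1, "end": 5, "keys": {"x"}}
print(validate_si(t, [{"commit": 3, "keys": {"x"}}]))  # False: conflict
print(validate_si(t, [{"commit": 7, "keys": {"x"}}]))  # True
```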
Fig. A.3 defines the predicates $\mathsf{snapshot}_\iota$, $\mathsf{readFrom}$, $\mathtt{localWr}$ and $\mathsf{validate}_\iota$ used by the rules above. The last local write of the current transaction $t$ on a key $x$ is

$$\mathtt{localWr}[x] = \max_{\mathsf{po}}\{e \mid \mathsf{tr}(e) = t \ \wedge\ e \text{ writes } x\} \cup \{\perp\}$$

and, when there is no such local write, $\mathsf{readFrom}(h, \mathsf{T}, t, \delta)$ selects for $x$ the write $w_x$ such that

$$w_x \text{ writes } x \ \wedge\ \mathsf{T}(w_x) = \max\left\{\mathsf{T}(w') \;\middle|\; \begin{array}{l} w' \in \mathsf{events}(h) \wedge w' \text{ writes } x \ \wedge \\ \mathsf{T}(\mathsf{commit}(\mathsf{tr}(w'))) \leq \delta \end{array}\right\}$$

The commit-time validation for SI requires that no other transaction writing a common key committed within the lifetime of $t$:

$$\mathsf{validate}_{\mathrm{SI}}(h, \mathsf{T}', t) = \left(\begin{array}{l} \nexists t' \in h,\, x \in \mathsf{Keys} \text{ s.t. } t \text{ writes } x \ \wedge\ t' \text{ writes } x \ \wedge \\ \mathsf{T}'(\mathsf{begin}(t)) < \mathsf{T}'(\mathsf{commit}(t')) < \mathsf{T}'(\mathsf{end}(t)) \end{array}\right)$$

The proof of Theorem 1 is split in two parts: Lemma 2 and Lemma 4. In Lemma 2, we prove by induction that for any run $\rho$, $\mathsf{history}(\rho)$ is a full history, using the auxiliary Lemma 1 about pending transactions. We then define in Equation 5 a relation on transactions that plays the role of a consistency witness for $\mathsf{history}(\rho)$. We prove in Lemma 3 that this relation is a commit order for $\mathsf{history}(\rho)$, and conclude in Lemma 4 that $\mathsf{history}(\rho)$ is indeed consistent. In all cases, we do a case-by-case analysis depending on which rule is employed during the inductive step. To simplify our notation, we denote by $\mathsf{rule}(\rho, j, \rho')$ the rule that, applied to run $\rho$ on session $j$, leads to run $\rho'$.

Lemma 1. Let $\rho$ be a run and $\mathsf{history}(\rho) = (T, \mathsf{so}, \mathsf{wr})$ be its history. Any pending transaction in $T$ is $(\mathsf{so} \cup \mathsf{wr})$-maximal.

Proof.
We prove by induction on the length of a run $\rho$ that any pending transaction is $(\mathsf{so} \cup \mathsf{wr})$-maximal, where $\mathsf{history}(\rho) = (T, \mathsf{so}, \mathsf{wr})$. The base case, where $\rho = \{c_0\}$ and $c_0$ is an initial configuration, is immediate by definition. Let us suppose that for every run of length at most $n$ the property holds and let $\rho'$ be a run of length $n + 1$. As $\rho'$ is a sequence of configurations, there exist a reachable run $\rho$ of length $n$, a session $j$ and a rule $r$ s.t. $r = \mathsf{rule}(\rho, j, \rho')$. Let $h = (T, \mathsf{so}, \mathsf{wr})$, $h' = (T', \mathsf{so}', \mathsf{wr}')$ and $e$ denote $\mathsf{history}(\rho)$, $\mathsf{history}(\rho')$ and the last event in po-order belonging to $\mathtt{last}(h, j)$, respectively. By the induction hypothesis, any pending transaction in $h$ is $(\mathsf{so} \cup \mathsf{wr})$-maximal. To conclude the inductive step, we show that for every possible rule $r$ s.t. $r = \mathsf{rule}(\rho, j, \rho')$, the property also holds in $h'$. – local, if-false, if-true, insert, commit, abort: The result trivially holds as $\mathsf{wr}' = \mathsf{wr}$, $\mathsf{so}' = \mathsf{so}$ and $\mathsf{complete}(T) \subseteq \mathsf{complete}(T')$. – begin: We observe that in this case, $T' = T \cup \{\mathtt{last}(h, j)\}$, $\mathsf{reads}(T') = \mathsf{reads}(T)$, $\mathsf{wr}' = \mathsf{wr}$ and $\mathsf{so}' = \mathsf{so} \cup \{(t', \mathtt{last}(h, j)) \mid \mathsf{ses}(t') = j\}$.
Thus, $\mathtt{last}(h, j)$ is pending and $(\mathsf{so}' \cup \mathsf{wr}')$-maximal. Moreover, as described in Figure A.1, $\mathsf{B}(j) = \epsilon$, so there is no other transaction in session $j$ that is pending. Hence, as $T' \setminus \mathsf{complete}(T') = (T \setminus \mathsf{complete}(T)) \cup \{\mathtt{last}(h, j)\}$, by the induction hypothesis, every pending transaction is $(\mathsf{so}' \cup \mathsf{wr}')$-maximal. – select, update, delete: Figure A.2 describes $h'$ by the equation $h' = (h \oplus_j (e, \mathsf{rule}(\rho, j, s))) \bigoplus_{x \in \mathsf{Keys},\, \mathsf{w}[x] \neq \perp} \mathsf{wr}(\mathsf{w}[x], e)$, where $e$ is the new event executed and $\mathsf{w}$ is defined following the descriptions in Figures A.2 and A.3. In this case, $T' = T$, $\mathsf{reads}(T') = \mathsf{reads}(T) \cup \{e\}$, $\mathsf{so}' = \mathsf{so}$, $\forall x \in \mathsf{Keys}$ s.t. $\mathsf{w}[x] = \perp$, $\mathsf{wr}_x' = \mathsf{wr}_x$ and $\forall x \in \mathsf{Keys}$ s.t. $\mathsf{w}[x] \neq \perp$, $\mathsf{wr}_x' = \mathsf{wr}_x \cup \{(\mathsf{w}[x], e)\}$. Note that, as described in Figure A.3, in the latter case, when $\mathsf{w}[x] \neq \perp$, $\mathsf{tr}(\mathsf{w}[x]) \in \mathsf{complete}(T) = \mathsf{complete}(T')$. In conclusion, using the induction hypothesis, every pending transaction is $(\mathsf{so}' \cup \mathsf{wr}')$-maximal.

Lemma 2. For every run $\rho$, $\mathsf{history}(\rho)$ is a full history.

Proof.
We prove by induction on the length of a run $\rho$ that $\mathsf{history}(\rho)$ is a full history; the base case, where $\rho = \{c_0\}$ and $c_0$ is an initial configuration, is trivial by definition. Let us suppose that for every run of length at most $n$ the property holds and let $\rho'$ be a run of length $n + 1$. As $\rho'$ is a sequence of configurations, there exist a reachable run $\rho$ of length $n$, a session $j$ and a rule $r$ s.t. $r = \mathsf{rule}(\rho, j, \rho')$. Let $h = (T, \mathsf{so}, \mathsf{wr})$, $h' = (T', \mathsf{so}', \mathsf{wr}')$ and $e$ denote $\mathsf{history}(\rho)$, $\mathsf{history}(\rho')$ and the last event in po-order belonging to $\mathtt{last}(h, j)$, respectively. By the induction hypothesis, $h$ is a full history. To conclude the inductive step, we show that for every possible rule $r$ s.t. $r = \mathsf{rule}(\rho, j, \rho')$, the history $h'$ is also a full history. In particular, by Definitions 1 and 2, it suffices to prove that $\mathsf{so}' \cup \mathsf{wr}'$ is an acyclic relation and that for every variable $x$ and read event $r$, $\mathsf{wr}_x'^{-1}(r)\downarrow$ if and only if $r$ does not read $x$ from a local write, and in such case, $\mathsf{value}_{\mathsf{wr}'}(\mathsf{wr}_x'^{-1}(r), x) \neq \perp$. – local, if-false, if-true, insert, commit, abort: The result trivially holds as $\mathsf{reads}(T') = \mathsf{reads}(T)$, $\mathsf{wr}' = \mathsf{wr}$ and $\mathsf{so}' = \mathsf{so}$, using that $h$ is consistent.
– begin: We observe that $h' = h \oplus_j (e, \mathsf{begin})$, so $T' = T \cup \{\mathsf{last}(h,j)\}$, $\mathsf{reads}(T') = \mathsf{reads}(T)$, $\mathsf{wr}' = \mathsf{wr}$ and $\mathsf{so}' = \mathsf{so} \cup \{(t', \mathsf{last}(h,j)) \mid \mathsf{ses}(t') = j\}$. In such case, by Lemma 1, $\mathsf{last}(h,j)$ is $\mathsf{so}' \cup \mathsf{wr}'$-maximal. Thus, $\mathsf{so}' \cup \mathsf{wr}'$ is acyclic as $\mathsf{so} \cup \mathsf{wr}$ is also acyclic. Finally, as $\mathsf{wr}' = \mathsf{wr}$, we conclude that $h'$ is a full history. – select, update, delete: Here $h' = (h \oplus_j (e, \mathsf{rule}(\rho,j,s))) \bigoplus_{x \in \mathsf{Keys}, \mathsf{w}[x] \neq \bot} \mathsf{wr}(\mathsf{w}[x], e)$, where $e$ is the new event executed and $\mathsf{w}$ is defined following the descriptions in Figures A.2 and A.3. In this case, $T' = T$, $\mathsf{reads}(T') = \mathsf{reads}(T) \cup \{e\}$, $\mathsf{so}' = \mathsf{so}$, $\forall x \in \mathsf{Keys}$ s.t. $\mathsf{w}[x] = \bot$, $\mathsf{wr}'_x = \mathsf{wr}_x$ and $\forall x \in \mathsf{Keys}$ s.t. $\mathsf{w}[x] \neq \bot$, $\mathsf{wr}'_x = \mathsf{wr}_x \cup \{(\mathsf{w}[x], e)\}$. Note that as the timestamp of any event is always positive and $\mathbf{T}(\mathsf{init}) = 0$, for any key $x$, $\mathsf{w}[x] \neq \bot$ if and only if $\mathsf{localWr}[x] = \bot$.
Thus, $\mathsf{w}$ is well defined, and $\mathsf{wr}'^{-1}_x(r){\downarrow}$ if and only if $\mathsf{localWr}[x] = \bot$. In such case, as any event $w$ writes on a key $x$ if and only if $\mathsf{value}_{\mathsf{wr}}(w,x) \neq \bot$, we conclude that $\mathsf{value}_{\mathsf{wr}'}(\mathsf{wr}'^{-1}_x(r), x) \neq \bot$. To conclude the result, we need to show that $\mathsf{so}' \cup \mathsf{wr}'$ is acyclic. As $\rho$ is reachable, by Figure A.3's definition we know that for any event $r$ and key $x$, if $\mathsf{wr}^{-1}_x(r){\downarrow}$, then $\mathsf{tr}(\mathsf{wr}^{-1}_x(r)) \in \mathsf{cmtt}(h)$. Thus, by Lemma 1, $\mathsf{last}(h,j)$ is $\mathsf{so}' \cup \mathsf{wr}'$-maximal as it is not committed. Therefore, by the definition of $\mathsf{so}'$ and $\mathsf{wr}'$, as $\mathsf{so} \cup \mathsf{wr}$ is acyclic and $\mathsf{last}(h,j)$ is $\mathsf{so}' \cup \mathsf{wr}'$-maximal, $\mathsf{so}' \cup \mathsf{wr}'$ is also acyclic. In conclusion, $h'$ is a full history. Having proven that for any run $\rho$, $\mathsf{history}(\rho)$ is a full history, we need to prove that there exists a commit order $\mathsf{co}_\rho$ that witnesses $\mathsf{history}(\rho)$'s consistency. Equation 5 defines a relation that Lemma 3 proves to be a total order for $\mathsf{history}(\rho)$.
$$ (t,t') \in \mathsf{co}_\rho \iff \left\{ \begin{array}{l} t \in \mathsf{complete}(T) \wedge t' \in \mathsf{complete}(T) \wedge \mathbf{T}(\mathsf{end}(t)) < \mathbf{T}(\mathsf{end}(t')), \text{ or} \\ t \in \mathsf{complete}(T) \wedge t' \notin \mathsf{complete}(T), \text{ or} \\ t \notin \mathsf{complete}(T) \wedge t' \notin \mathsf{complete}(T) \wedge \mathbf{T}(\mathsf{begin}(t)) < \mathbf{T}(\mathsf{begin}(t')) \end{array} \right. \quad (5) $$ Lemma 3. For every run $\rho$, the relation $\mathsf{co}_\rho$ defined above is a commit order for $\mathsf{history}(\rho)$. Proof. We prove by induction on the length of a run $\rho$ that the relation $\mathsf{co}_\rho$ defined by Equation 5 is a commit order for $\mathsf{history}(\rho)$, i.e., if $\mathsf{history}(\rho) = (T, \mathsf{so}, \mathsf{wr})$, then $\mathsf{so} \cup \mathsf{wr} \subseteq \mathsf{co}_\rho$. The base case, where $\rho$ is composed only of an initial configuration, is immediate as in such case $\mathsf{wr} = \emptyset$. Let us suppose that the property holds for every run of length at most $n$ and let $\rho'$ be a run of length $n+1$. As $\rho'$ is a sequence of configurations, there exist a reachable run $\rho$ of length $n$, a session $j$ and a rule $\mathsf{r}$ s.t. $\mathsf{r} = \mathsf{rule}(\rho,j,\rho')$. Let us write $h = (T, \mathsf{so}, \mathsf{wr})$, $h' = (T', \mathsf{so}', \mathsf{wr}')$ and $e$ for $\mathsf{history}(\rho)$, $\mathsf{history}(\rho')$ and the last event in po-order belonging to $\mathsf{last}(h,j)$, respectively.
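For intuition, the order defined by Equation 5 can be read operationally: completed transactions are ordered by the timestamp of their end event, every completed transaction precedes every pending one, and pending transactions are ordered by the timestamp of their begin event. The sketch below is illustrative only; the dictionary representation and field names are assumptions, not part of the formal model.

```python
# Hypothetical encoding of a transaction: whether it is completed,
# plus the timestamps of its begin and (if completed) end events.

def co_precedes(t1, t2):
    """Return True iff (t1, t2) is in co_rho, following Equation 5."""
    if t1["completed"] and t2["completed"]:
        # Completed transactions: ordered by end-event timestamp.
        return t1["end_ts"] < t2["end_ts"]
    if t1["completed"] and not t2["completed"]:
        # Every completed transaction precedes every pending one.
        return True
    if not t1["completed"] and not t2["completed"]:
        # Pending transactions: ordered by begin-event timestamp.
        return t1["begin_ts"] < t2["begin_ts"]
    # A pending transaction never precedes a completed one.
    return False

t_done = {"completed": True, "begin_ts": 1, "end_ts": 4}
t_pend = {"completed": False, "begin_ts": 5, "end_ts": None}
assert co_precedes(t_done, t_pend) and not co_precedes(t_pend, t_done)
```

Since the model assigns distinct timestamps to distinct events, exactly one of `co_precedes(t, t')` and `co_precedes(t', t)` holds for any two distinct transactions, which is why the relation is a total order.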
By induction hypothesis, $\mathsf{co}_\rho$ is a commit order for $h$. To conclude the inductive step, we show that $\mathsf{co}_{\rho'}$ is also a commit order for $h'$. – local, if-false, if-true: As $h = h'$ and $\mathbf{T}_{\rho'} = \mathbf{T}_\rho$, $\mathsf{co}_{\rho'} = \mathsf{co}_\rho$. Thus, the result trivially holds. – begin: In this case, $e = \mathsf{begin}(\mathsf{last}(h,j))$ and $\mathsf{last}(h,j) \notin \mathsf{complete}(T_{\rho'})$. Note that for any event $e' \neq e$, $\mathbf{T}(e) > \mathbf{T}(e')$ and $\mathsf{complete}(T_{\rho'}) = \mathsf{complete}(T_\rho)$. Thus, $\mathsf{co}_{\rho'} = \mathsf{co}_\rho \cup \{(t', \mathsf{last}(h',j)) \mid t' \in T\}$. As $\mathsf{so} \cup \mathsf{wr} \subseteq \mathsf{co}_\rho$, $\mathsf{wr}' = \mathsf{wr}$ and $\mathsf{so}' = \mathsf{so} \cup \{(t', \mathsf{last}(h,j)) \mid \mathsf{ses}(t') = j\}$, we have $\mathsf{so}' \cup \mathsf{wr}' \subseteq \mathsf{co}_{\rho'}$; so $\mathsf{co}_{\rho'}$ is a commit order for $h'$. – insert: In this case, as $\mathsf{complete}(T_{\rho'}) = \mathsf{complete}(T_\rho)$, $\mathsf{co}_{\rho'} = \mathsf{co}_\rho$.
Hence, as $\mathsf{so}' = \mathsf{so}$ and $\mathsf{wr}' = \mathsf{wr}$, $\mathsf{so}' \cup \mathsf{wr}' \subseteq \mathsf{co}_{\rho'}$. – select, update, delete: Once again, as $\mathsf{complete}(T_{\rho'}) = \mathsf{complete}(T_\rho)$, $\mathsf{co}_{\rho'} = \mathsf{co}_\rho$. Note that $\mathsf{so}' = \mathsf{so}$, $\forall x \in \mathsf{Keys}$ s.t. $\mathsf{w}[x] = \bot$, $\mathsf{wr}'_x = \mathsf{wr}_x$ and $\forall x \in \mathsf{Keys}$ s.t. $\mathsf{w}[x] \neq \bot$, $\mathsf{wr}'_x = \mathsf{wr}_x \cup \{(\mathsf{w}[x], e)\}$. In the latter case, where $\mathsf{w}[x] \neq \bot$, we know that $\mathsf{tr}(\mathsf{w}[x]) \in \mathsf{complete}(T)$ thanks to the definitions on Figure A.3. By Equation 5, as $\mathsf{last}(h,j)$ is pending, we deduce that $(\mathsf{tr}(\mathsf{w}[x]), \mathsf{tr}(e)) \in \mathsf{co}_{\rho'}$. Therefore, as $\mathsf{so} \cup \mathsf{wr} \subseteq \mathsf{co}_\rho = \mathsf{co}_{\rho'}$, we conclude that $\mathsf{so}' \cup \mathsf{wr}' \subseteq \mathsf{co}_{\rho'}$. – commit, abort: In this case, $e = \mathsf{end}(\mathsf{last}(h,j))$, $\mathsf{co}_{\rho'}|_{T \setminus \{\mathsf{last}(h,j)\} \times T \setminus \{\mathsf{last}(h,j)\}} = \mathsf{co}_\rho|_{T \setminus \{\mathsf{last}(h,j)\} \times T \setminus \{\mathsf{last}(h,j)\}}$, $\mathsf{so}' = \mathsf{so}$ and $\mathsf{wr}' = \mathsf{wr}$.
Thus, to prove that $\mathsf{so}' \cup \mathsf{wr}' \subseteq \mathsf{co}_{\rho'}$ we only need to discuss $\mathsf{last}(h,j)$. By Lemma 1, $\mathsf{last}(h,j)$ is $\mathsf{so}' \cup \mathsf{wr}'$-maximal. Hence, we focus on proving that for any transaction $t'$ s.t. $(t', \mathsf{last}(h,j)) \in \mathsf{so}' \cup \mathsf{wr}'$, $(t', \mathsf{last}(h,j)) \in \mathsf{co}_{\rho'}$. Any such transaction $t'$ must be completed by Lemma 1. Moreover, by the definition on Figure A.1, we know that $\mathbf{T}(e) > \mathbf{T}(\mathsf{end}(t'))$, so $(t', \mathsf{last}(h,j)) \in \mathsf{co}_{\rho'}$ by Equation 5. Thus, $\mathsf{so}' \cup \mathsf{wr}' \subseteq \mathsf{co}_{\rho'}$. Lemma 4. For every total run $\rho$, $\mathsf{history}(\rho)$ is consistent. Proof. Let $\rho^T$ be a total run. By Lemma 2, $\mathsf{history}(\rho^T)$ is a full history. Thus, to prove that $\mathsf{history}(\rho^T)$ is consistent, by Definition 6, we need to show that there exists a commit order $\mathsf{co}$ that witnesses its consistency. We prove by induction on the length of a prefix $\rho$ of the total run $\rho^T$ that the relation $\mathsf{co}_\rho$ defined in Equation 5 is a commit order that witnesses $\mathsf{history}(\rho)$'s consistency. Note that by Lemma 3, the relation $\mathsf{co}_\rho$ is indeed a commit order. The base case, where $\rho$ is composed only of an initial configuration, is immediate as in such case $\mathsf{wr} = \emptyset$. Let us suppose that the property holds for every run of length at most $n$ and let $\rho'$ be a run of length $n+1$.
As $\rho'$ is a sequence of configurations, there exist a reachable run $\rho$ of length $n$, a session $j$ and a rule $\mathsf{r}$ s.t. $\mathsf{r} = \mathsf{rule}(\rho,j,\rho')$. Let us write $h = (T, \mathsf{so}, \mathsf{wr})$, $h' = (T', \mathsf{so}', \mathsf{wr}')$ and $e$ for $\mathsf{history}(\rho)$, $\mathsf{history}(\rho')$ and the last event in po-order belonging to $\mathsf{last}(h,j)$, respectively. By induction hypothesis, $\mathsf{co}_\rho$ is a commit order that witnesses $h$'s consistency. To conclude the inductive step, we show that for every possible rule $\mathsf{r}$ s.t. $\mathsf{r} = \mathsf{rule}(\rho,j,\rho')$, $\mathsf{co}_{\rho'}$ is a commit order witnessing $h'$'s consistency. By contradiction, let us suppose that $\mathsf{co}_{\rho'}$ does not witness $h'$'s consistency. Then, there exist a variable $x$, a read event $r$, an axiom $a \in \iota$ and two committed transactions $t_1, t_2$ s.t. $(t_1, r) \in \mathsf{wr}_x$, $t_2$ writes $x$, $\mathsf{vis}_a^{\mathsf{co}_{\rho'}}(t_2, r, x)$ holds in $h'$ but $(t_1, t_2) \in \mathsf{co}_{\rho'}$; where $\iota = \mathsf{I}(\mathsf{begin}(e))$. Thus, if we prove that such dependencies can be seen in $h$ using $\mathsf{co}_\rho$, we obtain a contradiction as $\mathsf{co}_\rho$ witnesses $h$'s consistency.
Note that, as shown during the proof of Lemma 3, $\mathsf{co}_{\rho'}|_{T \setminus \{\mathsf{last}(h,j)\} \times T \setminus \{\mathsf{last}(h,j)\}} = \mathsf{co}_\rho|_{T \setminus \{\mathsf{last}(h,j)\} \times T \setminus \{\mathsf{last}(h,j)\}}$; so we simply prove that $\mathsf{last}(h,j)$ cannot be $t_1$, $t_2$, $\mathsf{tr}(r)$ or any intermediate transaction causing $\mathsf{vis}_a^{\mathsf{co}_{\rho'}}(t_2, r, x)$ to hold in $h'$. – local, if-false, if-true: As $h = h'$ and $\mathsf{co}_{\rho'} = \mathsf{co}_\rho$, this case is impossible. – begin: In this case, $\mathsf{co}_{\rho'} = \mathsf{co}_\rho \cup \{(t', \mathsf{last}(h',j)) \mid t' \in T\}$. By Lemma 3, $\mathsf{last}(h',j)$ is $(\mathsf{so}' \cup \mathsf{wr}')$-maximal, so $\mathsf{last}(h',j) \neq t_1$. Moreover, $\mathsf{reads}(\mathsf{last}(h',j)) = \emptyset$, so $r \notin \mathsf{reads}(\mathsf{last}(h',j))$. In addition, $\mathsf{last}(h,j) \neq t_2$ as $\mathsf{writes}(\mathsf{last}(h,j)) = \emptyset$. • $a = $ Serializability, Prefix or Read Committed: In all cases, the axioms do not relate any other transactions besides $t_1$, $t_2$ and $\mathsf{tr}(r)$, so this case is impossible.
• $a = $ Conflict: In this case, $\mathsf{last}(h,j) \neq t_4$ as it is $\mathsf{co}_{\rho'}$-maximal; so this case is also impossible. – insert: In this case, $\mathsf{co}_{\rho'} = \mathsf{co}_\rho$. Moreover, $\mathsf{reads}(T') = \mathsf{reads}(T)$, $\mathsf{writes}(T') = \mathsf{writes}(T)$, $\mathsf{so}' = \mathsf{so}$ and $\mathsf{wr}' = \mathsf{wr}$. Thus, this case is also impossible. – select, update, delete: In this case, $\mathsf{co}_{\rho'} = \mathsf{co}_\rho$, $\mathsf{so}' = \mathsf{so}$, $\forall x \in \mathsf{Keys}$ s.t. $\mathsf{w}[x] = \bot$, $\mathsf{wr}'_x = \mathsf{wr}_x$ and $\forall x \in \mathsf{Keys}$ s.t. $\mathsf{w}[x] \neq \bot$, $\mathsf{wr}'_x = \mathsf{wr}_x \cup \{(\mathsf{w}[x], e)\}$. As $\mathsf{last}(h,j)$ is pending, by Lemma 3, $\mathsf{last}(h,j) \neq t_1$ as it is $(\mathsf{so}' \cup \mathsf{wr}')$-maximal. Moreover, as $\mathsf{writes}(\mathsf{last}(h,j)) = \emptyset$, $\mathsf{last}(h,j) \neq t_2$. Then, we analyze whether $\mathsf{last}(h,j)$ can be $\mathsf{tr}(r)$ (and thus, $r = e$) or any intermediate transaction. Note that for all three isolation levels we study, readFrom returns the value written by the transaction with the last commit timestamp for a given snapshot time.
Hence, as $(t_1, r) \in \mathsf{wr}_x$ and $(t_2, \mathsf{tr}(r)) \in \mathsf{co}_\rho$, we deduce that $\mathbf{T}_\rho(\mathsf{commit}(t_2)) > \mathbf{T}_\rho(\mathsf{begin}(\mathsf{last}(h,j)))$. We continue the analysis distinguishing one case per axiom: • $a = $ Serializability: As $\rho'$ is a prefix of a total run $\rho^T$, there exist runs $\hat\rho, \hat\rho'$ s.t. $\mathsf{rule}(\hat\rho, j', \hat\rho')$ is either commit or abort and both are prefixes of $\rho^T$; where $j'$ is the session of $\mathsf{tr}(r)$. Without loss of generality, we can assume that $\hat\rho$ and $\hat\rho'$ have minimal size; so $\mathsf{last}(\mathsf{history}(\hat\rho), j') = \mathsf{tr}(r)$. As $\rho^T$ is total and $\hat\rho'$ is a prefix of $\rho^T$, $\mathsf{validate}_\iota(\mathsf{history}(\hat\rho), \mathbf{T}_{\hat\rho'}, \mathsf{tr}(r))$ holds. By the monotonicity of $\mathbf{T}$, $\mathbf{T}_{\rho'} \subseteq \mathbf{T}_{\hat\rho'}$.
Hence, as $(t_1, r) \in \mathsf{wr}_x$ and $\mathbf{T}_{\hat\rho'}(\mathsf{commit}(t_1)) < \mathbf{T}_{\hat\rho'}(\mathsf{commit}(t_2))$, by the definitions of Figure A.2 and Figure A.3 we deduce that $\mathbf{T}_{\hat\rho'}(\mathsf{begin}(\mathsf{tr}(r))) < \mathbf{T}_{\hat\rho'}(\mathsf{commit}(t_2))$. However, as $\mathbf{T}_{\hat\rho'}(\mathsf{begin}(\mathsf{tr}(r))) < \mathbf{T}_{\hat\rho'}(\mathsf{commit}(t_2)) < \mathbf{T}_{\hat\rho'}(\mathsf{end}(\mathsf{tr}(r)))$, $\mathsf{tr}(r)$ reads $x$ and $t_2$ writes $x$; we conclude that $\mathsf{validateSER}(\mathsf{history}(\hat\rho'), \mathbf{T}_{\hat\rho'}, \mathsf{tr}(r))$ does not hold; so this case is impossible. • $a = $ Conflict: In this case, $\mathsf{last}(h,j)$ cannot be an intermediate transaction nor $\mathsf{tr}(r)$ as $\mathsf{writes}(\mathsf{last}(h,j)) = \emptyset$; so this case is also impossible. • $a = $ Prefix: In this case, $\mathsf{last}(h,j)$ cannot be an intermediate transaction as by Lemma 1, $\mathsf{last}(h,j)$ is $\mathsf{so}' \cup \mathsf{wr}'$-maximal. Thus, $\mathsf{last}(h,j)$ must be $\mathsf{tr}(r)$ and $e = r$. Therefore, there exists a transaction $t_4$ s.t. $(t_2, t_4) \in \mathsf{co}_{\rho'}^*$ and $(t_4, \mathsf{last}(h,j)) \in (\mathsf{so}' \cup \mathsf{wr}')$.
Note that $t_4$ must be committed and that $\mathbf{T}_{\rho'}(\mathsf{commit}(t_4)) < \mathbf{T}_{\rho'}(\mathsf{begin}(\mathsf{last}(h,j)))$. Hence, as $(t_2, t_4) \in \mathsf{co}_{\rho'}^*$, $(t_1, t_2) \in \mathsf{co}_{\rho'}$ and they are both committed, we deduce that $\mathbf{T}_{\rho'}(\mathsf{commit}(t_2)) < \mathbf{T}_{\rho'}(\mathsf{commit}(t_4)) < \mathbf{T}_{\rho'}(\mathsf{begin}(\mathsf{last}(h,j)))$. However, this contradicts that $\mathbf{T}_{\rho'}(\mathsf{commit}(t_2)) > \mathbf{T}_{\rho'}(\mathsf{begin}(\mathsf{last}(h,j)))$. Thus, this case is impossible. • $a = $ Read Committed: In this case, $\mathsf{last}(h,j)$ must be $\mathsf{tr}(r)$ and in particular, $e = r$. As depicted on Figure A.2 and Figure A.3, as $(t_1, r) \in \mathsf{wr}_x$, $\mathbf{S}_{\rho'}(e) \le \mathbf{T}_{\rho'}(t_1)$. However, as $(t_1, t_2) \in \mathsf{co}_{\rho'}$, $\mathbf{T}_{\rho'}(\mathsf{commit}(t_1)) < \mathbf{T}_{\rho'}(\mathsf{commit}(t_2))$. Hence, as $(t_2, e) \in (\mathsf{so} \cup \mathsf{wr}); \mathsf{po}^*$, there exists an event $e' \in \mathsf{last}(h,j)$ s.t.
$(e, e') \in \mathsf{po}^*$ and $\mathbf{T}_{\rho'}(\mathsf{commit}(t_2)) < \mathbf{T}_{\rho'}(e')$. However, by $\mathsf{snapshot}_{\mathsf{RC}}$'s definition, $\mathbf{S}(e') \le \mathbf{S}_{\rho'}(e)$; so we deduce that $\mathbf{T}_{\rho'}(\mathsf{commit}(t_1)) < \mathbf{T}_{\rho'}(\mathsf{commit}(t_2)) < \mathbf{S}_{\rho'}(e)$. This contradicts the definition of readFrom; so this case is impossible. – commit, abort: In this case, $\mathsf{co}_{\rho'}|_{T \setminus \{\mathsf{last}(h,j)\} \times T \setminus \{\mathsf{last}(h,j)\}} = \mathsf{co}_\rho|_{T \setminus \{\mathsf{last}(h,j)\} \times T \setminus \{\mathsf{last}(h,j)\}}$, $\mathsf{so}' = \mathsf{so}$ and $\mathsf{wr}' = \mathsf{wr}$. First, using that by induction hypothesis any prefix $\tilde\rho$ of $\rho$ is consistent using $\mathsf{co}_{\tilde\rho}$, let $\tilde\rho$ be the prefix of $\rho$ that introduces the read event $r$. As $\mathsf{history}(\tilde\rho) = (\tilde T, \tilde{\mathsf{so}}, \tilde{\mathsf{wr}})$ is consistent and $(t_1, r) \in \tilde{\mathsf{wr}}_x$, $t_1$ is committed.
Hence, by the definitions of readFrom and snapshot on Figure A.3 and the rules semantics on Figure A.2, we deduce that $\mathbf{T}_\rho(\mathsf{commit}(t_1)) > \mathbf{T}_\rho(\mathsf{begin}(t_2))$. Next, as $\mathsf{last}(h,j)$ is pending in $h$, it is $\mathsf{so} \cup \mathsf{wr}$-maximal. Therefore, it is also $\mathsf{so}' \cup \mathsf{wr}'$-maximal; so it cannot play the role of $t_1$. However, it can play the role of $t_2$, of $\mathsf{tr}(r)$, or the role of an intermediate transaction. Let us analyze case by case depending on the axiom: • $a = $ Serializability: Two sub-cases arise: * $\mathsf{last}(h,j) = t_2$: In this case, $t_2$ writes $x$ must hold. As $\rho'$ is a prefix of a total run $\rho^T$, there exist runs $\hat\rho, \hat\rho'$ s.t. $\mathsf{rule}(\hat\rho, j', \hat\rho')$ is either commit or abort and both are prefixes of $\rho^T$; where $j'$ is the session of $\mathsf{tr}(r)$. Without loss of generality, we can assume that $\hat\rho$ and $\hat\rho'$ have minimal size; so $\mathsf{last}(\mathsf{history}(\hat\rho), j') = \mathsf{tr}(r)$. As $\rho^T$ is total and $\hat\rho'$ is a prefix of $\rho^T$, $\mathsf{validate}_\iota(\mathsf{history}(\hat\rho), \mathbf{T}_{\hat\rho'}, \mathsf{tr}(r))$ holds.
Note that as $(t_1, t_2) \in \mathsf{co}_{\rho'}$ and they are both committed, $\mathbf{T}_{\hat\rho'}(\mathsf{commit}(t_1)) < \mathbf{T}_{\hat\rho'}(\mathsf{commit}(t_2))$. However, $\mathsf{tr}(r)$ reads $x$, $t_2$ writes $x$ and $\mathbf{T}_{\hat\rho'}(\mathsf{begin}(\mathsf{tr}(r))) < \mathbf{T}_{\hat\rho'}(\mathsf{commit}(\mathsf{last}(h,j))) < \mathbf{T}_{\hat\rho'}(\mathsf{commit}(\mathsf{tr}(r)))$; which contradicts that $\mathsf{validate}_\iota(\mathsf{history}(\hat\rho), \mathbf{T}_{\hat\rho'}, \mathsf{tr}(r))$ holds. In conclusion, this case is impossible. * $\mathsf{last}(h,j) = \mathsf{tr}(r)$: In such case, as $t_1$ and $t_2$ are committed, $(t_2, \mathsf{last}(h,j)) \in \mathsf{co}_\rho$ and $(t_1, t_2) \in \mathsf{co}_\rho$. Hence, this case is also impossible as $\mathsf{co}_\rho$ witnesses that $h$ is consistent. • $a = $ Prefix: In this case, there exists a transaction $t_4$ s.t. $(t_2, t_4) \in \mathsf{co}_{\rho'}^*$ and $(t_4, \mathsf{tr}(r)) \in \mathsf{so}' \cup \mathsf{wr}'$. As $\mathsf{last}(h,j)$ is pending in $h$, by Lemma 1, it is $(\mathsf{so} \cup \mathsf{wr})$-maximal.
Thus, as $\mathsf{so}' = \mathsf{so}$ and $\mathsf{wr}' = \mathsf{wr}$, $t_4 \neq \mathsf{last}(h,j)$. Moreover, as $(t_2, t_4) \in \mathsf{co}_{\rho'}^*$, $t_4$ is committed and $\mathsf{last}(h,j) \neq t_4$ is the $\mathsf{co}_{\rho'}$-maximal committed transaction, we get $t_2 \neq \mathsf{last}(h,j)$. Hence, $\mathsf{last}(h,j) = \mathsf{tr}(r)$. However, as $\mathsf{so}' = \mathsf{so}$, $\mathsf{wr}' = \mathsf{wr}$ and $\mathsf{co}_{\rho'}|_{T \setminus \{\mathsf{last}(h,j)\} \times T \setminus \{\mathsf{last}(h,j)\}} = \mathsf{co}_\rho|_{T \setminus \{\mathsf{last}(h,j)\} \times T \setminus \{\mathsf{last}(h,j)\}}$; we conclude that $(t_1, t_2) \in \mathsf{co}_\rho$, $(t_2, t_4) \in \mathsf{co}_\rho^*$ and $(t_4, \mathsf{last}(h,j)) \in \mathsf{so} \cup \mathsf{wr}$; which contradicts that $\mathsf{co}_\rho$ witnesses $h$'s consistency, so this case is impossible. • $a = $ Conflict: In this case, there exist a variable $y$ and a transaction $t_4$ s.t. $t_4$ writes $y$, $\mathsf{tr}(r)$ writes $y$, $(t_2, t_4) \in \mathsf{co}_{\rho'}^*$ and $(t_4, \mathsf{tr}(r)) \in \mathsf{co}_{\rho'}$.
As $\mathsf{last}(h,j)$ is the $\mathsf{co}_{\rho'}$-maximal transaction that is committed, $(t_2, \mathsf{tr}(r)), (t_4, \mathsf{tr}(r)) \in \mathsf{co}_{\rho'}$ and $\mathsf{writes}(\mathsf{tr}(r)) \neq \emptyset$, we deduce that $\mathsf{last}(h,j) \neq t_2, t_4$. Hence, $\mathsf{last}(h,j)$ must be $\mathsf{tr}(r)$ and $e = \mathsf{commit}(\mathsf{last}(h,j))$. On one hand, we observe that as $(t_4, \mathsf{last}(h,j)) \in \mathsf{co}_{\rho'}$ and they are both committed, $\mathbf{T}_{\rho'}(\mathsf{commit}(t_4)) < \mathbf{T}_{\rho'}(e)$. On the other hand, as $(t_2, t_4) \in \mathsf{co}_{\rho'}^*$ and $\mathbf{T}_{\rho'}(\mathsf{begin}(\mathsf{tr}(r))) < \mathbf{T}_{\rho'}(\mathsf{commit}(t_2))$; we conclude that $\mathbf{T}_{\rho'}(\mathsf{begin}(\mathsf{tr}(r))) < \mathbf{T}_{\rho'}(\mathsf{commit}(t_4))$. In conclusion, we obtain that $\mathsf{validate}_\iota(h', \mathbf{T}_{\rho'}, \mathsf{last}(h,j))$ does not hold due to the existence of $t_4$; which contradicts the hypothesis, so this case is impossible.
• $a = $ Read Committed: In this case, $r \neq e$ as $r$ is a read event and $e$ is not, and $(t_2, r) \in (\mathsf{so}' \cup \mathsf{wr}'); \mathsf{po}'^*$. Hence, as $\mathsf{so}' = \mathsf{so}$, $\mathsf{wr}' = \mathsf{wr}$ and $\mathsf{po}' = \mathsf{po} \cup \{(e', e) \mid e' \in \mathsf{last}(h,j)\}$; $(t_2, r) \in (\mathsf{so} \cup \mathsf{wr}); \mathsf{po}^*$. Finally, as $\mathsf{last}(h,j)$ is pending in $h$, $\mathsf{last}(h,j) \neq t_2$. Thus, as $\mathsf{co}_{\rho'}|_{T \setminus \{\mathsf{last}(h,j)\} \times T \setminus \{\mathsf{last}(h,j)\}} = \mathsf{co}_\rho|_{T \setminus \{\mathsf{last}(h,j)\} \times T \setminus \{\mathsf{last}(h,j)\}}$; we deduce that $(t_1, t_2) \in \mathsf{co}_\rho$. However, this contradicts that $\mathsf{co}_\rho$ witnesses $h$'s consistency; so this case is also impossible. As every possible case is impossible, we deduce that the hypothesis that $\mathsf{co}_{\rho'}$ does not witness $h'$'s consistency is false; so we conclude the proof of the inductive step. # B Proofs of Theorems 2 to 5 # B.1 Complexity analysis of Algorithms 1 and 2 (Proof of Theorem 2) For a given history $h$, Algorithm 1 computes necessary and sufficient conditions for an execution $\xi = (h, \mathsf{co})$ to be consistent.
It computes a bigger relation $\mathsf{pco}_{\mathrm{res}}$ that includes co and any other dependency between transactions that can be deduced from the isolation configuration. Algorithm 1 decides if co is a commit order witnessing consistency of the history (Lemma 5) and it runs in polynomial time (Lemma 7).

Lemma 5. For any full history $h = (T, \mathsf{so}, \mathsf{wr})$, the execution $\xi = (h, \mathsf{co})$ is consistent if and only if $\mathsf{pco}_{\mathrm{res}} = \mathrm{SATURATE}(h, \mathsf{co})$ is acyclic.

Proof. Let $h = (T, \mathsf{so}, \mathsf{wr})$ be a history, $\xi = (h, \mathsf{co})$ be an execution of $h$ and $\mathsf{pco}_{\mathrm{res}} = \mathrm{SATURATE}(h, \mathsf{co})$ be the relation obtained thanks to Algorithm 1.

Let us suppose that $\xi$ is consistent. As co is acyclic, it suffices to prove that $\mathsf{pco}_{\mathrm{res}} = \mathsf{co}$. By contradiction, let us suppose that $\mathsf{pco}_{\mathrm{res}} \neq \mathsf{co}$. As $\mathsf{co} \subseteq \mathsf{pco}_{\mathrm{res}}$ (line 2), there exist $t_1, t_2$ s.t. $(t_2, t_1) \in \mathsf{pco}_{\mathrm{res}} \setminus \mathsf{co}$. In such case, such a tuple must be added in line 8. Hence, there exist $x \in \mathsf{Keys}$, $r \in \mathsf{reads}(h)$ and $\mathsf{v} \in \mathsf{vis}(\mathsf{iso}(h)(\mathsf{tr}(r)))$ s.t. $t_1 = \mathsf{wr}_x^{-1}(r)$ and $\mathsf{v}(\mathsf{co})(t_2, r, x)$ holds in $h$. As $\xi$ is consistent, $(t_2, t_1) \in \mathsf{co}$; which is impossible. Hence, $\mathsf{pco}_{\mathrm{res}} = \mathsf{co}$.
Conversely, let us suppose that $\mathsf{pco}_{\mathrm{res}}$ is acyclic. By contradiction, let us suppose that $\xi$ is not consistent. Then, there exists a read event $r$ s.t. $C^{\mathsf{co}}_{\mathsf{iso}(h)(\mathsf{tr}(r))}(r)$ does not hold. Hence, by Equation (1), there exist $\mathsf{v} \in \mathsf{vis}(\mathsf{iso}(h)(\mathsf{tr}(r)))$, $x \in \mathsf{Keys}$ and $t_2 \in T$ s.t. $\mathsf{v}(\mathsf{co})(t_2, r, x)$ holds in $h$ but $(t_2, t_1) \notin \mathsf{co}$; where $t_1 = \mathsf{wr}_x^{-1}(r)$. In such case, Algorithm 1 ensures in line 8 that $(t_2, t_1) \in \mathsf{pco}_{\mathrm{res}}$. However, as $\mathsf{co} \subseteq \mathsf{pco}_{\mathrm{res}}$ (line 2), co is a total order and $\mathsf{pco}_{\mathrm{res}}$ is acyclic, $\mathsf{co} = \mathsf{pco}_{\mathrm{res}}$. Thus, $(t_2, t_1) \in \mathsf{co}$; which is impossible. Thus, $\xi$ is consistent. □

Lemma 6. Let $h = (T, \mathsf{so}, \mathsf{wr})$ be a history s.t. $\mathsf{iso}(h)$ is bounded by $k \in \mathbb{N}$, $x \in \mathsf{Keys}$ be a key, $t \in T$ be a transaction, $r$ be a read event, $\mathsf{pco} \subseteq T \times T$ be a partial order and $\mathsf{v}$ be a visibility relation in $\mathsf{vis}(\mathsf{iso}(h)(\mathsf{tr}(r)))$. Evaluating $\mathsf{v}(\mathsf{pco})(t, r, x)$ is in $\mathcal{O}(|h|^{k-2})$.

Proof. As $\mathsf{iso}(h)$ is bounded, there exists $k \in \mathbb{N}$ s.t. $|\mathsf{vis}(\mathsf{iso}(h)(t))| \leq k$. Hence, the number of quantifiers employed by a visibility relation is at most $k$ (and at least 3 according to Equation 1).
In addition, for each $\mathsf{v} \in \mathsf{vis}(\mathsf{iso}(h)(t))$, evaluating the condition $\mathsf{v}(\mathsf{pco})(t, r, x)$ can be modelled with an algorithm that employs $k-3$ nested loops, one per existential quantifier employed by $\mathsf{v}$, and that for each quantifier assignment evaluates the quantifier-free part of the formula. First, we observe that as the WrCons predicate only queries information about the $k-1$ quantified events, the size of such a sub-formula is in $\mathcal{O}(k)$. Next, we notice that as the WHERE predicate can be evaluated in constant time, for every key $x$ and event $w$, computing $\mathtt{value}_{\mathsf{wr}}(x, w)$ is in $\mathcal{O}(k \cdot |T|)$. Hence, as $k$ is constant, evaluating the quantifier-free formula of $\mathsf{v}$ is in $\mathcal{O}(|h|)$ and thus, evaluating $\mathsf{v}(\mathsf{pco})(t, r, x)$ is in $\mathcal{O}(|h|^{k-3} \cdot |h|) = \mathcal{O}(|h|^{k-2})$. □

Lemma 7. Let $h = (T, \mathsf{so}, \mathsf{wr})$ be a full history, $k$ be a bound on $\mathsf{iso}(h)$ and $\mathsf{pco} \subseteq T \times T$ be a partial order. Algorithm 1 runs in $\mathcal{O}(|h|^{k+1})$.

Proof. Let $h = (T, \mathsf{so}, \mathsf{wr})$ be a full history. Algorithm 1 can be decomposed in two blocks: lines 4-8 and lines 6-8. Hence, the cost of Algorithm 1 is in $\mathcal{O}(|\mathsf{Keys}| \cdot |\mathsf{events}(h)| \cdot |T| \cdot U)$; where $U$ is an upper bound on the cost of evaluating lines 6-8. On one hand, $|\mathsf{Keys}|$, $|\mathsf{events}(h)|$ and $|T|$ are all in $\mathcal{O}(|h|)$. On the other hand, as $\mathsf{iso}(h)$ is bounded by $k$, by Lemma 6, $U \in \mathcal{O}(|h|^{k-2})$.
Altogether, we deduce that Algorithm 1 runs in $\mathcal{O}(|h|^{k+1})$. □

Algorithm 2 generalizes the results for RA and RC in [7] to full histories with heterogeneous saturable isolation configurations, proving that such histories can be checked in polynomial time.

Theorem 2. Checking consistency of full histories with bounded saturable isolation configurations can be done in polynomial time.

We split the proof of Theorem 2 in two lemmas: Lemma 8, which proves the correctness of Algorithm 2, and Lemma 9, which ensures its polynomial-time behavior.

Lemma 8. For every full history $h = (T, \mathsf{so}, \mathsf{wr})$ whose isolation configuration is saturable, Algorithm 2 returns true if and only if $h$ is consistent.

Proof. Let $h = (T, \mathsf{so}, \mathsf{wr})$ be a full history whose isolation configuration is saturable and let pco be the relation defined in line 3 of Algorithm 2. On one hand, let us suppose that $h$ is consistent and let $\xi = (h, \mathsf{co})$ be a consistent execution of $h$. If we show that $\mathsf{pco} \subseteq \mathsf{co}$, we can conclude that Algorithm 2 returns true as co is acyclic. Let $(t_2, t_1) \in \mathsf{pco}$ and let us prove that $(t_2, t_1) \in \mathsf{co}$. As $\mathsf{so} \cup \mathsf{wr} \subseteq \mathsf{co}$ by the definition of commit order, we can assume that $(t_2, t_1) \in \mathsf{pco} \setminus (\mathsf{so} \cup \mathsf{wr})$. In such case, there must exist $x \in \mathsf{Keys}$, $e \in \mathsf{reads}(h)$ and $\mathsf{v} \in \mathsf{vis}(\mathsf{iso}(h)(\mathsf{tr}(e)))$ s.t. $t_2$ writes $x$ and $\mathsf{v}((\mathsf{so} \cup \mathsf{wr})^+)(t_2, e, x)$ holds.
As co is a commit order, $(\mathsf{so} \cup \mathsf{wr})^+ \subseteq \mathsf{co}$; hence, as $\mathsf{iso}(h)(\mathsf{tr}(e))$ is saturable, $\mathsf{v}(\mathsf{co})(t_2, e, x)$ also holds. Therefore, as co witnesses $h$'s consistency, we deduce that $(t_2, t_1) \in \mathsf{co}$. On the other hand, let us suppose that Algorithm 2 returns true. Then, pco must be acyclic by the condition in line 4. Therefore, as pco is acyclic, it can be extended to a total order co. Let us prove that the execution $\xi = (h, \mathsf{co})$ is consistent. Let $x \in \mathsf{Keys}$, $t_2 \in T$, $e \in \mathsf{reads}(h)$ and $\mathsf{v} \in \mathsf{vis}(\mathsf{iso}(h)(\mathsf{tr}(e)))$ s.t. $t_2$ writes $x$ and $\mathsf{v}(\mathsf{co})(t_2, e, x)$ holds. As Algorithm 2 returns true, we deduce that Algorithm 1 checks the condition at line 7. As $\mathsf{iso}(h)(\mathsf{tr}(e))$ is saturable, $\mathsf{v}((\mathsf{so} \cup \mathsf{wr})^+)(t_2, e, x)$ also holds. Thus, $(t_2, t_1) \in \mathsf{pco}$, where $t_1 = \mathsf{wr}_x^{-1}(e)$. As $\mathsf{pco} \subseteq \mathsf{co}$, $(t_2, t_1) \in \mathsf{co}$; so co witnesses $h$'s consistency. □

Lemma 9. For every full history $h$ whose isolation configuration is bounded, Algorithm 2 runs in polynomial time with respect to $|h|$.

Proof. Let $h = (T, \mathsf{so}, \mathsf{wr})$ be a full history whose isolation configuration is saturable. First, we observe that checking if a graph $G = (V, E)$ is acyclic can be easily done with a DFS in $\mathcal{O}(|V| + |E|)$.
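For concreteness, such a check can be sketched as follows (an illustrative iterative three-color DFS in Python; the function name and graph encoding are ours and are not part of Algorithm 2):

```python
def is_acyclic(vertices, edges):
    """Return True iff the directed graph G = (V, E) has no cycle.

    Iterative three-color DFS: WHITE = unvisited, GRAY = on the current
    DFS path, BLACK = fully explored. A GRAY successor is a back edge,
    i.e. a cycle. Runs in O(|V| + |E|).
    """
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {v: WHITE for v in vertices}
    succ = {v: [] for v in vertices}
    for u, v in edges:
        succ[u].append(v)
    for root in vertices:
        if color[root] != WHITE:
            continue
        color[root] = GRAY
        stack = [(root, iter(succ[root]))]
        while stack:
            node, it = stack[-1]
            for nxt in it:
                if color[nxt] == GRAY:   # back edge: cycle found
                    return False
                if color[nxt] == WHITE:
                    color[nxt] = GRAY
                    stack.append((nxt, iter(succ[nxt])))
                    break
            else:                        # all successors explored
                color[node] = BLACK
                stack.pop()
    return True

assert is_acyclic("abc", [("a", "b"), ("b", "c")])
assert not is_acyclic("abc", [("a", "b"), ("b", "c"), ("c", "a")])
```

In the context of Algorithm 2, the vertices are the transactions of $h$ and the edges are the pairs of so ∪ wr (line 2) or pco (line 4).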
Thus, the cost of checking acyclicity of both $\mathsf{so} \cup \mathsf{wr}$ (line 2) and pco (line 4) is in $\mathcal{O}(|T| + |T|^2) = \mathcal{O}(|T|^2) \subseteq \mathcal{O}(|h|^2)$. Furthermore, by Lemma 7, the cost of executing Algorithm 1 is in $\mathcal{O}(|h|^{k+1})$; where $k$ is a bound on $\mathsf{iso}(h)$. Thus, checking $h$'s consistency with Algorithm 2 can be done in polynomial time. □

# B.2 Proof of Theorem 3

Theorem 3. Checking consistency of bounded-width client histories with bounded isolation configuration stronger than RC and width$(h) \geq 3$ is NP-complete.

The proof of Theorem 3 is structured in two parts: proving that the problem is in NP and proving that it is NP-hard. The first part corresponds to Lemma 10, whose proof is analogous to that of Lemma 18. The second part, based on a reduction from the 1-in-3 SAT problem, corresponds to Lemmas 11, 13 and 17.

Lemma 10. The problem of checking consistency for a bounded-width client history $h$ with an isolation configuration stronger than RC and width$(h) \geq 3$ is in NP.

Proof. Let $h = (T, \mathsf{so}, \mathsf{wr})$ be a client history whose isolation configuration is stronger than RC. Guessing a witness $\overline{h}$ of $h$ and an execution $\xi = (\overline{h}, \mathsf{co})$ of $\overline{h}$ can be done in $\mathcal{O}(|\mathsf{Keys}| \cdot |\mathsf{events}(h)|^2 + |T|^2) \subseteq \mathcal{O}(|h|^3)$. By Lemma 5, checking if $\xi$ is consistent is equivalent to checking if $\mathrm{SATURATE}(\overline{h}, \mathsf{co})$ is an acyclic relation. As, by Lemma 7, Algorithm 1 requires polynomial time, we conclude the result. □

For showing NP-hardness, we will reduce 1-in-3 SAT to checking consistency.
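Before giving the formal construction, its shape can be illustrated with a small sketch (a Python toy under the construction's assumptions; the identifiers and the helper name are ours, and it only builds the transaction/round skeleton, not the events or the so/wr relations):

```python
def build_history_skeleton(clauses):
    """Transactions of h_phi, round by round, for a 1-in-3 SAT formula.

    `clauses` is a list of triples of variable names (the clauses of phi).
    Round 0 holds two transactions per variable (one per truth value);
    round i >= 1 holds three transactions, one per literal of clause i.
    """
    variables = sorted({v for clause in clauses for v in clause})
    rounds = {0: [f"1_{x}" for x in variables] + [f"0_{x}" for x in variables]}
    for i, _clause in enumerate(clauses, start=1):
        rounds[i] = [f"t_{i}^{j}" for j in range(3)]
    transactions = ["init"] + [t for r in sorted(rounds) for t in rounds[r]]
    return rounds, transactions

# phi = (x1 or x2 or x3) and (x1 or x3 or x4): n = 2 clauses, m = 4 variables
rounds, txs = build_history_skeleton([("x1", "x2", "x3"), ("x1", "x3", "x4")])
assert len(txs) == 1 + 2 * 4 + 3 * 2  # init + 2m + 3n transactions
```

Chaining the $1_i$'s, the $0_i$'s and the per-index columns $t_i^j$ into sessions, as the construction below does, yields a history of width 3, matching the bound in the statement of Theorem 3.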
Let $\varphi$ be a boolean formula with $n$ clauses and $m$ variables of the form $\varphi = \bigwedge_{i=1}^n (v_i^0 \lor v_i^1 \lor v_i^2)$; we construct a history $h_\varphi$ s.t. $h_\varphi$ is consistent if and only if $\varphi$ is satisfiable with exactly one variable per clause assigned the value true. The key idea is designing a history of width 3 that is stratified in rounds, one per clause. In each round, three transactions, one per variable in the clause, "compete" to be first in the commit order. The one that precedes the other two corresponds to the variable in $\varphi$ that is satisfied.

First, we define round 0, corresponding to the variables of $\varphi$. For every variable $x_i \in \mathsf{var}(\varphi)$, $1 \le i \le m$, we define a homonymous key $x_i$ that represents such variable. Abusing notation, we say that $x_i \in \mathsf{var}(\varphi)$. Then, we create two transactions $1_i$ and $0_i$ associated to the two states of $x_i$, 1 and 0. The former contains the event $\mathtt{INSERT}(\{x_i : 1, 1_i : 1\})$ while the latter contains $\mathtt{INSERT}(\{x_i : 0, 0_i : 1\})$. Both $1_i$ and $0_i$ also write on a special key, named $1_i$ and $0_i$ respectively, to indicate on the database that they have committed.

Next, we define rounds $1$ to $n$, representing each clause in $\varphi$. For each clause $C_i := (v_i^0 \vee v_i^1 \vee v_i^2)$, $1 \le i \le n$, we define the round $i$. Round $i$ is composed of three transactions $t_i^0$, $t_i^1$ and $t_i^2$, representing the choice of the variable among $v_i^0, v_i^1$ and $v_i^2$ that is selected in the clause $C_i$.
Transactions $t_i^j$ write on keys $v_i^j$ and $v_i^{j+1 \bmod 3}$ to preserve the structure of the clause $C_i$, as well as on the special homonymous key $t_i^j$ to indicate that such transaction has been executed, in a similar way as we did in round 0. For that, we impose that transactions $t_i^j$ are composed of an event $\mathtt{SELECT}(\lambda x : \mathsf{eq}(x, v_i^j, v_i^{j+1 \bmod 3}, v_i^{j+2 \bmod 3}))$ followed by an event $\mathtt{INSERT}(\{v_i^j : 0, v_i^{j+1 \bmod 3} : 1, t_i^j : -1\})$. The function $\mathsf{eq} : \mathsf{Rows} \times \mathsf{Keys}^3 \to \{\mathsf{true}, \mathsf{false}\}$ is described in Equation 6 and assumes that Rows contains two distinct values 0 and 1 and that there is a predicate $\mathsf{val} : \mathsf{Rows} \to \{0, 1\}$ that returns the value of a variable in the database. Intuitively, for any row $r$, if $a, b, c$ correspond to the three variables in a clause $C_i$ (possibly permuted), whenever $\neg\mathsf{eq}(r, a, b, c)$ holds, we deduce that the value assigned at key $a$ is 1 while on the other two keys the assigned value is 0. Moreover, whenever $r$ refers to any of the special keys such as $0_i$, $1_i$ or $t_i^j$, the predicate $\mathsf{eq}(r, a, b, c)$ always holds.
$$
\mathsf{eq}(r, a, b, c) = \begin{cases}
\mathsf{val}(r) \neq 1 & \text{if } \mathsf{key}(r) = a \\
\mathsf{val}(r) \neq 0 & \text{if } \mathsf{key}(r) = b \lor \mathsf{key}(r) = c \\
\mathsf{true} & \text{if } \mathsf{key}(r) \in \{t_i^j \mid 1 \leq i \leq n,\ 0 \leq j \leq 2\} \\
\mathsf{true} & \text{if } \mathsf{key}(r) \in \{1_i, 0_i \mid 1 \leq i \leq m\} \\
\mathsf{false} & \text{otherwise}
\end{cases}
$$

Finally, we add an initial transaction init that writes on every key the value 1. For that, we assume that Keys contains exactly one key per variable used in $\varphi$ as well as one key per aforementioned transaction. We denote by $T$ the set of all described transactions and by $\mathsf{round}(t)$ the round a transaction $t \in T$ belongs to. We describe the session order in the history $h_\varphi$ using an auxiliary relation $\overline{\mathsf{so}}$. We establish that $(1_i, 1_j), (0_i, 0_j) \in \overline{\mathsf{so}}$ for any pair of indices $i, j$, $1 \leq i < j \leq m$. We also enforce that $(t_i^j, t_{i+1}^j) \in \overline{\mathsf{so}}$, for every $1 \leq i < n$, $0 \leq j \leq 2$. Finally, we connect round 0 with round 1 by enforcing that $(1_m, t_1^0) \in \overline{\mathsf{so}}$ and $(0_m, t_1^1) \in \overline{\mathsf{so}}$. Then, we denote by so the transitive closure of $\overline{\mathsf{so}}$. Note that so is a union of disjoint total orders, so it is acyclic. For describing the write-read relation, we distinguish between two cases: keys associated to variables in $\varphi$ and keys associated to a transaction in $T$.
On one hand, for every key $x_i$, $1 \leq i \leq m$, we define $\mathsf{wr}_{x_i} = \emptyset$. On the other hand, for every key $x$ associated to a transaction $t_x$ and every read event $r$ in a transaction $t$, we impose that $(t_x, r) \in \mathsf{wr}_x$ if $\mathsf{round}(t_x) < \mathsf{round}(t)$, while otherwise we declare that $(\mathrm{init}, r) \in \mathsf{wr}_x$. Then, we denote $\mathsf{wr} = \bigcup_{x \in \mathsf{Keys}} \mathsf{wr}_x$ as well as $h_\varphi = (T, \mathsf{so}, \mathsf{wr})$. A full depiction of $h_\varphi$ can be found in Figure B.1.

We observe that imposing $\mathsf{wr}_x = \emptyset$ on every key $x \in \mathsf{var}(\varphi)$ ensures that, for any witness $\overline{h} = (T, \mathsf{so}, \overline{\mathsf{wr}})$ of $h_\varphi$, if $(w, r) \in \overline{\mathsf{wr}}$, then $\mathtt{WHERE}(r)(\mathtt{value}_{\overline{\mathsf{wr}}}(w, x)) = 0$. In particular, this implies that each transaction $t_i^j$ must read key $v_i^j$ from a transaction that writes 1 as value while it must read keys $v_i^{j+1 \bmod 3}$ and $v_i^{j+2 \bmod 3}$ from transactions that write 0 as value. Intuitively, this property shows that $\varphi$ is well-encoded in $h_\varphi$.

The proof is divided in four steps: Lemma 11 proves that $h_\varphi$ is a polynomial-size transformation of $\varphi$, Lemma 12 proves that $h_\varphi$ is indeed a history, and Lemmas 13 and 17 prove that $h_\varphi$ is consistent if and only if $\varphi$ is 1-in-3 satisfiable.

Lemma 11. $h_\varphi$ is a polynomial-size transformation on the length of $\varphi$.

Fig. B.1: Description of the history $h_\varphi$ from Theorem 3. Dashed edges only belong to a possible consistent witness of $h_\varphi$, where we assume $v_1^0 = x_k$. Transaction $t_1^0$ reads $v_1^0, v_1^1$ and $v_1^2$ from round 0, imposing some constraints on the transactions that write them. Due to axiom RC's definition, transaction $t_1^1$ must read $v_1^1$ from $t_1^0$ while transaction $t_1^2$ must read $v_1^1$ from $t_1^1$.

Proof. If $\varphi$ has $n$ clauses and $m$ variables, $h_\varphi$ employs $3n + 2m + 1$ transactions. As $m \leq 3n$, $|T| \in \mathcal{O}(n)$. The number of keys is $|\mathsf{Keys}| = m + |T|$, so $|\mathsf{Keys}| \in \mathcal{O}(n)$. As every transaction has at most two events, $|\mathsf{events}(h_\varphi)| \in \mathcal{O}(n)$. Moreover, $\mathsf{wr} \subseteq \mathsf{Keys} \times T \times T$ and $\mathsf{so} \subseteq T \times T$, so $|\mathsf{wr}| \in \mathcal{O}(n^3)$ and $|\mathsf{so}| \in \mathcal{O}(n^2)$. Thus, $h_\varphi$ is a polynomial transformation of $\varphi$. □

For proving that $h_\varphi$ is a history, by Definition 1 it suffices to prove that $\mathsf{so} \cup \mathsf{wr}$ is an acyclic relation. Indeed, by our choice of wr, for every key $x$, $\mathsf{wr}_x^{-1}$ is a partial function that, whenever it is defined, associates reads to writes on $x$. Hence, from Lemma 12 we conclude that $h_\varphi$ is a history.

Lemma 12. The relation $\mathsf{so} \cup \mathsf{wr}$ is acyclic.

Proof. For proving that $\mathsf{so} \cup \mathsf{wr}$ is acyclic, we reason by induction on the number of clauses.
In particular, we show that for every pair of transactions $t, t'$, if $\mathsf{round}(t') \le i$ and $(t, t') \in \mathsf{so} \cup \mathsf{wr}$, then $\mathsf{round}(t) \leq i$ and $(t', t) \notin \mathsf{so} \cup \mathsf{wr}$.

Base case: The base case refers to round 0, which contains init and the transactions $0_j, 1_j$, $1 \le j \le m$. We observe that transactions in round 0 do not contain any read event. Hence, $(t, t') \in \mathsf{so}$. In such case, the result immediately holds by construction of so.

Inductive case: Let us suppose that the induction hypothesis holds for every $1 \leq i \leq k \leq n$ and let us prove it also for $k + 1 \leq n$. If $\mathsf{round}(t') < k + 1$, then $\mathsf{round}(t') \le k$ and the result holds by induction hypothesis; so we can assume without loss of generality that $\mathsf{round}(t') = k + 1$. By construction of both so and wr, if $(t, t') \in \mathsf{so} \cup \mathsf{wr}$, then $\mathsf{round}(t) < \mathsf{round}(t')$. Hence, $\mathsf{round}(t) \le k$. By induction hypothesis on $t$, if $(t', t) \in \mathsf{so} \cup \mathsf{wr}$, then $\mathsf{round}(t') \le k < k + 1 = \mathsf{round}(t')$; which is impossible. Thus, we conclude that $(t', t) \notin \mathsf{so} \cup \mathsf{wr}$. □

Lemma 13. If $\varphi$ is 1-in-3 satisfiable then $h_\varphi$ is consistent.

Proof. Let $\alpha : \mathsf{Keys} \to \{0, 1\}$ be an assignment that 1-in-3 satisfies $\varphi$. To construct a witness of $h_\varphi$ we define a write-read relation $\overline{\mathsf{wr}}$ that extends wr and a total order on its transactions.
For that, we first define a total order co between the transactions in $T$. Below, we define two auxiliary relations $\hat{\mathbf{r}}$ and $\hat{\mathsf{b}}$ based on $\alpha$ that totally order the transactions that belong to the same round. For every clause $C_i$, $1 \le i \le n$, let $j_i$ be the unique index s.t. $\alpha(v_i^{j_i}) = 1$. Such index allows us to order the transactions in round $i$: $t_i^{j_i}$ precedes $t_i^{j_i+1 \bmod 3}$, which in turn precedes $t_i^{j_i+2 \bmod 3}$. Intuitively, $t_i^{j_i}$ must precede the other two transactions in the total order as $v_i^{j_i}$ is the variable that is satisfied. Then, we connect every pair of consecutive rounds thanks to relation $\hat{\mathsf{c}}_1$. For transactions in round 0, we enforce that transactions associated to the same variable are totally ordered using $\alpha$. In particular, for every $i$, $1 \le i \le m$, $0_i$ precedes $1_i$ in $\hat{\mathsf{b}}$ if and only if $\alpha(x_i) = 1$. Then, we connect every pair of tuples in $\hat{\mathsf{b}}$ with relation $\hat{\mathsf{c}}_2$. Finally, we connect init with transactions in round 0, as well as round 0 with round 1, thanks to relation $\hat{\mathsf{c}}_3$.
$$
\begin{array}{rl}
\hat{\mathbf{r}} &= \{ (t_i^{j_i}, t_i^{j_i+1 \bmod 3}), (t_i^{j_i+1 \bmod 3}, t_i^{j_i+2 \bmod 3}) \mid 1 \leq i \leq n,\ 0 \leq j_i \leq 2,\ \alpha(v_i^{j_i}) = 1 \} \\
\hat{\mathsf{b}} &= \{ (0_i, 1_i) \mid x_i \in \mathsf{Keys} \wedge \alpha(x_i) = 1 \} \cup \{ (1_i, 0_i) \mid x_i \in \mathsf{Keys} \wedge \alpha(x_i) = 0 \} \\
\hat{\mathsf{c}}_1 &= \{ (t_i^{j_i+2 \bmod 3}, t_{i+1}^{j_{i+1}}) \mid 1 \leq i < n,\ 0 \leq j_i, j_{i+1} \leq 2,\ \alpha(v_i^{j_i}) = 1 = \alpha(v_{i+1}^{j_{i+1}}) \} \\
\hat{\mathsf{c}}_2 &= \{ (1_i, 0_j), (1_i, 1_j), (0_i, 0_j), (0_i, 1_j) \mid 1 \leq i < j \leq m \} \\
\hat{\mathsf{c}}_3 &= \{ (\mathrm{init}, 0_1), (\mathrm{init}, 1_1) \} \cup \{ (1_m, t_1^{j_1}), (0_m, t_1^{j_1}) \mid 0 \leq j_1 \leq 2,\ \alpha(v_1^{j_1}) = 1 \}
\end{array}
$$

Let $\mathsf{co} = (\hat{\mathbf{r}} \cup \hat{\mathsf{c}}_1 \cup \hat{\mathsf{b}} \cup \hat{\mathsf{c}}_2 \cup \hat{\mathsf{c}}_3)^+$. The proof of Lemma 13 concludes thanks to Lemmas 14 and 15, Proposition 1 and Corollary 1. First, Lemma 14 proves that the relation co is a total order between transactions. Then, Lemma 15 shows that co allows us to define $\overline{h}$, a witness of $h_\varphi$.
And finally, with the aid of Proposition 1 and Corollary 1 we conclude that $\overline{h}$ is consistent; so it is a consistent witness of $h_\varphi$.

Lemma 14. The relation co is a total order.

Proof. For proving that co is a total order, we show by induction that if $(t, t') \in \mathsf{co}$ and $\mathsf{round}(t') \leq i$, then $\mathsf{round}(t) \leq i$ and $(t', t) \notin \mathsf{co}$.

– Base case: We observe that by construction of co, $t' \neq \mathrm{init}$. We prove the base case by a second induction: if there exists $i'$, $1 \le i' \le m$, s.t. $t' \in \{0_{i'}, 1_{i'}\}$ and $(t, t') \in \mathsf{co}$, then either $t = \mathrm{init}$ or there exists $i \leq i'$ s.t. $t \in \{0_i, 1_i\}$ and $(t', t) \notin \mathsf{co}$.

Base case: Let us suppose that $\alpha(x_1) = 1$, as the other case is symmetric. If $t = \mathrm{init}$, $(t', t) \notin \mathsf{co}$ as init is minimal in co. If not, then $t' = 1_1$ and $t = 0_1$. We conclude once more that $(t', t) \notin \mathsf{co}$ as $0_1$ only has init as a co-predecessor, which is co-minimal.

Induction hypothesis: Let us suppose that the claim holds for every $1 \le i' \le k < m$ and let us prove it also for $i' = k + 1 \leq m$. If $i' \le k$, we conclude the result by induction hypothesis; so we can assume that $i' = k + 1$. Moreover, as init is co-minimal, we can assume without loss of generality that $t \neq \mathrm{init}$. Thus, by construction of co, there must exist $i$, $1 \le i \le m$, s.t. $t \in \{0_i, 1_i\}$. In particular, $i \le i'$.
Thus, if $i < i'$ and $(t', t)$ were in co, by induction hypothesis on $t$ we would deduce that $i' \leq i < i'$; which is impossible. Hence, we can assume that $i = i'$. Let us assume that $\alpha(x_i) = 1$, as the other case is symmetric. Thus, we deduce that $t = 0_i$ and $t' = 1_i$. We observe that $(t', t) \notin \hat{\mathbf{r}} \cup \hat{\mathsf{c}}_1 \cup \hat{\mathsf{b}} \cup \hat{\mathsf{c}}_2 \cup \hat{\mathsf{c}}_3$. As $T$ is finite, if $(t', t) \in \mathsf{co}$, there would exist a transaction $t'' \neq t'$ s.t. $(t'', t) \in \hat{\mathbf{r}} \cup \hat{\mathsf{c}}_1 \cup \hat{\mathsf{b}} \cup \hat{\mathsf{c}}_2 \cup \hat{\mathsf{c}}_3$ and $(t', t'') \in \mathsf{co}$. But in such case, either $t'' = \mathrm{init}$ or there would exist an integer $i''$, $1 \le i'' < i \le m$, s.t. $t'' \in \{0_{i''}, 1_{i''}\}$; which is impossible by induction hypothesis. In conclusion, $(t', t) \notin \mathsf{co}$.

– Inductive case: Let us suppose that the induction hypothesis holds for every $1 \leq i \leq k \leq n$ and let us prove it also for $k + 1 \leq n$. Let thus $t, t'$ be transactions s.t. $\mathsf{round}(t') \leq k + 1$ and $(t, t') \in \mathsf{co}$.
If $\mathsf{round}(t') < k + 1$, then $\mathsf{round}(t') \le k$ and the result holds by induction hypothesis; so we can assume without loss of generality that $\mathsf{round}(t') = k + 1$. By construction of co, $\mathsf{round}(t) \le k + 1$. If $\mathsf{round}(t) \leq k$ and $(t', t) \in \mathsf{co}$, by induction hypothesis on $t$ we obtain that $\mathsf{round}(t') \leq k < k + 1 = \mathsf{round}(t')$; which is impossible. Thus, we can also assume without loss of generality that $\mathsf{round}(t) = k + 1$. In such case, we observe that $\hat{\mathsf{b}}$, $\hat{\mathsf{c}}_1$, $\hat{\mathsf{c}}_2$ and $\hat{\mathsf{c}}_3$ do not order transactions belonging to round $k + 1$. Hence, if $(t, t') \in \mathsf{co}$ and $(t', t) \in \mathsf{co}$, we deduce that $(t, t') \in \hat{\mathbf{r}}$ and $(t', t) \in \hat{\mathbf{r}}$. However, by construction of $\hat{\mathbf{r}}$, this is impossible, so we conclude once more that $(t', t) \notin \mathsf{co}$. □

Next, we construct a full history $\overline{h}$ that extends $h_\varphi$ using co. For every key $x$ and read event $r$, we define $w_x^r$ as follows:

$$
w_x^r = \max_{\mathsf{co}} \{ t \in T \mid t \text{ writes } x \land (t, r) \in \mathsf{co} \}
$$

Observe that $w_x^r$ is well-defined as co is a total order and init writes every key. For each key $x \in \mathsf{var}(\varphi)$, we define the relation $\overline{\mathsf{wr}}_x = \{ (w_x^r, r) \mid r \in \mathsf{reads}(h) \}$.
Then, we define the relation $\overline{\mathsf{wr}} = \bigcup_{x \in \mathsf{var}(\varphi)} \overline{\mathsf{wr}}_x \cup \mathsf{wr}$ as well as the history $\overline{h} = (T, \mathsf{so}, \overline{\mathsf{wr}})$. Lemma 15 proves that $\overline{h}$ is indeed a full history while Lemma 16 shows that $\overline{h}$ is a witness of $h_\varphi$.

Lemma 15. $\overline{h}$ is a full history.

Proof. For showing that $\overline{h}$ is a full history it suffices to show that $\mathsf{so} \cup \overline{\mathsf{wr}}$ is acyclic. As co is a total order and $\overline{\mathsf{wr}} \setminus \mathsf{wr} \subseteq \mathsf{co}$, proving that $\mathsf{so} \cup \mathsf{wr} \subseteq \mathsf{co}$ concludes the result.

First, we prove that $\mathsf{so} \subseteq \mathsf{co}$. Let $t, t'$ be transactions s.t. $(t, t') \in \mathsf{so}$. In such case, $\mathsf{round}(t) \le \mathsf{round}(t')$, and they only coincide if $\mathsf{round}(t) = \mathsf{round}(t') = 0$. Three cases arise:

– $\mathsf{round}(t) = \mathsf{round}(t') = 0$: As $(t, t') \in \hat{\mathsf{c}}_2$, we conclude that $(t, t') \in \mathsf{co}$.

– $\mathsf{round}(t), \mathsf{round}(t') > 0$: As $\mathsf{round}(t), \mathsf{round}(t') > 0$ and $\mathsf{round}(t) \le \mathsf{round}(t')$, by construction of so we deduce that $\mathsf{round}(t) < \mathsf{round}(t')$. As co is transitive, we can assume without loss of generality that $\mathsf{round}(t') = \mathsf{round}(t) + 1$.
Therefore, there exist $i, j$, $1 \le i < n$, $0 \le j \le 2$, s.t. $t = t_i^j$ and $t' = t_{i+1}^j$. Let $j_i, j_{i+1}$, $0 \le j_i, j_{i+1} \le 2$, be the integers s.t. $\alpha(v_i^{j_i}) = 1 = \alpha(v_{i+1}^{j_{i+1}})$. In such case, we know that $(t_i^j, t_i^{j_i + 2 \bmod 3}) \in \hat{\mathbf{r}}^*$, $(t_i^{j_i + 2 \bmod 3}, t_{i+1}^{j_{i+1}}) \in \hat{\mathsf{c}}_1$ and $(t_{i+1}^{j_{i+1}}, t_{i+1}^{j}) \in \hat{\mathbf{r}}^*$. Hence, as co is transitive, $(t, t') \in \mathsf{co}$.

$- \ \mathtt{round}(t) = 0$, $\mathtt{round}(t') > 0$: In this case, as $\mathtt{round}(t) = 0$, there exists $i$, $1 \le i \le m$, s.t. $x_i \in \mathsf{var}(\varphi)$ and $t \in \{0_i, 1_i\}$. We assume without loss of generality that $t = 1_i$, as the other case is symmetric. In addition, as $\mathtt{round}(t') > 0$ and $(t, t') \in \mathsf{so}$, there exists $i$, $1 \le i \le n$, s.t. $t' = t_i^0$. We rely on the two previously proven cases to deduce the result: as $(0_i, 0_m) \in \mathsf{so} \subseteq \mathsf{co}$, $(0_m, t_0^{j_0}) \in \hat{\mathsf{c}}_3$, $(t_0^{j_0}, t_0^0) \in \hat{\mathbf{r}}^*$ and $(t_0^0, t_i^0) \in \mathsf{so} \subseteq \mathsf{co}$, we conclude that $(t, t') \in \mathsf{co}$.

Next, we prove that $\mathsf{wr} \subseteq \mathsf{co}$. Let $r$ be a read event and $w$ be a transaction s.t. $(w, r) \in \mathsf{wr}$.
Then, there exist $i, i'$, $1 \le i < i' \le n$, and $j, j'$, $0 \le j, j' \le 2$, s.t. $w = t_i^j$ and $\mathsf{tr}(r) = t_{i'}^{j'}$. Let $j_{i'-1}, j_{i'}$, $0 \le j_{i'-1}, j_{i'} \le 2$, be the integers s.t. $\alpha(v_{i'-1}^{j_{i'-1}}) = 1 = \alpha(v_{i'}^{j_{i'}})$. In such case, we know that $(t_i^j, t_{i'-1}^j) \in \mathsf{so}^*$, $(t_{i'-1}^j, t_{i'-1}^{j_{i'-1} + 2 \bmod 3}) \in \hat{\mathbf{r}}^*$, $(t_{i'-1}^{j_{i'-1} + 2 \bmod 3}, t_{i'}^{j_{i'}}) \in \hat{\mathsf{c}}_1$ and $(t_{i'}^{j_{i'}}, t_{i'}^{j'}) \in \hat{\mathbf{r}}^*$. As $\mathsf{so} \subseteq \mathsf{co}$ and co is transitive, we conclude that $(w, r) \in \mathsf{co}$. □

Having shown that $\overline{h}$ is a full history, we now show that it is a witness of $h_\varphi$ and that it also witnesses $h_\varphi$'s consistency.

Lemma 16. The history $\overline{h}$ is a witness of $h_\varphi$.

Proof. By Lemma 15, $\overline{h}$ is a full history. Hence, for proving that $\overline{h}$ is a witness of $h_\varphi$, we need to show that for every key $x \in \mathsf{Keys}$ and every read $r$, if $\mathsf{wr}_x^{-1}(r)$ is undefined, then $\mathtt{WHERE}(r)(\mathtt{value}_{\overline{\mathsf{wr}}}(w_x^r, x)) = 0$.
Note that by construction of $h_\varphi$, such cases coincide with $x \in \mathsf{var}(\varphi)$. In addition, we observe that if $r$ is a read event, there exist indices $1 \le i \le n$, $0 \le j \le 2$, s.t. $r \in t_i^j$. Thus, by Equation 6, we only need to prove that $\mathtt{WHERE}(r)(\mathtt{value}_{\overline{\mathsf{wr}}}(w_x^r, x)) = 0$ whenever $x$ is $v_i^0, v_i^1$ or $v_i^2$.

We prove as an intermediate step that in each round, every key has the same value in $\overline{h}$. For every round $i$ and key $x \in \mathsf{var}(\varphi)$, we consider the transaction $t_x^i = \max_{\mathsf{co}}\{t \mid t \text{ writes } x \land \mathtt{round}(t) \le i\}$. We prove by induction on the round number that $\mathtt{value}_{\overline{\mathsf{wr}}}(t_x^i, x) = \mathtt{value}_{\overline{\mathsf{wr}}}(t_x^0, x) = (x, \alpha(x))$.

– Base case: The base case, $i = 0$, is trivial. Note that in this case $\mathtt{value}_{\overline{\mathsf{wr}}}(t_x^0, x) = (x, \alpha(x))$.

– Inductive case: Let us assume that $\mathtt{value}_{\overline{\mathsf{wr}}}(t_x^{i-1}, x) = \mathtt{value}_{\overline{\mathsf{wr}}}(t_x^0, x)$ and let us prove that $\mathtt{value}_{\overline{\mathsf{wr}}}(t_x^i, x) = (x, \alpha(x))$. Note that in round $i$ only the keys $v_i^0, v_i^1$ and $v_i^2$ are written; so for every other key $x$, $t_x^i = t_x^{i-1}$ and, by the induction hypothesis, $\mathtt{value}_{\overline{\mathsf{wr}}}(t_x^i, x) = (x, \alpha(x))$.
Let thus $j$, $0 \le j \le 2$, be s.t. $\alpha(v_i^j) = 1$. In this case, $t^i_{v_i^j} = t^i_{v_i^{j+2 \bmod 3}} = t_i^{j+2 \bmod 3}$ and $t^i_{v_i^{j+1 \bmod 3}} = t_i^{j+1 \bmod 3}$. Hence, we can conclude the inductive step as:
$$
\begin{array}{rcl}
\mathtt{value}_{\overline{\mathsf{wr}}}(t_i^{j+2 \bmod 3}, v_i^j) & = & (v_i^j, 1) = (v_i^j, \alpha(v_i^j)) \\
\mathtt{value}_{\overline{\mathsf{wr}}}(t_i^{j+1 \bmod 3}, v_i^{j+1 \bmod 3}) & = & (v_i^{j+1 \bmod 3}, 0) = (v_i^{j+1 \bmod 3}, \alpha(v_i^{j+1 \bmod 3})) \\
\mathtt{value}_{\overline{\mathsf{wr}}}(t_i^{j+2 \bmod 3}, v_i^{j+2 \bmod 3}) & = & (v_i^{j+2 \bmod 3}, 0) = (v_i^{j+2 \bmod 3}, \alpha(v_i^{j+2 \bmod 3}))
\end{array}
$$

We can now prove that $\overline{h}$ is a witness of $h_\varphi$. Let $i, j$, $1 \le i \le n$, $0 \le j \le 2$, be indices s.t. $\alpha(v_i^j) = 1$. For simplifying notation, we denote by $t_0, t_1, t_2$ the transactions $t_i^j$, $t_i^{j+1 \bmod 3}$ and $t_i^{j+2 \bmod 3}$ respectively. We also denote by $r_0, r_1, r_2$ the read events that belong to $t_0, t_1$ and $t_2$ respectively, as well as by $v_0, v_1, v_2$ the keys associated with $t_0, t_1$ and $t_2$ respectively. For every key $x \ne v_0, v_1, v_2$ and for every transaction $t$ that writes $x$, $\mathtt{WHERE}(r_j)(\mathtt{value}_{\overline{\mathsf{wr}}}(t, x)) = 0$, $0 \le j \le 2$; so we can focus only on the keys $v_0, v_1$ and $v_2$.
Three cases arise:

– Transaction $t_0$: Let $x$ be a key in $\{v_0, v_1, v_2\}$. By construction of $h_\varphi$ and co, $t_0$ reads $x$ from $t_x^{i-1}$. As proved before, $\mathtt{value}_{\overline{\mathsf{wr}}}(t_x^{i-1}, x) = (x, \alpha(x))$, and $\alpha(x) = 1$ if and only if $x = v_0$. Hence, as $\mathtt{WHERE}(r_0)(\mathtt{value}_{\overline{\mathsf{wr}}}(t_x^{i-1}, x)) = 0$, we conclude that $\mathtt{WHERE}(r_0)(\mathtt{value}_{\overline{\mathsf{wr}}}(w_x^{r_0}, x)) = 0$.

– Transaction $t_1$: In this case, $t_1$ reads $v_2$ from $t_{v_2}^{i-1}$ and it reads $v_0$ and $v_1$ from $t_0$. On one hand, $\mathtt{value}_{\overline{\mathsf{wr}}}(t_{v_2}^{i-1}, v_2) = (v_2, \alpha(v_2)) = (v_2, 0)$. Thus, as $\mathtt{WHERE}(r_1)(\mathtt{value}_{\overline{\mathsf{wr}}}(t_{v_2}^{i-1}, v_2)) = 0$, we conclude that $\mathtt{WHERE}(r_1)(\mathtt{value}_{\overline{\mathsf{wr}}}(w_{v_2}^{r_1}, v_2)) = 0$. On the other hand, by construction of $h_\varphi$, $\mathtt{WHERE}(r_1)(\mathtt{value}_{\overline{\mathsf{wr}}}(t_0, v_0)) = \mathtt{WHERE}(r_1)(\mathtt{value}_{\overline{\mathsf{wr}}}(t_0, v_1)) = 0$. Thus, the result holds.

– Transaction $t_2$: In this case, $t_2$ reads $v_0$ from $t_0$, and $v_1$ and $v_2$ from $t_1$.
By construction of $h_\varphi$, $\mathtt{WHERE}(r_2)(\mathtt{value}_{\overline{\mathsf{wr}}}(t_0, v_0))$, $\mathtt{WHERE}(r_2)(\mathtt{value}_{\overline{\mathsf{wr}}}(t_1, v_1))$ and $\mathtt{WHERE}(r_2)(\mathtt{value}_{\overline{\mathsf{wr}}}(t_1, v_2))$ are all equal to $0$; so we conclude the result. □

We conclude the proof by showing that the execution $\xi = (\overline{h}, \mathsf{co})$ is a consistent execution of $h_\varphi$. We observe that by construction of $\overline{\mathsf{wr}}$ and co, $\overline{h}$ satisfies SER using co. Corollary 1 proves that $\mathsf{iso}(h_\varphi)$ is weaker than SER, which allows us to conclude that $\overline{h}$ satisfies $\mathsf{iso}(h_\varphi)$; in other words, that $\overline{h}$ is consistent.

Proposition 1. Let $h = (T, \mathsf{so}, \mathsf{wr})$ be a full history, $\xi = (h, \mathsf{co})$ be an execution of $h$, $r$ be a read event, $t_2$ be a transaction distinct from $\mathsf{tr}(r)$, $x$ be a key and $\mathsf{v} \in \mathsf{vis}(\mathsf{iso}(h)(\mathsf{tr}(r)))$. If $\mathsf{v}(t_2, r, x)$ holds in $\xi$, then $(t_2, \mathsf{tr}(r)) \in \mathsf{co}$.

Proof. The proposition is the result of an immediate induction on the definition of $\mathsf{v}$. The base case is $\mathsf{po}, \mathsf{so}, \mathsf{wr} \subseteq \mathsf{co}$, which holds by definition. The inductive case follows from the operators employed in Equation 3: union, composition and transitive closure of relations, which are monotonic. □

As a consequence of Proposition 1 and the definition of the Serializability axiom, we obtain the following result.

Corollary 1. Any isolation level is weaker than SER.
Lemma 17. If $h_\varphi$ is consistent then $\varphi$ is 1-in-3 satisfiable.

Proof. If $h_\varphi$ is consistent, there exists a consistent witness of $h_\varphi$, $\overline{h} = (T, \mathsf{so}, \overline{\mathsf{wr}})$. As $\overline{h}$ is consistent and $\mathsf{iso}(h)$ is stronger than RC, there exists a consistent execution of $\overline{h}$, $\xi = (\overline{h}, \mathsf{co})$. Let $\alpha_h : \mathsf{var}(\varphi) \to \{0, 1\}$ be s.t. for every variable $v_j$, $1 \le j \le m$, $\alpha_h(v_j) = 1$ if and only if $(0_j, 1_j) \in \mathsf{co}$. We show that $\varphi$ is 1-in-3 satisfiable using $\alpha_h$ (written simply $\alpha$ below).

As an intermediate step, we prove that $\alpha$ describes the value read by any transaction in $\overline{h}$. For every $i$, $0 \le i \le n$, and key $x \in \mathsf{var}(\varphi)$, let $t_x^i = \max_{\mathsf{co}}\{t \mid t \text{ writes } x \land \mathtt{round}(t) \le i\}$. We prove by induction that for every $i$, $0 \le i \le n$: (1) $\mathtt{value}_{\overline{\mathsf{wr}}}(t_x^i, x) = (x, \alpha(x))$, (2) for any read event $r$ from a transaction $t$ s.t.
$\mathtt{round}(t) \le i$, if $(w, r) \in \overline{\mathsf{wr}}_x$, then $w$ coincides with $\max_{\mathsf{co}}\{t \in T \mid t \text{ writes } x \land (t, \mathsf{tr}(r)) \in \mathsf{co}\}$, and (3) if $i > 0$, $\alpha(v_i^j) = 1$ if and only if $(t_i^j, t_i^{j+1 \bmod 3}) \in \mathsf{co}$ and $(t_i^{j+1 \bmod 3}, t_i^{j+2 \bmod 3}) \in \mathsf{co}$.

– Base case: Let $j$, $1 \le j \le m$, be the integer s.t. $x = v_j$. In such case, (1) holds as $t_x^0 = 1_j$ if and only if $\alpha(v_j) = 1$; and in such case, $\mathtt{value}_{\overline{\mathsf{wr}}}(t_x^0, v_j) = (v_j, \alpha(v_j))$. Also, (2) trivially holds as there is no read event in a transaction belonging to round 0. Finally, (3) also trivially holds as $i = 0$.

– Inductive case: We assume that (1), (2) and (3) hold for round $i - 1$ and prove them for round $i$. Let $j$ be the index of the co-minimal transaction among $t_i^0, t_i^1, t_i^2$. We denote by $t_0, t_1, t_2$ the transactions $t_i^j$, $t_i^{j+1 \bmod 3}$ and $t_i^{j+2 \bmod 3}$ respectively, by $r_0, r_1, r_2$ the unique read events in $t_0, t_1$ and $t_2$ respectively, and by $v_0, v_1$ and $v_2$ the keys associated with $t_0, t_1$ and $t_2$ respectively. Let thus $x \in \mathsf{var}(\varphi)$ be a key, $t$ be a transaction among $t_0, t_1, t_2$ and let $w_x^t$ be a transaction s.t. $(w_x^t, t) \in \overline{\mathsf{wr}}_x$.
As $\mathtt{round}(t_x^{i-1}) < \mathtt{round}(t)$, $(t_x^{i-1}, t) \in \mathsf{co}$. Hence, as $\overline{h}$ satisfies RC, either $w_x^t = t_x^{i-1}$ or $\mathtt{round}(w_x^t) = i$.

First, we prove (3) by analyzing $t_0$. As $(t_0, t_1) \in \mathsf{co}$, $(t_0, t_2) \in \mathsf{co}$ and $\overline{\mathsf{wr}} \subseteq \mathsf{co}$, we deduce that $w_x^{t_0} = t_x^{i-1}$. In such case, as (1) holds by the induction hypothesis and $\mathtt{WHERE}(r_0)(\mathtt{value}_{\overline{\mathsf{wr}}}(t_x^{i-1}, x)) = 0$, we conclude that $\alpha(x) = 1$ if $x = v_0$ and $\alpha(x) = 0$ if $x = v_1, v_2$.

For proving (2), we analyze three cases depending on $t$:

• $t = t_0$: As proved before, if $t = t_0$, then $w_x^t = t_x^{i-1}$. By definition of $t_x^{i-1}$, (2) holds.

• $t = t_1$: As $t_0$ only writes $v_0$ and $v_1$, and $(t_1, t_2) \in \mathsf{co}$, we deduce that for every key $x \ne v_0, v_1$, $w_x^{t_1} = t_x^{i-1}$; which immediately implies (2). As (3) holds for round $i$, we know that $\alpha(v_0) = 1$ and $\alpha(v_1) = 0$. Thus, if $x = v_0, v_1$, $\mathtt{WHERE}(r_1)(\mathtt{value}_{\overline{\mathsf{wr}}}(t_x^{i-1}, x)) = 1$; so $(t_x^{i-1}, t_1) \notin \overline{\mathsf{wr}}_x$. In conclusion, $w_x^{t_1} = t_0$; which implies (2) by definition of $t_0$.
• $t = t_2$: As $t_0$ and $t_1$ only write $v_0, v_1$ and $v_2$, we deduce that for every other key, $w_x^{t_2} = t_x^{i-1}$; which implies (2). Otherwise, we analyze the three subcases that arise:

∗ $x = v_2$: In this case, $t_0$ does not write $v_2$; so there are only two options left, $t_x^{i-1}$ and $t_1$. As (3) holds for round $i$, $\alpha(v_2) = 0$. Thus, as by the induction hypothesis (1) holds for round $i - 1$, $\mathtt{value}_{\overline{\mathsf{wr}}}(t_{v_2}^{i-1}, v_2) = (v_2, 0)$ and hence $\mathtt{WHERE}(r_2)(\mathtt{value}_{\overline{\mathsf{wr}}}(t_{v_2}^{i-1}, v_2)) = 1$. Therefore, $w_{v_2}^{t_2}$ must be $t_1$; which implies (2).

∗ $x = v_0$: Once again, there are only two possible options, as $t_1$ does not write $v_0$. As (3) holds for round $i$, $\alpha(v_0) = 1$. Thus, as by the induction hypothesis (1) holds for round $i - 1$, $\mathtt{value}_{\overline{\mathsf{wr}}}(t_{v_0}^{i-1}, v_0) = (v_0, 1)$ and hence $\mathtt{WHERE}(r_2)(\mathtt{value}_{\overline{\mathsf{wr}}}(t_{v_0}^{i-1}, v_0)) = 1$. Therefore, $w_{v_0}^{t_2}$ must be $t_0$; which implies (2).

∗ $x = v_1$: We observe in this case that $\mathtt{value}_{\overline{\mathsf{wr}}}(t_0, v_1) = (v_1, 1)$; so $\mathtt{WHERE}(r_2)(\mathtt{value}_{\overline{\mathsf{wr}}}(t_0, v_1)) = 1$. Therefore, there are only two possible options, $t_1$ and $t_x^{i-1}$.
As $h$ satisfies RC and $(t_1, t_2) \in \overline{\mathsf{wr}}_{v_2}$, if $(t_x^{i-1}, t_2) \in \overline{\mathsf{wr}}_{v_1}$, we deduce that $(t_1, t_x^{i-1}) \in \mathsf{co}$. However, as $\mathtt{round}(t_x^{i-1}) < \mathtt{round}(t_1)$, $(t_x^{i-1}, t_1) \in \mathsf{co}$; which is impossible as co is a strict total order. Thus, we conclude that $w_{v_1}^{t_2} = t_1$; which implies (2).

For proving (1), we observe that for every key $x \ne v_0, v_1, v_2$, $t_x^i = t_x^{i-1}$, and by the induction hypothesis we conclude that $\mathtt{value}_{\overline{\mathsf{wr}}}(t_x^i, x) = (x, \alpha(x))$. Moreover, as $(t_0, t_1) \in \mathsf{co}$ and $(t_1, t_2) \in \mathsf{co}$, $t_{v_0}^i = t_{v_2}^i = t_2$ and $t_{v_1}^i = t_1$. In addition, as (3) holds, $\alpha(v_0) = 1$ and $\alpha(v_1) = \alpha(v_2) = 0$. This allows us to conclude (1) also for the keys $v_0, v_1$ and $v_2$; so the inductive step is proven.

After proving (1), (2) and (3), we can conclude that $\varphi$ is 1-in-3 satisfiable. For every round $i$, we observe that by (1), $\mathtt{value}_{\overline{\mathsf{wr}}}(t_x^i, x) = (x, \alpha(x))$. Moreover, as (2) holds, $(t_x^i, t_0^i) \in \overline{\mathsf{wr}}_x$; where $t_0^i$ is the first transaction in co among the transactions in round $i$.
As $\overline{h}$ is a witness of $h_\varphi$, $\mathtt{WHERE}(r_0^i)(\mathtt{value}_{\overline{\mathsf{wr}}}(t_x^i, x)) = 0$; where $r_0^i$ is the read event of $t_0^i$. Hence, exactly one variable among $v_i^0, v_i^1$ and $v_i^2$ has $1$ as image by $\alpha$. Therefore, $\varphi$ is 1-in-3 satisfiable. □

# B.3 Proof of Theorem 4

Theorem 4. Checking consistency of partial observation histories with bounded isolation configurations stronger than RC is NP-complete.

The proof is divided in two parts: proving that the problem is in NP and proving that it is NP-hard. The first part, corresponding to Lemma 18, is straightforward: for any client history, we simply guess a suitable witness and a total order on its transactions, and deduce its consistency by applying Definition 6. The second part, corresponding to Lemmas 24 and 25, is more involved. We use a novel reduction from 3-SAT: we encode a boolean formula $\varphi$ in a history $h_\varphi$ s.t. $h_\varphi$ is consistent iff $\varphi$ is satisfiable.

We first prove that the problem is indeed in NP (Lemma 18).

Lemma 18. The problem of checking consistency for a client history with an isolation configuration stronger than RC is in NP.

Proof. Let $h = (T, \mathsf{so}, \mathsf{wr})$ be a client history whose isolation configuration is stronger than RC. Guessing a witness $\overline{h}$ of $h$ and an execution $\xi = (\overline{h}, \mathsf{co})$ of $h$ can be done in $\mathcal{O}(|\mathsf{Keys}| \cdot |\mathsf{events}(h)|^2 + |T|^2) \subseteq \mathcal{O}(|h|^3)$. By Lemma 5, checking if $\xi$ is consistent is equivalent to checking if $\mathrm{SATURATE}(\overline{h}, \mathsf{co})$ is an acyclic relation.
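As an aside, the acyclicity check invoked in the NP-membership argument (though not the paper's Algorithm 1 itself) can be realized in polynomial time with a standard topological sort. The following sketch, with hypothetical transaction names, illustrates it:

```python
# Illustrative sketch: checking that a binary relation over transactions
# is acyclic via Kahn's topological sort (polynomial in nodes + edges).
from collections import defaultdict, deque

def is_acyclic(nodes, edges):
    """edges: set of ordered pairs (u, v). True iff the relation has no cycle."""
    indeg = {n: 0 for n in nodes}
    succ = defaultdict(list)
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    queue = deque(n for n in nodes if indeg[n] == 0)
    visited = 0
    while queue:
        n = queue.popleft()
        visited += 1
        for m in succ[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                queue.append(m)
    # Every node is processed exactly when no cycle blocks the order.
    return visited == len(nodes)

assert is_acyclic({"t1", "t2", "t3"}, {("t1", "t2"), ("t2", "t3")})
assert not is_acyclic({"t1", "t2"}, {("t1", "t2"), ("t2", "t1")})
```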
As, by Lemma 7, Algorithm 1 requires polynomial time, we conclude the result. □

For showing NP-hardness, we reduce 3-SAT to the problem of checking consistency of a partial observation history. Note that the problem is NP-hard in the case where the isolation configuration is not saturable, as discussed in Section 4.2, using the results in [7]. Therefore, we only prove it for the case where the isolation configuration is saturable.

Let $\varphi = \bigwedge_{i=1}^n C_i$ be a CNF formula with at most 3 literals per clause (i.e. $C_i = l_i^1 \lor l_i^2 \lor l_i^3$). Without loss of generality, we can assume that each clause contains exactly three literals and that each literal in a clause refers to a different variable. We denote by $\mathtt{var}(l_i^j)$ the variable that the literal $l_i^j$ employs and by $\mathsf{Vars}(\varphi)$ the set of all variables of $\varphi$. We design a history $h_\varphi$ with an arbitrary saturable isolation configuration encoding $\varphi$; thus, checking $\varphi$'s satisfiability reduces to checking $h_\varphi$'s consistency. Note that as $\mathsf{iso}(h_\varphi)$ is saturable, $h_\varphi$'s consistency is equivalent to the acyclicity of pco, where $\mathsf{pco} = \mathrm{SATURATE}(h_\varphi, (\mathsf{so} \cup \mathsf{wr})^+)$. We use the latter characterization of consistency for encoding the formula $\varphi$ in $h_\varphi$.

First of all, we consider every literal in $\varphi$ independently. This means that even if two literals $l_i^j$ and $l_{i'}^{j'}$ share their variable ($\mathtt{var}(l_i^j) = \mathtt{var}(l_{i'}^{j'})$), we reason independently about them.
For that, we employ the keys $\mathtt{var}(l_i^j)_i$ and $\mathtt{var}(l_{i'}^{j'})_{i'}$. We later enforce additional constraints ensuring that $\mathtt{var}(l_i^j)_i$ and $\mathtt{var}(l_{i'}^{j'})_{i'}$ coordinate, so that assignments on $\mathtt{var}(l_i^j)_i$ coincide with assignments on $\mathtt{var}(l_{i'}^{j'})_{i'}$. For simplicity in the explanation, whenever we talk about a literal $l$ that is negated (for example $l := \lnot x$), we denote by $\lnot l$ the literal $x$. Also, we use $x$ indistinguishably when referring to a variable in $\varphi$ or to a homonymous key in $h_\varphi$.

[Fig. B.2: (a) Transaction $t_i^j$, (b) transaction $\neg t_i^j$ and (c) transaction $S_i^j$: their INSERT events on the keys $\mathtt{var}(l_i^j)^{\pm}_{(i,i')}$ and $c_i^j$, the DELETE events on the key $\mathtt{var}(l_i^j)_i$, and the SELECT($\lambda r$ : true) event of $S_i^j$.]

In addition, with the aim of simplifying the explanation, we assume hereinafter that any occurrence of the indices $i, i', j, j'$ satisfies $1 \le i, i' \le n$ and $1 \le j, j' \le 3$.

For every clause $C_i = l_i^1 \lor l_i^2 \lor l_i^3$, we create nine transactions, denoted by $t_i^j$, $\neg t_i^j$ and $S_i^j$. Figure B.2 shows their definition in detail, which we explain and justify in the following lines. The transaction $t_i^j$ represents the state where $l_i^j$ is satisfied, while $\neg t_i^j$ represents the state where $l_i^j$ is unsatisfied. Transaction $S_i^j$ is in charge of selecting one of the two states. With this goal in mind, transactions $t_i^j$ and $\neg t_i^j$ contain a DELETE event that deletes the key $\mathtt{var}(l_i^j)_i$, while $S_i^j$ contains a SELECT event that does not read $\mathtt{var}(l_i^j)_i$ in $h_\varphi$. By Definition 9, any witness $h'$ of $h_\varphi$ must read $\mathtt{var}(l_i^j)_i$ from a transaction that deletes it.
As $h_\varphi$ contains only two transactions that delete such a key ($t_i^j$ and $\neg t_i^j$), we can interpret that if $S_i^j$ reads $\mathtt{var}(l_i^j)_i$ from $t_i^j$ in $h'$, then $l_i^j$ is satisfied in $\varphi$, while otherwise it is not. For simplifying notation, as the transactions $t_i^j, \lnot t_i^j, S_i^j$ only have one read event each, we define write-read dependencies directly from transactions instead of their read events. We also denote by $\mathtt{var}(t_i^j)$ and $\mathtt{var}(\lnot t_i^j)$ the variable $\mathtt{var}(l_i^j)$, associating each transaction with the variable of its associated literal.

We divide the construction of the history $h_\varphi$ in two main parts. In the first part, we relate the transactions $t_i^j, \lnot t_i^j$ and $S_i^j$ with the clause $C_i$, ensuring that a satisfying valuation of clause $C_i$ corresponds to a consistent history when restricted to its associated transactions. In the second part, we link transactions associated with different clauses (for example $t_i^j$ with $t_{i'}^{j'}$, $i \ne i'$), ensuring that valuations are consistent between clauses (i.e. a variable is not assigned 1 in clause $C_i$ and 0 in clause $C_{i'}$).

For the first part of $h_\varphi$'s construction, we observe that "at least one literal among $l_i^1$, $l_i^2$ or $l_i^3$ must be satisfied" is equivalent to "$\lnot l_i^1$, $\lnot l_i^2$ and $\lnot l_i^3$ cannot be satisfied at the same time".
Thus, we add write-read dependencies to the history in such a way that if the three values that do not satisfy the clause are read by a witness $h'$ of $h_\varphi$, the Read Committed axiom forces $h'$ to be inconsistent. We use an auxiliary key $c_i^j$ written by the transactions $t_i^j$, $\neg t_i^j$ and $\neg t_i^{(j+1) \bmod 3}$ and read by the transaction $S_i^j$, enforcing $(\neg t_i^{(j+1) \bmod 3}, S_i^j) \in \mathsf{wr}_{c_i^j}$. Thanks to the variable $c_i^j$, if $(\neg t_i^j, S_i^j) \in \mathsf{wr}_{\mathsf{var}(l_i^j)_i}$ in such a witness $h'$, then for any consistent execution of $h'$ with commit order co, $(\neg t_i^j, \neg t_i^{(j+1) \bmod 3}) \in \mathsf{co}$. Hence, if $h'$ is consistent, for every $i$ there must exist a $j$ s.t. $(t_i^j, S_i^j) \in \mathsf{wr}_{\mathsf{var}(l_i^j)_i}$. Otherwise, every commit order witnessing $h'$'s consistency would be cyclic; which is a contradiction. Figure B.3 shows how such a co-cycle arises in any commit order witnessing $h_\varphi$'s consistency, where $\varphi$ contains the clause $C_i = x_i \lor y_i \lor \lnot z_i$.
$$ \mathsf{sign}(t) = \begin{cases} + & \text{if } t = t_i^j \wedge l_i^j = \mathsf{var}(l_i^j) \\ - & \text{if } t = t_i^j \wedge l_i^j = \neg\mathsf{var}(l_i^j) \\ - & \text{if } t = \neg t_i^j \wedge l_i^j = \mathsf{var}(l_i^j) \\ + & \text{if } t = \neg t_i^j \wedge l_i^j = \neg\mathsf{var}(l_i^j) \\ \bot & \text{otherwise} \end{cases} \qquad \mathsf{opsign}(t) = \begin{cases} + & \text{if } \mathsf{sign}(t) = - \\ - & \text{if } \mathsf{sign}(t) = + \\ \bot & \text{otherwise} \end{cases} $$ For the second part of $h_{\varphi}$'s construction, we utilize the functions $\mathsf{sign}$ and $\mathsf{opsign}$ described in Equation 9. The function $\mathsf{sign}$ describes when a literal $l_i^j$ is positive (i.e. $l_i^j = \mathsf{var}(l_i^j)$) or negative (i.e. $l_i^j = \neg\mathsf{var}(l_i^j)$). If $l_i^j$ is positive, it assigns to transaction $t_i^j$ the symbol $+$ and to $\neg t_i^j$ the symbol $-$; if $l_i^j$ is negative, the opposite. This symbol is called the sign of the transaction. Hence, for each transaction $t_i$ s.t. $\mathsf{sign}(t_i) \neq \bot$ (i.e. $t_i$ is either $t_i^j$ or $\neg t_i^j$), we introduce $n-1$ INSERT events, one per key $\mathsf{var}(l_i^j)_{(i,i')}^{\mathsf{sign}(t_i)}$, $i' \neq i$, each writing that exact key.
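The case analysis of $\mathsf{sign}$ and $\mathsf{opsign}$ can be transcribed directly. The sketch below uses a hypothetical encoding of our own (a transaction kind `'t'` or `'neg'` plus a polarity flag for the literal); it is an illustration, not notation from the paper.

```python
def sign(kind, positive):
    """Sign of a clause transaction: for a positive literal, t_i^j gets '+'
    and the negated transaction gets '-'; for a negative literal, the
    opposite. 'kind' is 't' for t_i^j and 'neg' for its negated counterpart;
    any other transaction maps to None (standing in for the bottom symbol)."""
    if kind == 't':
        return '+' if positive else '-'
    if kind == 'neg':
        return '-' if positive else '+'
    return None

def opsign(kind, positive):
    """opsign swaps the two symbols and preserves bottom."""
    return {'+': '-', '-': '+', None: None}[sign(kind, positive)]

# For a positive literal x: sign(t) = '+', sign(neg t) = '-', opsign flips both.
assert sign('t', True) == '+' and opsign('t', True) == '-'
assert sign('neg', True) == '-' and opsign('neg', True) == '+'
# For a negative literal the assignment is reversed.
assert sign('t', False) == '-' and sign('neg', False) == '+'
```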
Those INSERT events are read by transaction $S_{i'}^{j'}$ (i.e. $(t_i, S_{i'}^{j'}) \in \mathsf{wr}_x$, where $x = \mathsf{var}(l_i^j)_{(i,i')}^{\mathsf{sign}(t_i)}$). In addition, we introduce in $t_i$ another $n-1$ INSERT events that write the keys $\mathsf{var}(t_i)_{(i',i)}^{\mathsf{opsign}(t_i)}$. Figure B.2 gives the full description of transactions $t_i^j$ and $\neg t_i^j$.

Fig. B.4: Commit edges between transactions of different sign associated with variable $x = \mathsf{var}(l_i^j)$. Superindices $j$ are omitted for legibility. For simplicity in the figure, we assume that $\mathsf{sign}(t_k) = +$ and $\mathsf{sign}(\neg t_k) = -$; the situation generalizes to any other setting.

If $S_i^j$ reads $x_i$ from $t_i$ in a witness $h'$ of $h_{\varphi}$ (respectively $\neg t_i$), then for every $i' \neq i$, $(t_i, \neg t_{i'}) \in \mathsf{co}$ (resp. $(\neg t_i, t_{i'}) \in \mathsf{co}$). The auxiliary keys $\mathsf{var}(l_i^j)_{(i,i')}^{\mathsf{sign}(t_i)}$ and $\mathsf{var}(l_i^j)_{(i,i')}^{\mathsf{opsign}(t_i)}$ are crucial for ensuring consistency between clauses. Intuitively, if $S_i^j$ reads $\mathsf{var}(l_i^j)_i$ from a positive transaction $t$ in a consistent execution of $h_{\varphi}$, $\xi = (\overline{h}, \mathsf{co})$, and $t'$ is a negative transaction s.t.
$\mathsf{var}(t) = \mathsf{var}(t')$, then $(t, t') \in \mathsf{co}$, where $\overline{h}$ is a witness of $h_{\varphi}$. Hence, any other transaction $S_{i'}^{j'}$ must read $\mathsf{var}(l_{i'}^{j'})_{i'}$ also from a positive transaction in $\overline{h}$; otherwise $\mathsf{co}$ would be cyclic, which is impossible as $\mathsf{co}$ must be a total order. This phenomenon, depicted in Figure B.4, ensures that $\mathsf{var}(l_i^j)$ is always read from transactions with the same sign. In conclusion, we can establish a consistent valuation of the variables of $\varphi$ based on the write-read dependencies of the witnesses of $h_{\varphi}$. We introduce a succinct final part of the construction of $h_{\varphi}$ for technical reasons. Indeed, any witness of $h_{\varphi}$ ensures that $\mathsf{wr}_x^{-1}$ is a total function for any $x \in \mathsf{Keys}$. We impose a few additional constraints on $h_{\varphi}$ so we can better characterize the witnesses of $h_{\varphi}$. First, we assume that there exists an initial transaction that inserts, for every key $x$, a dummy value different from $\dagger_x$ (for example $0$). Then, we impose that $t_i^j$ and $\neg t_i^j$ read every key $x$ from the initial transaction. Finally, for the case of transactions $S_i^j$, we define the set of auxiliary keys $\mathsf{A}_i^j$ that contains every key different from $c_i^j$, $\mathsf{var}(l_i^j)_i$, $\mathsf{var}(l_i^j)_{(i',i)}^{+}$ and $\mathsf{var}(l_i^j)_{(i',i)}^{-}$.
We introduce in $S_i^j$ an INSERT event that writes every key in $\mathsf{A}_i^j$ with an arbitrary value (for example, $0$). Hence, $S_i^j$ reads every key in $\mathsf{A}_i^j$ from its own insert and no extra write-read dependency is required. With this technical addendum, we define $h_{\varphi} = (T, \mathsf{so}, \mathsf{wr})$ as the conjunction of all transactions and relations described above. In such case, the only information missing in $h_{\varphi}$ to be a full history is $\mathsf{wr}_{\mathsf{var}(l_i^j)_i}^{-1}(S_i^j)$. We assume that $\mathsf{Keys}$ contains no variables other than the ones aforementioned. The proof of NP-hardness goes as follows: first, we prove in Lemma 19 that $h_{\varphi}$ is indeed a polynomial transformation of $\varphi$. Then, as $\mathsf{iso}(h_{\varphi})$ is saturable, by Theorem 2 we observe that it suffices to prove that $\varphi$ is satisfiable if and only if there is a witness $\overline{h}$ of $h_{\varphi}$ s.t. the relation $\mathsf{pco}_{\overline{h}} = \mathrm{SATURATE}\big(\overline{h}, (\mathsf{so} \cup \overline{\mathsf{wr}})^+\big)$ is acyclic. To simplify the reasoning when $\mathsf{iso}(h_{\varphi})$ has an arbitrary isolation configuration, we rely on Lemma 20 to reduce the proof to the case where $\mathsf{iso}(h_{\varphi}) = \mathsf{RC}$. Hence, we prove in Lemma 24 that, on the one hand, whenever $\varphi$ is satisfiable we can construct a witness $\overline{h}$ of $h_{\varphi}$ based on such an assignment for which $\mathsf{pco}_{\overline{h}}$ is acyclic. For that, we require Lemmas 21 to 23.
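The polynomial-size claim in this roadmap (Lemma 19 below) can be checked with a back-of-the-envelope count. The sketch below assumes a 3-CNF with $n$ clauses; the exact constants for the key count are illustrative assumptions of ours, only the orders of growth are taken from the construction.

```python
def history_size(n):
    """Rough size of h_phi for a 3-CNF with n clauses: three transactions
    (t, its negation, S) per literal, three literals per clause, plus the
    initial transaction. The number of keys is quadratic in n because each
    clause transaction writes O(n) signed cross-clause keys (the constants
    below are illustrative, not exact)."""
    transactions = 9 * n + 1
    keys_per_literal = 2 + 2 * (n - 1)   # var(l)_i, c_i^j, and signed keys
    keys = 3 * n * keys_per_literal
    return transactions, keys

txs, keys = history_size(10)
assert txs == 9 * 10 + 1        # 9n + 1 transactions
assert keys <= 6 * 10 ** 2      # keys grow like O(n^2)
```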
On the other hand, whenever there is a consistent witness $\overline{h}$ of $h_{\varphi}$, we prove in Lemma 25 that we can construct a satisfying assignment of $\varphi$ based on the write-read dependencies in $\overline{h}$. In this case, we require once more Lemma 21. Lemma 19. The history $h_{\varphi}$ has polynomial size in the length of $\varphi$. Proof. Let $\varphi$ be a CNF with $n$ clauses and 3 literals per clause. Then, as $\varphi$ has $3n$ literals, $h_{\varphi}$ employs $9n$ transactions plus one additional one (init). The number of keys, $|\mathsf{Keys}|$, is quadratic in $n$, as transactions $t_i^j$ and $\neg t_i^j$ insert $\mathcal{O}(n)$ keys while $S_i^j$ only inserts keys also inserted by other transactions. Moreover, the number of events in $h_{\varphi}$, $\mathsf{events}(h_{\varphi})$, is in $\mathcal{O}(|\mathsf{Keys}|) = \mathcal{O}(n^2)$, as transactions $t_i^j, \neg t_i^j$ have one INSERT event per key inserted (and they insert $\mathcal{O}(n)$ keys) and one DELETE event, while transactions $S_i^j$ only have two events. In addition, $\mathsf{so} \subseteq T \times T$ and $\mathsf{wr} \subseteq \mathsf{Keys} \times \mathsf{events}(h_{\varphi}) \times \mathsf{events}(h_{\varphi})$, so their size is also polynomial in $n$. Thus, $h_{\varphi}$ is a polynomial transformation of $\varphi$. □ One caveat of $h_{\varphi}$ is that its isolation configuration is unknown. Lemma 20 expresses that, in the particular case of $h_{\varphi}$, all saturable isolation levels stronger than RC are equivalent (they impose the same constraints). Hence, in what follows we can assume without loss of generality that $\mathsf{iso}(h_{\varphi}) = \mathsf{RC}$. Lemma 20.
Under history $h_{\varphi}$, $\mathsf{iso}(h_{\varphi})$ is equivalent to RC (i.e. $\mathsf{iso}(h_{\varphi})$ is both weaker and stronger than RC). Proof. Let $\overline{h} = (T, \mathsf{so}, \overline{\mathsf{wr}})$ be any witness of $h_{\varphi}$ and let $h^{\mathsf{RC}}$ be the history that differs from $\overline{h}$ only in its isolation configuration ($\mathsf{iso}(h^{\mathsf{RC}}) = \mathsf{RC}$ instead of $\mathsf{iso}(\overline{h})$). We prove that $\overline{h}$ and $h^{\mathsf{RC}}$ are simultaneously consistent or inconsistent. As both $\mathsf{iso}(h_{\varphi})$ and RC are saturable, by Theorem 2 the proof is equivalent to proving that $\mathsf{pco}_{\overline{h}}$ and $\mathsf{pco}_{\mathsf{RC}}$ are simultaneously cyclic or acyclic, where $\mathsf{pco}_{\overline{h}} = \mathrm{SATURATE}\big(\overline{h}, (\mathsf{so} \cup \overline{\mathsf{wr}})^+\big)$ and $\mathsf{pco}_{\mathsf{RC}} = \mathrm{SATURATE}\big(h^{\mathsf{RC}}, (\mathsf{so} \cup \overline{\mathsf{wr}})^+\big)$. We prove that the two relations coincide, which allows us to conclude the result. We observe that as $\mathsf{iso}(\overline{h})$ is weaker than RC, $\mathsf{pco}_{\mathsf{RC}} \subseteq \mathsf{pco}_{\overline{h}}$. Thus, it suffices to prove that $\mathsf{pco}_{\overline{h}} \subseteq \mathsf{pco}_{\mathsf{RC}}$. Let $t, t'$ be two transactions s.t. $(t, t') \in \mathsf{pco}_{\overline{h}}$ and let us prove that $(t, t') \in \mathsf{pco}_{\mathsf{RC}}$.
As $(\mathsf{so} \cup \overline{\mathsf{wr}})^+ \subseteq \mathsf{pco}_{\mathsf{RC}}$, we can assume without loss of generality that $(t, t') \in \mathsf{pco}_{\overline{h}} \setminus (\mathsf{so} \cup \overline{\mathsf{wr}})^+$. In such case, there exist $r \in \mathsf{reads}(\overline{h})$, $x \in \mathsf{Keys}$ and $\mathsf{v} \in \mathsf{vis}(\mathsf{iso}(\overline{h})(\mathsf{tr}(r)))$ s.t. $t' = \overline{\mathsf{wr}}_x^{-1}(r)$ and $\mathsf{v}(\mathsf{pco}_{\overline{h}})(t, r, x)$ holds in $\overline{h}$. As $\mathsf{iso}(\overline{h})$ is saturable, $(t, r) \in (\mathsf{so} \cup \overline{\mathsf{wr}})^+$. First, we note that $\mathsf{tr}(r) \neq \mathrm{init}$ as init does not contain any read event. As $t'$ is a $(\mathsf{so} \cup \overline{\mathsf{wr}})^+$-predecessor of $\mathsf{tr}(r)$, and transactions $S_i^j$ are $(\mathsf{so} \cup \overline{\mathsf{wr}})$-maximal, $t'$ is not a $S_i^j$ transaction; so it must be a $t_i^j$ transaction. However, note that by construction of transactions $t_i^j$, their only $(\mathsf{so} \cup \overline{\mathsf{wr}})$-predecessor is init. Thus, their only $(\mathsf{so} \cup \overline{\mathsf{wr}})$-successors can be transactions $S_{i'}^{j'}$, which themselves have no $(\mathsf{so} \cup \overline{\mathsf{wr}})$-successors.
In conclusion, if $(t', \mathsf{tr}(r)) \in (\mathsf{so} \cup \overline{\mathsf{wr}})^+$, then $(t', \mathsf{tr}(r)) \in \mathsf{so} \cup \overline{\mathsf{wr}}$, and therefore $(t', r) \in (\mathsf{so} \cup \overline{\mathsf{wr}}); \mathsf{po}^*$. □ Lemma 21 states a characterization of all commit order cycles imposed by the axiom RC that only relate the nine transactions associated with a clause in $\varphi$. Lemma 21. Let $\overline{h} = (T, \mathsf{so}, \overline{\mathsf{wr}})$ be a witness of $h_{\varphi}$. For a fixed $i$, there is a $\mathsf{pco}_{\overline{h}}$-cycle relating init, $t_i^j$, $\neg t_i^j$ and $S_i^j$, $1 \le j \le 3$, in $\overline{h}$ if and only if for all $1 \le j \le 3$, $(\neg t_i^j, S_i^j) \in \overline{\mathsf{wr}}_{\mathsf{var}(l_i^j)_i}$ in $\overline{h}$. Proof. A graphical description of the different cases of this proof can be seen in Figure B.3. $\Longleftarrow$ Let us suppose that for every $j$, $1 \le j \le 3$, $(\neg t_i^j, S_i^j) \in \overline{\mathsf{wr}}_{\mathsf{var}(l_i^j)_i}$. As $\neg t_i^j$ writes $c_i^j$ and $(\neg t_i^{(j+1) \bmod 3}, S_i^j) \in \overline{\mathsf{wr}}_{c_i^j}$, by axiom RC we deduce that $(\neg t_i^j, \neg t_i^{(j+1) \bmod 3}) \in \mathsf{pco}_{\overline{h}}$. Therefore, there is a $\mathsf{pco}_{\overline{h}}$-cycle between transactions $\neg t_i^1$, $\neg t_i^2$ and $\neg t_i^3$.
$\Longrightarrow$ First, note that $\mathsf{so} \cup \overline{\mathsf{wr}}$ is acyclic, so any $\mathsf{pco}_{\overline{h}}$-cycle has to include at least one $\mathsf{pco}_{\overline{h}} \setminus (\mathsf{so} \cup \overline{\mathsf{wr}})^+$-dependency. Hence, let $t, t'$ be distinct transactions such that $(t', t) \in \mathsf{pco}_{\overline{h}} \setminus (\mathsf{so} \cup \overline{\mathsf{wr}})^+$ is an edge belonging to such a cycle. By axiom Read Committed, this implies that there exist a read event $r$ and a key $x$ s.t. $(t, r) \in \overline{\mathsf{wr}}_x$ and $(t', r) \in (\mathsf{so} \cup \overline{\mathsf{wr}}); \mathsf{po}^*$. Note that in particular this means that $t'$ and $t$ are two distinct $(\mathsf{so} \cup \overline{\mathsf{wr}})$-predecessors of $\mathsf{tr}(r)$. We observe that $\mathsf{tr}(r) \neq \mathrm{init}$ as init does not contain any read event. Moreover, $\mathsf{tr}(r) \neq t_i^j, \neg t_i^j$ as those transactions have only one $(\mathsf{so} \cup \overline{\mathsf{wr}})$-predecessor, init. Hence, there exists $j$ s.t. $\mathsf{tr}(r) = S_i^j$. In this case, every key written by $t_i^j$ or $\neg t_i^j$ besides $c_i^j$ and $\mathsf{var}(l_i^j)_i$ is read by $S_i^j$ from the INSERT event in its own transaction. We distinguish between two cases: – $x = \mathsf{var}(l_i^j)_i$: The only transactions that write $\mathsf{var}(l_i^j)_i$ are $t_i^j$, $\neg t_i^j$, init and transactions $S_{i'}^{j'}$.
However, transactions $S_{i'}^{j'}$ are $(\mathsf{so} \cup \overline{\mathsf{wr}})$-maximal in $h_{\varphi}$, so they cannot be predecessors of $\mathsf{tr}(r)$. As $\forall x \neq \mathsf{var}(l_i^j)_i$, $\overline{\mathsf{wr}}_x^{-1}(S_i^j) = \mathsf{wr}_x^{-1}(S_i^j)$, one of $t$ and $t'$ must be init. However, $t \neq \mathrm{init}$ as init does not delete $\mathsf{var}(l_i^j)_i$; so $t' = \mathrm{init}$. But in such case, $(t', t) \in \mathsf{so}$, which contradicts that $(t', t) \in \mathsf{pco}_{\overline{h}} \setminus (\mathsf{so} \cup \overline{\mathsf{wr}})^+$. This proves that this case is impossible. – $x = c_i^j$: In such case, as $(\neg t_i^{(j+1) \bmod 3}, S_i^j) \in \mathsf{wr}_{c_i^j}$, $t = \neg t_i^{(j+1) \bmod 3}$. The only transactions that write $c_i^j$ and are $(\mathsf{so} \cup \overline{\mathsf{wr}})$-predecessors of $S_i^j$ are init, $t_i^j$ and $\neg t_i^j$. As $(\mathrm{init}, t) \in \mathsf{so}$, $t' \neq \mathrm{init}$. Thus, either of the other two transactions is a candidate to be $t'$. Note that $(t', t) \in \mathsf{pco}_{\overline{h}}$ is part of a cycle; so let $t''$ be a transaction s.t. $(t'', t') \in \mathsf{pco}_{\overline{h}}$.
If $(t'', t') \in (\mathsf{so} \cup \overline{\mathsf{wr}})$ held, then, as for every key $x$, $\overline{\mathsf{wr}}_x^{-1}(t') = \mathsf{wr}_x^{-1}(t')$, $t'' = \mathrm{init}$. As $(t', t)$ is part of a $\mathsf{pco}_{\overline{h}}$-cycle and $t \neq \mathrm{init}$, there must exist a transaction $t''' \neq t'' = \mathrm{init}$ s.t. $(t''', t'') \in \mathsf{pco}_{\overline{h}}$ is part of such a cycle. Note that $(t''', \mathrm{init}) \in \mathsf{pco}_{\overline{h}} \setminus (\mathsf{so} \cup \overline{\mathsf{wr}})$ by construction of $h_{\varphi}$. Hence, there exist a key $y$ and a read event $r'$ s.t. $(\mathrm{init}, r') \in \overline{\mathsf{wr}}_y$ and $(t''', r') \in (\mathsf{so} \cup \overline{\mathsf{wr}}); \mathsf{po}^*$. By construction of $h_{\varphi}$, if $(\mathrm{init}, r') \in \overline{\mathsf{wr}}_y$ then $\mathsf{tr}(r')$ must be $t_i^{j'}$ for some $j'$. But as we mentioned earlier, such transactions only have one $(\mathsf{so} \cup \overline{\mathsf{wr}})$-predecessor, init; so it is impossible that $(t'', t') \in (\mathsf{so} \cup \overline{\mathsf{wr}})$. Hence, $(t'', t') \in \mathsf{pco}_{\overline{h}} \setminus (\mathsf{so} \cup \overline{\mathsf{wr}})$.
Replicating the same argument as before, we can deduce that there exists a $j'$ s.t. $(t'', S_i^{j'}) \in \overline{\mathsf{wr}}_{c_i^{j'}}$, $t' = \neg t_i^{(j'+1) \bmod 3}$ and $t''$ is either $t_i^{j'}$ or $\neg t_i^{j'}$. However, as discussed before, $t'$ could only be $\neg t_i^j$ or $t_i^j$. Therefore, $t' = \neg t_i^j$ and $j = (j'+1) \bmod 3$. Finally, as $t'' \neq t$, there must exist a transaction $t'''$ s.t. $(t''', t'') \in \mathsf{pco}_{\overline{h}}$. By the same argument once more, there exists an index $j''$ s.t. $t'' = \neg t_i^{(j''+1) \bmod 3}$, $(t''', S_i^{j''}) \in \overline{\mathsf{wr}}_{c_i^{j''}}$ and $t'''$ is either $t_i^{j''}$ or $\neg t_i^{j''}$. Once more, as $t''$ could only be $\neg t_i^{j'}$ or $t_i^{j'}$, we deduce that $j' = (j''+1) \bmod 3$ and $t'' = \neg t_i^{j'}$. Note that in this case $j = (j''+2) \bmod 3$. Thus, $t = \neg t_i^{(j+1) \bmod 3} = \neg t_i^{j''} = t'''$. In conclusion, if such a cycle exists, it contains exactly the transactions $\neg t_i^1$, $\neg t_i^2$ and $\neg t_i^3$ and, for each of them, $(\neg t_i^j, S_i^j) \in \overline{\mathsf{wr}}_{\mathsf{var}(l_i^j)_i}$. □ Lemma 22 states that any $\mathsf{pco}_{\overline{h}}$-dependency imposed by the axiom RC on transactions $t, t'$ associated with different clauses in $\varphi$ is related to valuation choices of literals in $\varphi$. Lemma 22.
Let $\overline{h} = (T, \mathsf{so}, \overline{\mathsf{wr}})$ be a witness of the history $h_{\varphi}$. For every pair of transactions $t, t'$ and indices $i, j$, if $\mathsf{var}(t) = \mathsf{var}(t')$, $t'$ deletes $\mathsf{var}(l_i^j)_i$, $t \neq \neg t_i^{(j+1) \bmod 3}$ and $(t', t) \in \mathsf{pco}_{\overline{h}} \setminus (\mathsf{so} \cup \overline{\mathsf{wr}})^+$ in $\overline{h}$, then $(t', S_i^j) \in \overline{\mathsf{wr}}_{\mathsf{var}(l_i^j)_i}$. Proof. Let $i, j$ be indices and $t, t'$ be distinct transactions such that $t \neq \neg t_i^{(j+1) \bmod 3}$ and $(t', t) \in \mathsf{pco}_{\overline{h}} \setminus (\mathsf{so} \cup \overline{\mathsf{wr}})^+$. By axiom Read Committed, there must exist a key $x$ and a read event $r$ s.t. $(t, r) \in \overline{\mathsf{wr}}_x$, $t'$ writes $x$ and $(t', r) \in (\mathsf{so} \cup \overline{\mathsf{wr}}); \mathsf{po}^*$. We characterize the possible candidates for transactions $t, t', \mathsf{tr}(r)$ and key $x$. First, as $\mathsf{tr}(r)$ has two different $(\mathsf{so} \cup \overline{\mathsf{wr}})$-predecessors, $\mathsf{tr}(r) \neq \mathrm{init}, t_{i'}^{j'}$ for any indices $i', j'$. Hence, there must exist indices $i', j'$ s.t. $\mathsf{tr}(r) = S_{i'}^{j'}$. Next we deduce that $t$ and $t'$ belong to different clauses. As $t'$ deletes $\mathsf{var}(l_i^j)_i$, we deduce that $t'$ is either $t_i^j$ or $\neg t_i^j$.
Hence, as $t \neq \neg t_i^{(j+1) \bmod 3}$, $t_i^j$ and $\neg t_i^j$ cannot both be $(\mathsf{so} \cup \overline{\mathsf{wr}})$-predecessors of $S_i^j$; since both $t$ and $t'$ are $(\mathsf{so} \cup \overline{\mathsf{wr}})$-predecessors of $\mathsf{tr}(r)$, we deduce that $t$ and $t'$ belong to different clauses. Finally, we deduce that $i' = i$ and $(t', S_i^j) \in \overline{\mathsf{wr}}_{\mathsf{var}(l_i^j)_i}$. As $x$ is written by $t$ and $t'$ and $x \notin \mathsf{A}_{i'}^{j'}$, either $t$ or $t'$ is associated with the same clause as $S_{i'}^{j'}$. If $t$ were associated with clause $C_{i'}$, then $t$ would be either $t_{i'}^{j'}$ or $\neg t_{i'}^{j'}$ and $x = \mathsf{var}(l_{i'}^{j'})_{i'}$. However, this contradicts that $t'$ writes $\mathsf{var}(l_{i'}^{j'})_{i'}$, as $t'$ is either $t_i^j$ or $\neg t_i^j$. Hence, as $t$ is not associated with clause $C_{i'}$, $i' = i$. As $(t', S_i^j) \notin (\mathsf{so} \cup \mathsf{wr})$ but $(t', S_i^j) \in (\mathsf{so} \cup \overline{\mathsf{wr}})$ and $\overline{\mathsf{wr}}_y = \mathsf{wr}_y$ for any key $y \neq \mathsf{var}(l_i^j)_i$, we conclude that $(t', S_i^j) \in \overline{\mathsf{wr}}_{\mathsf{var}(l_i^j)_i}$.
□ Lemma 23 states that $\mathsf{pco}_{\overline{h}}$ does not contain tuples of transactions associated with literals with equal variable and sign. Lemma 23. Let $\overline{h} = (T, \mathsf{so}, \overline{\mathsf{wr}})$ be a witness of the history $h_{\varphi}$. For every pair of transactions $t, t'$ and indices $i, j$, if $\mathsf{sign}(t) = \mathsf{sign}(t')$, $\mathsf{var}(t) = \mathsf{var}(t') = \mathsf{var}(l_i^j)$, $(t, S_i^j) \in \overline{\mathsf{wr}}_{\mathsf{var}(l_i^j)_i}$ and $t' \neq \neg t_i^{(j-1) \bmod 3}$, then $(t', t) \notin \mathsf{pco}_{\overline{h}} \setminus (\mathsf{so} \cup \overline{\mathsf{wr}})^+$. Proof. We reason by contradiction. Let us suppose that $t, t'$ are a pair of transactions such that $\mathsf{sign}(t) = \mathsf{sign}(t')$, $\mathsf{var}(t) = \mathsf{var}(t') = \mathsf{var}(l_i^j)$, $(t, S_i^j) \in \overline{\mathsf{wr}}_{\mathsf{var}(t)_i}$, $t' \neq \neg t_i^{(j-1) \bmod 3}$ and $(t', t) \in \mathsf{pco}_{\overline{h}} \setminus (\mathsf{so} \cup \overline{\mathsf{wr}})^+$, for some indices $i, j$. As $(t, S_i^j) \in \overline{\mathsf{wr}}_{\mathsf{var}(l_i^j)_i}$, $t$ is either $t_i^j$ or $\neg t_i^j$.
Moreover, as $(t', t) \in \mathsf{pco}_{\overline{h}} \setminus (\mathsf{so} \cup \overline{\mathsf{wr}})^+$, by axiom Read Committed we deduce that there exist a key $x$ and a read event $r$ s.t. $(t', r) \in (\mathsf{so} \cup \overline{\mathsf{wr}}); \mathsf{po}^*$, $(t, r) \in \overline{\mathsf{wr}}_x$ and $t'$ writes $x$. We first prove that $t'$ and $t$ are associated with different clauses. As $(\mathrm{init}, t) \in \mathsf{so}$, $t' \neq \mathrm{init}$. Next, as $(t', r) \in (\mathsf{so} \cup \overline{\mathsf{wr}}); \mathsf{po}^*$ and transactions $S_{i'}^{j'}$ are $(\mathsf{so} \cup \overline{\mathsf{wr}})$-maximal, we deduce that there must exist a pair of indices $i', j'$ s.t. $t' = t_{i'}^{j'}$ or $\neg t_{i'}^{j'}$. Moreover, as $t' \neq \neg t_i^{(j-1) \bmod 3}$, $t$ is either $t_i^j$ or $\neg t_i^j$. In addition, as in any witness of $h_{\varphi}$ both $t_i^j$ and $\neg t_i^j$ cannot be $(\mathsf{so} \cup \overline{\mathsf{wr}})$-predecessors of $S_i^j$, we deduce that $i' \neq i$. Finally, we contradict the hypothesis by proving that $\mathsf{sign}(t) \neq \mathsf{sign}(t')$. As $i' \neq i$ and $t \neq \mathrm{init}$ but $t$ is a $\overline{\mathsf{wr}}$-predecessor of $\mathsf{tr}(r)$, there must exist indices $i'', j''$ s.t. $\mathsf{tr}(r) = S_{i''}^{j''}$.
Hence, as $x \notin \mathsf{A}_{i''}^{j''}$ and it is written by $t$ and $t'$, $i''$ must be either $i'$ or $i$. However, $i'' \neq i$ as in that case $x = \mathsf{var}(l_i^j)_i$ and $t'$ does not write $\mathsf{var}(l_i^j)_i$. Hence, $i'' = i' \neq i$ and $x = \mathsf{var}(l_i^j)_{(i',i)}^{\mathsf{sign}(t')}$. However, as $t$ writes $x$, by construction of $h_{\varphi}$ we must conclude that $\mathsf{sign}(t) \neq \mathsf{sign}(t')$. Thus, as we reached a contradiction, the lemma holds. □ Lemma 24. For every boolean formula $\varphi$, if $\varphi$ is satisfiable then there is a consistent witness $\overline{h}$ of $h_{\varphi}$. Proof. Let $\alpha : \mathsf{Vars}(\varphi) \to \{0, 1\}$ be an assignment that satisfies $\varphi$. Let $h_{\varphi}^{\alpha} = (T, \mathsf{so}, \overline{\mathsf{wr}})$ be the extension of $h_{\varphi}$ s.t. for every $i, j$, $(t_i^j, S_i^j) \in \overline{\mathsf{wr}}_{\mathsf{var}(l_i^j)_i}$ if $l_i^j[\alpha(\mathsf{var}(l_i^j))/\mathsf{var}(l_i^j)] = \mathtt{true}$ and $(\neg t_i^j, S_i^j) \in \overline{\mathsf{wr}}_{\mathsf{var}(l_i^j)_i}$ otherwise. Note that for every two transactions $t, t'$ s.t. $\mathsf{var}(t) = \mathsf{var}(t')$, $\alpha(\mathsf{var}(t)) = \alpha(\mathsf{var}(t'))$.
Hence, if $(t, S_i^j) \in \overline{\mathsf{wr}}_{\mathsf{var}(t)_i}$ and $(t', S_{i'}^{j'}) \in \overline{\mathsf{wr}}_{\mathsf{var}(t')_{i'}}$, then $\mathsf{sign}(t) = \mathsf{sign}(t')$. In addition, by construction of $h_{\varphi}$, for every transaction $S_i^j$, the only variable $x$ such that $\mathsf{wr}_x^{-1}(S_i^j)\uparrow$ is $x = \mathsf{var}(l_i^j)_i$. Thus, for every $x \in \mathsf{Keys}$, $\overline{\mathsf{wr}}_x^{-1}$ is defined for any read that does not read locally, and therefore $h_{\varphi}^{\alpha}$ is a full history that extends $h_{\varphi}$. Let us prove that $h_{\varphi}^{\alpha}$ is consistent. As mentioned before, thanks to Theorem 2, we can reduce the problem of checking if $h_{\varphi}^{\alpha}$ is consistent to the problem of checking if $\mathsf{pco}_{h_{\varphi}^{\alpha}} = \mathrm{SATURATE}\big(h_{\varphi}^{\alpha}, (\mathsf{so} \cup \overline{\mathsf{wr}})^+\big)$ is acyclic. We reason by contradiction, assuming there is a $\mathsf{pco}_{h_{\varphi}^{\alpha}}$-cycle and reaching a contradiction. Clearly $\mathsf{so} \cup \overline{\mathsf{wr}}$ is acyclic, as $\mathsf{so} \cup \mathsf{wr}$ is acyclic, transactions $S_i^j$ are $(\mathsf{so} \cup \mathsf{wr})$-maximal and $\overline{\mathsf{wr}} \setminus \mathsf{wr}$ only contains tuples $(t_i^j, S_i^j)$ or $(\neg t_i^j, S_i^j)$.
Thus, any $\mathsf{pco}_{h_\varphi^\alpha}$-cycle in $h_\varphi^\alpha$ contains at least one edge $(t', t) \in \mathsf{pco}_{h_\varphi^\alpha} \setminus (\mathsf{so} \cup \overline{\mathsf{wr}})^+$; so let $t, t'$ be such a pair of distinct transactions s.t. $(t', t) \in \mathsf{pco}_{h_\varphi^\alpha} \setminus (\mathsf{so} \cup \overline{\mathsf{wr}})^+$ and $(t', t)$ is part of the $\mathsf{pco}_{h_\varphi^\alpha}$-cycle. First, we observe that by construction of $h_\varphi$, transactions $S_i^j$ are $(\mathsf{so} \cup \overline{\mathsf{wr}})$-maximal. Moreover, they are also $\mathsf{pco}_{h_\varphi^\alpha}$-maximal: by contradiction, if there was a transaction $u_i^j$ s.t. $(S_i^j, u_i^j) \in \mathsf{pco}_{h_\varphi^\alpha} \setminus (\mathsf{so} \cup \overline{\mathsf{wr}})^+$, by axiom Read Committed, there would be a read event $r$ s.t. $(S_i^j, r) \in (\mathsf{so} \cup \overline{\mathsf{wr}});\mathsf{po}^*$; which is impossible. We also observe that init is not only $(\mathsf{so} \cup \overline{\mathsf{wr}})$-minimal but also $\mathsf{pco}_{h_\varphi^\alpha}$-minimal. By the same argument, if there were a transaction $u \neq \mathrm{init}$ s.t. $(u, \mathrm{init}) \in \mathsf{pco}_{h_\varphi^\alpha} \setminus (\mathsf{so} \cup \overline{\mathsf{wr}})^+$, by axiom Read Committed, there should be a read event $r$ and a key $x$ s.t.
$(\mathrm{init}, r) \in \overline{\mathsf{wr}}_x$ and $(u, r) \in (\mathsf{so} \cup \overline{\mathsf{wr}});\mathsf{po}^*$. However, by construction of $h_\varphi$, the only transactions that read a variable from init are $t_i^j$; transactions with only one $(\mathsf{so} \cup \overline{\mathsf{wr}})$-predecessor. This shows that such transaction $u$ does not exist. Altogether, the $\mathsf{pco}_{h_\varphi^\alpha}$-cycle can only contain pairs of transactions $t_i^j$ and $\neg t_i^j$. In particular, as transactions $t_i^j$ have only one $(\mathsf{so} \cup \overline{\mathsf{wr}})$-predecessor, init, such $\mathsf{pco}_{h_\varphi^\alpha}$-cycle is in $\mathsf{pco}_{h_\varphi^\alpha} \setminus (\mathsf{so} \cup \overline{\mathsf{wr}})^+$. Next, we note that, as every clause $C_i$ is satisfied by $\alpha$, there exists an index $j$ s.t. $(t_i^j, S_i^j) \in \overline{\mathsf{wr}}_{\mathsf{var}(l_i^j)_i}$. By Lemma 21, we know there is no $\mathsf{pco}_{h_\varphi^\alpha}$-cycle relating the nine transactions associated with clause $C_i$ and init. Therefore, a $\mathsf{pco}_{h_\varphi^\alpha}$-cycle has to involve at least two transactions from different clauses. Hence, we can assume without loss of generality that $t$ and $t'$ belong to different clauses. As $(t', t) \in \mathsf{pco}_{h_\varphi^\alpha} \setminus (\mathsf{so} \cup \overline{\mathsf{wr}})^+$, there must exist a key $x$ and a read event $r_x$ s.t.
$t$ writes $x$, $(t, r_x) \in \overline{\mathsf{wr}}_x$ and $(t', r_x) \in (\mathsf{so} \cup \overline{\mathsf{wr}});\mathsf{po}^*$. By construction of $h_\varphi$, the only case when two transactions from different clauses write the same variable is when $\mathsf{var}(t) = \mathsf{var}(t')$. In particular, as $t$ and $t'$ belong to different clauses, there must exist indices $i, i'$ s.t. $x = \mathtt{var}(t)_{(i',i)}^{\mathtt{sign}(t)}$ and $\mathsf{tr}(r_x) = S_i^j$. Hence, there is only one candidate for transaction $t'$: $t_i^j$ if $\mathtt{sign}(t_i^j) = \mathtt{sign}(t') = \mathtt{opsign}(t)$ and $\neg t_i^j$ otherwise. Therefore, as $t', \mathsf{tr}(r_x)$ belong to the same clause and $t'$ is a $(\mathsf{so} \cup \overline{\mathsf{wr}})$-predecessor of $\mathsf{tr}(r_x)$, we conclude that $(t', S_i^j) \in \overline{\mathsf{wr}}_{\mathsf{var}(t')_i}$. To reach a contradiction, we find a pair of distinct transactions $\tilde{t}, \hat{t}$ in the $\mathsf{pco}_{h_\varphi^\alpha}$-cycle from different clauses but associated to the same variable. First, as $(t', t)$ is part of the $\mathsf{pco}_{h_\varphi^\alpha}$-cycle, there exists a $\mathsf{pco}_{h_\varphi^\alpha}$-predecessor of $t'$, $t''$, s.t.
$(t'', t') \in \mathsf{pco}_{h_\varphi^\alpha}$ is part of the $\mathsf{pco}_{h_\varphi^\alpha}$-cycle. As we mentioned before, $(t'', t') \in \mathsf{pco}_{h_\varphi^\alpha} \setminus (\mathsf{so} \cup \overline{\mathsf{wr}})^+$. Then, there must exist a key $y$ and a read event $r_y$ s.t. $t''$ writes $y$, $(t', r_y) \in \overline{\mathsf{wr}}_y$ and $(t'', r_y) \in (\mathsf{so} \cup \overline{\mathsf{wr}});\mathsf{po}^*$. Two cases arise:

– $\underline{t''}$ is not associated to clause $\underline{i}$: In this case, as both $t', t''$ write variable $y$, by construction of $h_\varphi$ we observe that $\mathtt{var}(t'') = \mathtt{var}(t')$. Thus, we denote $\tilde{t} = t''$ and $\hat{t} = t'$.

– $\underline{t''}$ is associated to clause $\underline{i}$: In this case, $t'' \neq \neg t'$ as no transaction in $h_\varphi$ has both $t'$ and $\neg t'$ as $(\mathsf{so} \cup \overline{\mathsf{wr}})$-predecessors. Hence, as no clause has two literals referring to the same variable, $\mathtt{var}(t') \neq \mathtt{var}(t'')$. Thus, as $t''$ and $t'$ have one common key, we deduce that $t'' = \neg t_i^{j-1 \bmod 3}$ and $y = c_i^{j-1 \bmod 3}$.
Thus, as $(t', r_y) \in \overline{\mathsf{wr}}_{c_i^{j-1 \bmod 3}}$, we can conclude that $\mathsf{tr}(r_y) = S_i^{j-1 \bmod 3}$ and $(t'', S_i^{j-1 \bmod 3}) \in \overline{\mathsf{wr}}_{\mathsf{var}(t'')_i}$. As $t'' \neq t$, there must exist a transaction $t'''$ s.t. $(t''', t'') \in \mathsf{pco}_{h_\varphi^\alpha}$ belongs to the $\mathsf{pco}_{h_\varphi^\alpha}$-cycle. Again, we observe two cases:

– $\underline{t'''}$ is not associated to clause $\underline{i}$: In this case, by an analogous argument, we observe that $\mathtt{var}(t''') = \mathtt{var}(t'')$. Thus, we denote $\tilde{t} = t'''$ and $\hat{t} = t''$.

– $\underline{t'''}$ is associated to clause $\underline{i}$: By the same reasoning as before, $t''' = t_i^{j-2 \bmod 3}$ and $(t''', S_i^{j-2 \bmod 3}) \in \overline{\mathsf{wr}}_{\mathsf{var}(t''')_i}$. Moreover, as $t''' \neq t$, there must exist a transaction $t''''$ s.t. $(t'''', t''') \in \mathsf{pco}_{h_\varphi^\alpha}$ belongs to the $\mathsf{pco}_{h_\varphi^\alpha}$-cycle.
Moreover, $t''''$ is not associated to clause $i$: otherwise, once more, we would deduce that $t'''' = \neg t_i^{j-3 \bmod 3}$ and that $(t'''', S_i^{j-3 \bmod 3}) \in \overline{\mathsf{wr}}_{\mathsf{var}(t'''')_i}$, which is impossible by the construction of $h_\varphi^\alpha$, as clause $C_i$ is satisfied by $\alpha$. Hence, $t''''$ and $t'''$ belong to different clauses and $\mathsf{var}(t'''') = \mathsf{var}(t''')$. We denote in this case $\tilde{t} = t''''$ and $\hat{t} = t'''$. Finally, we reach a contradiction with the help of Lemmas 23 and 22. On one hand, by the choice of transactions $\hat{t}$ and $\tilde{t}$, we know that $\mathtt{var}(\hat{t}) = \mathtt{var}(\tilde{t})$ and there exist indices $\tilde{i}, \tilde{j}$ s.t. $\tilde{t}$ deletes $\mathtt{var}(l_{\tilde{i}}^{\tilde{j}})$. Moreover, $\hat{t} \neq \neg t_{\tilde{i}}^{\tilde{j}+1 \bmod 3}$ as they belong to different clauses. Thus, as $(\tilde{t}, \hat{t}) \in \mathsf{pco}_{h_\varphi^\alpha} \setminus (\mathsf{so} \cup \overline{\mathsf{wr}})^+$, by Lemma 22 we deduce that $(\tilde{t}, S_{\tilde{i}}^{\tilde{j}}) \in \overline{\mathsf{wr}}_{\mathsf{var}(\tilde{t})_{\tilde{i}}}$. On the other hand, we also know that there exist indices $\hat{i}, \hat{j}$ s.t. $\hat{t}$ is associated to the literal $l_{\hat{i}}^{\hat{j}}$ and $(\hat{t}, S_{\hat{i}}^{\hat{j}}) \in \overline{\mathsf{wr}}_{\mathsf{var}(\hat{t})_{\hat{i}}}$.
Hence, by construction of $h_\varphi^\alpha$, as $\mathtt{var}(\hat{t}) = \mathtt{var}(\tilde{t})$, $(\tilde{t}, S_{\tilde{i}}^{\tilde{j}}) \in \overline{\mathsf{wr}}_{\mathsf{var}(\tilde{t})_{\tilde{i}}}$ and $(\hat{t}, S_{\hat{i}}^{\hat{j}}) \in \overline{\mathsf{wr}}_{\mathsf{var}(\hat{t})_{\hat{i}}}$, we deduce that $\mathtt{sign}(\hat{t}) = \mathtt{sign}(\tilde{t})$. However, by Lemma 23, we deduce that $(\tilde{t}, \hat{t}) \notin \mathsf{pco}_{h_\varphi^\alpha} \setminus (\mathsf{so} \cup \overline{\mathsf{wr}})^+$. This contradicts that $(\tilde{t}, \hat{t})$ is part of the $\mathsf{pco}_{h_\varphi^\alpha}$-cycle. Thus, the initial hypothesis, that $\mathsf{pco}_{h_\varphi^\alpha}$ is cyclic, is false. In conclusion, $\mathsf{pco}_{h_\varphi^\alpha}$ is acyclic, so $h_\varphi$ is consistent as $h_\varphi^\alpha$ is a consistent witness of $h_\varphi$. □

Lemma 25. For every boolean formula $\varphi$, if there is a consistent witness of $h_\varphi$, then $\varphi$ is satisfiable.

Proof. Let $\overline{h} = (T, \mathsf{so}, \overline{\mathsf{wr}})$ be a consistent witness of $h_\varphi$. Hence, by Theorem 2, the relation $\mathsf{pco}_{\overline{h}} = \mathrm{SATURATE}\big(\overline{h}, (\mathsf{so} \cup \overline{\mathsf{wr}})^+\big)$ is acyclic. We use this fact to construct a satisfying assignment of $\varphi$. Let $u_i^j$ denote the transaction s.t. $(u_i^j, S_i^j) \in \overline{\mathsf{wr}}_{\mathsf{var}(l_i^j)_i}$.
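Both directions of the reduction rest on the same computational core: by Theorem 2, consistency of a full history reduces to testing acyclicity of the saturated order $\mathsf{pco}$. As a minimal illustration of that acyclicity test, the following sketch checks a directed graph of transactions given as explicit edge pairs (all names here are hypothetical stand-ins, not the paper's implementation):

```python
# Illustrative sketch: DFS-based acyclicity check over a relation such as
# so ∪ wr, represented as explicit (source, target) edge pairs.
def is_acyclic(nodes, edges):
    """Return True iff the directed graph (nodes, edges) has no cycle."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in nodes}
    succ = {n: [] for n in nodes}
    for u, v in edges:
        succ[u].append(v)

    def visit(u):
        color[u] = GRAY
        for v in succ[u]:
            if color[v] == GRAY:          # back edge: cycle found
                return False
            if color[v] == WHITE and not visit(v):
                return False
        color[u] = BLACK
        return True

    return all(color[n] != WHITE or visit(n) for n in nodes)

# so ∪ wr edges for three transactions: init → t1 → S and init → S
assert is_acyclic(["init", "t1", "S"],
                  [("init", "t1"), ("t1", "S"), ("init", "S")])
# adding an edge back to init creates a cycle
assert not is_acyclic(["init", "t1", "S"],
                      [("init", "t1"), ("t1", "init")])
```

Any standard cycle-detection routine (e.g. a topological sort) would serve equally well; the proofs only use the fact that acyclicity of the saturated order is decidable.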
Note that by construction of $h_\varphi$, $u_i^j$ deletes $\mathtt{var}(l_i^j)_i$, so $u_i^j$ is either $t_i^j$ or $\neg t_i^j$. We first prove that for all indices $i, i', j, j'$, if $\mathtt{var}(u_i^j) = \mathtt{var}(u_{i'}^{j'})$ then $\mathtt{sign}(u_i^j) = \mathtt{sign}(u_{i'}^{j'})$. By contradiction, let $u_i^j, u_{i'}^{j'}$ be a pair of transactions s.t. $\mathsf{var}(u_i^j) = \mathsf{var}(u_{i'}^{j'})$ and $\mathsf{sign}(u_i^j) \neq \mathsf{sign}(u_{i'}^{j'})$. In such case, $\mathsf{opsign}(u_i^j) = \mathsf{sign}(u_{i'}^{j'})$. Thus, both transactions write $\mathsf{var}(u_i^j)_{(i',i)}^{\mathsf{opsign}(u_i^j)}$ and $\mathsf{var}(u_i^j)_{(i,i')}^{\mathsf{sign}(u_i^j)}$. By axiom Read Committed, as $(u_{i'}^{j'}, S_i^j) \in \overline{\mathsf{wr}}_{\mathsf{var}(u_i^j)_{(i',i)}^{\mathsf{opsign}(u_i^j)}}$ and $(u_i^j, S_i^j) \in \overline{\mathsf{wr}}$, we conclude that $(u_i^j, u_{i'}^{j'}) \in \mathsf{pco}_{\overline{h}}$.
By a symmetric argument using $\mathsf{var}(u_i^j)_{(i,i')}^{\mathsf{sign}(u_i^j)}$ we deduce that $(u_{i'}^{j'}, u_i^j) \in \mathsf{pco}_{\overline{h}}$. However, this is impossible as $\mathsf{pco}_{\overline{h}}$ is acyclic; so we conclude that indeed $\mathtt{sign}(u_i^j) = \mathtt{sign}(u_{i'}^{j'})$. Next, we construct a map that assigns to each variable in $\varphi$ a value 0 or 1. Let $\alpha_h : \mathsf{Vars}(\varphi) \to \{0,1\}$ be the map that assigns to each variable $\mathtt{var}(l_i^j)$ the value 1 if $\mathsf{sign}(u_i^j) = +$ and $0$ if $\mathtt{sign}(u_i^j) = -$. Note that this map is well defined as, by the previous paragraph, if two literals $l_i^j, l_{i'}^{j'}$ share a variable, then their respective transactions $u_i^j, u_{i'}^{j'}$ have the same sign. Finally, we prove that $\varphi$ is satisfied by this assignment. By construction of $\alpha_h$, for every pair of indices $i, j$, $l_i^j[\alpha_h(\mathsf{var}(l_i^j))/\mathsf{var}(l_i^j)]$ is true if and only if $(t_i^j, S_i^j) \in \overline{\mathsf{wr}}_{\mathsf{var}(l_i^j)_i}$. Moreover, as $\mathsf{pco}_{\overline{h}}$ is acyclic, by Lemma 21, we know that for each $i$ there exists a $j$ s.t. $u_i^j \neq \neg t_i^j$. Hence, for this $j$, $u_i^j$ must be $t_i^j$ as $u_i^j$ is either $t_i^j$ or $\neg t_i^j$.
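The extraction of $\alpha_h$ from the signs of the chosen writers $u_i^j$ can be sketched as follows; this is a toy illustration with hypothetical data structures, not code from the paper:

```python
# Sketch of Lemma 25's assignment extraction: each chosen writer u_i^j
# carries a sign, and the assignment maps var(l_i^j) to 1 iff that sign is '+'.
def extract_assignment(chosen_writers):
    """chosen_writers: dict mapping clause/literal index (i, j) -> (variable, sign)."""
    alpha = {}
    for (i, j), (var, sign) in chosen_writers.items():
        value = 1 if sign == '+' else 0
        # Well-definedness (proved above): literals sharing a variable
        # must have chosen writers with equal signs.
        assert alpha.get(var, value) == value, "conflicting signs for " + var
        alpha[var] = value
    return alpha

# clause 0 chooses writer t_0^1 with sign '+', clause 1 chooses ¬t_1^0
assert extract_assignment({(0, 1): ("x", '+'),
                           (1, 0): ("y", '-')}) == {"x": 1, "y": 0}
```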
Therefore, every clause is satisfied using $\alpha_h$ as assignment; so $\varphi$ is satisfiable. □

# B.4 Proof of Theorem 5.

Theorem 5. Let $h$ be a client history whose isolation configuration is defined using {SER, SI, PC, RA, RC}. Algorithm 3 returns true if and only if $h$ is consistent.

The proof of Theorem 5 is a consequence of Lemmas 27 and 30.

Lemma 26. Let $h = (T, \mathsf{so}, \mathsf{wr})$ be a client history, $P = (T_P, M_P)$ be a consistent prefix of $h$ and $t \in T \setminus T_P$. If $(P \cup \{t\}) \in \mathsf{seen}$ then exploreConsistentPrefixes$(h, P \cup \{t\})$ returns false.

Proof. If $(P \cup \{t\}) \in \mathsf{seen}$, then $P \cup \{t\}$ has been added to seen at line 6 of Algorithm 4. For that instruction to execute, the condition at line 4 (that exploreConsistentPrefixes$(h, P \cup \{t\})$ returns true) must not hold; which lets us conclude the result. □

Lemma 27. Let $h = (T, \mathsf{so}, \mathsf{wr})$ be a client history whose isolation configuration is stronger than RC. If $h$ is consistent, Algorithm 3 returns true.

Proof. Let $h$ be a consistent history that satisfies the hypothesis of the lemma. As $h$ is consistent, let $\overline{h} = (T, \mathsf{so}, \overline{\mathsf{wr}})$ be a witness of $h$ and let $\xi = (\overline{h}, \mathsf{co})$ be a consistent execution of $\overline{h}$. We first reduce the problem to proving that Algorithm 4 returns true on a particular witness of $h$, a history $\hat{h}$ s.t. $h \subseteq \hat{h} \subseteq \overline{h}$. First, let $\mathsf{pco}, E_h$ and $X_h$ be defined as in Algorithm 3 at lines 2-4. As $\overline{h}$ is consistent, for every read event $r$ and a variable $x$ s.t.
$\mathsf{wr}_x^{-1}(r)\uparrow$, $\overline{\mathsf{wr}}_x^{-1}(r)\downarrow$ and $\mathsf{WHERE}(r)(\mathsf{value}_{\mathsf{wr}}(t_x^r, x)) = 0$; where $t_x^r = \overline{\mathsf{wr}}_x^{-1}(r)$. On one hand, if $E_h$ is empty, $X_h$ is empty as well. In such case, we denote $\hat{h} = h$. On the other hand, if $E_h \neq \emptyset$, for every $(r, x) \in E_h$ we know that the transaction $t_x^r$ belongs to $0_x^r$. Therefore, $X_h \neq \emptyset$. Thus, let $f$ be the map that assigns to every pair $(r, x) \in E_h$ the transaction $t_x^r$; and let $\hat{h} = (T, \mathsf{so}, \hat{\mathsf{wr}})$ be the history s.t. $\hat{h} = h \bigoplus_{(r,x) \in E_h} \mathsf{wr}_x(f(r,x), r)$. The fact that $\hat{h}$ is a history and that $\overline{h}$ witnesses $\hat{h}$'s consistency using co is immediate as $\mathsf{wr} \subseteq \hat{\mathsf{wr}} \subseteq \overline{\mathsf{wr}}$. Note that in both cases, the condition at line 6 does not hold. Therefore, to prove that Algorithm 3 returns true it suffices to prove that exploreConsistentPrefixes$(\hat{h}, \emptyset)$ returns true. We define an inductive sequence of prefixes based on co and show that they represent recursive calls to Algorithm 4. As a base case, let $P_0$ be the prefix with only init as transaction. Assuming that for every $j, 0 \le j \le i$, $P_j$ is defined, let $P_{i+1} = P_i \cup \{t_{i+1}\}$; where $t_{i+1}$ is the $(i+1)$-th transaction of $T$ according to co. By construction of co, $\mathsf{pco} \subseteq \mathsf{co}$. Hence, Property 1 immediately holds.
Moreover, as co witnesses $\overline{h}$'s consistency, Property 2 also holds; so $P_i \triangleright_{t_{i+1}} P_{i+1}$. We conclude by showing, by induction on the number of transactions that are not in the prefix, that for every $i, 0 \le i \le |T|$, exploreConsistentPrefixes$(\hat{h}, P_i)$ returns true.

– Base case: The base case is $i = |T|$. In such case, $P_{|T|}$ contains all transactions in $T$. Therefore, the condition at line 2 in Algorithm 4 holds and the algorithm returns true.

– Inductive case: The inductive hypothesis guarantees that for every $k, i \le k \le |T|$, exploreConsistentPrefixes$(\hat{h}, P_k)$ returns true, and we show that exploreConsistentPrefixes$(\hat{h}, P_{i-1})$ also returns true. By definition of $P_i$, $T_{P_i} = T_{P_{i-1}} \cup \{t_i\}$. In particular, $|T_{P_{i-1}}| \neq |T|$ and $P_{i-1} \triangleright_{t_i} P_i$. In addition, by induction hypothesis, we know that exploreConsistentPrefixes$(\hat{h}, P_i)$ returns true. Hence, by Lemma 26, $P_i \notin \mathsf{seen}$. Altogether, we deduce that exploreConsistentPrefixes$(\hat{h}, P_{i-1})$ returns true.

Lemma 28. Let $\hat{h} = (T, \mathsf{so}, \hat{\mathsf{wr}})$ be a client history and $P = (T_P, M_P)$ be a consistent prefix. If exploreConsistentPrefixes$(\hat{h}, P)$ returns true, there exist distinct transactions $t_i \in T \setminus T_P$ and a collection of consistent prefixes $P_i = (T_{P_i}, M_{P_i})$ s.t. $P_i = P_{i-1} \cup \{t_i\}$, $P_{i-1} \triangleright_{t_i} P_i$ and exploreConsistentPrefixes$(\hat{h}, P_i)$ returns true; where $|T_P| < i \le |T|$ and $P_{|T_P|} = P$.

Proof.
Let $\hat{h}$ be a client history and $P = (T_P, M_P)$ be a consistent prefix s.t. exploreConsistentPrefixes$(\hat{h}, P)$ returns true. We prove the result by induction on the number of transactions not present in $T_P$. The base case, when $|T_P| = |T|$, immediately holds as $T \setminus T_P = \emptyset$. Let us assume that the inductive hypothesis holds for any prefix containing $k$ transactions and let us show that it also holds for every consistent prefix with $k-1$ transactions. Let us thus assume that $|T_P| = k - 1$. As exploreConsistentPrefixes$(\hat{h}, P)$ returns true, it must reach line 5 in Algorithm 4. Hence, there must exist a transaction $t_k \in T \setminus T_P$ s.t. $P \triangleright_{t_k} (P \cup \{t_k\})$ and exploreConsistentPrefixes$(\hat{h}, P \cup \{t_k\})$ returns true. By induction hypothesis on $P \cup \{t_k\} = (T_k, M_k)$, there exist distinct transactions $t_i \in T \setminus T_k$ and a collection of consistent prefixes $P_i$ s.t. $P_i = P_{i-1} \cup \{t_i\}$, $P_{i-1} \triangleright_{t_i} P_i$ and exploreConsistentPrefixes$(\hat{h}, P_i)$ returns true; where $k < i \le |T|$ and $P_k = P \cup \{t_k\}$. Thus, the inductive step holds thanks to prefix $P_k$. □

Lemma 29. Let $h = (T, \mathsf{so}, \mathsf{wr})$ be a client history and let pco be the relation defined at line 2 in Algorithm 3. If checkConsistency$(h)$ returns true, there exists an extension $\hat{h} = (T, \mathsf{so}, \hat{\mathsf{wr}})$ of $h$ s.t.
for every read event $r$, variable $x$ and transaction $t$, (1) if $(t, r) \in \hat{\mathsf{wr}}_x \setminus \mathsf{wr}_x$ then $t \in 0_x^r(\mathsf{pco})$, (2) if $\hat{\mathsf{wr}}_x^{-1}(r)\uparrow$ then $1_x^r(\mathsf{pco}) = \emptyset$, and (3) exploreConsistentPrefixes$(\hat{h}, \emptyset)$ returns true.

Proof. Let $h = (T, \mathsf{so}, \mathsf{wr})$ be a client history s.t. checkConsistency$(h)$ returns true and let $\mathsf{pco}, E_h, X_h$ be the objects described in lines 2-4 in Algorithm 3. If there exists a pair $(r, x) \in E_h$ for which $0_x^r(\mathsf{pco}) = \emptyset$, checkConsistency$(h)$ returns false. Hence, $E_h$ is empty if and only if $X_h$ is empty. If $E_h = \emptyset$, Algorithm 3 executes line 7. Thus, taking $\hat{h} = h$, conditions (1), (2) and (3) trivially hold. Otherwise, Algorithm 3 executes line 8. Once again, as checkConsistency$(h)$ returns true, there must exist $f \in X_h$ s.t. exploreConsistentPrefixes$(\hat{h}, \emptyset)$ returns true; where $\hat{h} = h \bigoplus_{(r,x) \in E_h} \mathsf{wr}_x(f(r,x), r)$. Thanks to the definition of $f$ and $\hat{h}$, conditions (1), (2) and (3) are satisfied. □

Lemma 30. Let $h = (T, \mathsf{so}, \mathsf{wr})$ be a client history whose isolation configuration is composed of $\{\mathtt{SER}, \mathtt{SI}, \mathtt{PC}, \mathtt{RA}, \mathtt{RC}\}$ isolation levels. If Algorithm 3 returns true, $h$ is consistent.

Proof. Let $h = (T, \mathsf{so}, \mathsf{wr})$ be a client history s.t. checkConsistency$(h)$ returns true and let $\mathsf{pco}, E_h$ and $X_h$ be defined as at lines 2-4 in Algorithm 3.
By Lemma 29, there exists an extension $\hat{h} = (T, \mathsf{so}, \hat{\mathsf{wr}})$ of $h$ s.t. for every read event $r$, variable $x$ and transaction $t$, (1) if $(t, r) \in \hat{\mathsf{wr}}_x \setminus \mathsf{wr}_x$ then $t \in 0_x^r(\mathsf{pco})$, (2) if $\hat{\mathsf{wr}}_x^{-1}(r)\uparrow$ then $1_x^r(\mathsf{pco}) = \emptyset$, and (3) exploreConsistentPrefixes$(\hat{h}, \emptyset)$ returns true. By Lemma 28 applied on $\hat{h}$ and $\emptyset$, there exist distinct transactions $t_i \in T$ and a collection of prefixes of $h$, $P_i = (T_i, M_i)$, s.t. $P_i = P_{i-1} \cup \{t_i\}$, $P_{i-1} \triangleright_{t_i} P_i$ and exploreConsistentPrefixes$(\hat{h}, P_i)$ returns true; where $P_0 = \emptyset$ and $0 < i \le |T|$. Let co be the total order based on the aforementioned transactions $t_i$, i.e. $\mathsf{co} = \{(t_i, t_j) \mid i < j\}$. We construct a full history that extends $\hat{h}$ employing co and taking into account the isolation level of each transaction.
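The search that Lemmas 26-28 reason about, extending a consistent prefix one transaction at a time while memoizing prefixes whose exploration failed, can be sketched schematically. The predicate `can_extend` stands in for the check $P \triangleright_t P \cup \{t\}$; this is an illustrative reconstruction under those assumptions, not the paper's Algorithm 4:

```python
# Schematic memoized prefix exploration: succeed iff the transactions can
# be appended one by one, each step passing the consistency check, with a
# 'seen' set pruning prefixes already known to fail (cf. Lemma 26).
def explore(all_txns, can_extend, prefix=frozenset(), seen=None):
    """Return True iff some total order adds every transaction, where
    can_extend(prefix, t) abstracts the consistency check P ▷_t P ∪ {t}."""
    if seen is None:
        seen = set()
    if prefix == all_txns:
        return True
    for t in all_txns - prefix:
        ext = prefix | {t}
        if ext in seen or not can_extend(prefix, t):
            continue
        if explore(all_txns, can_extend, ext, seen):
            return True
        seen.add(ext)   # remember failed prefixes, as Lemma 26 exploits
    return False

# toy check: t2 may only be appended after t1
txns = frozenset({"t1", "t2"})
ok = lambda p, t: t != "t2" or "t1" in p
assert explore(txns, ok)
assert not explore(txns, lambda p, t: False)
```

On success, the sequence of extensions visited along the accepting branch yields exactly the total order co used in the proof of Lemma 30.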
For every read event $r$, key $x$ and visibility relation $\mathsf{v} \in \mathsf{vis}(\mathsf{iso}(h)(\mathsf{tr}(r)))$, let $t_\mathsf{v}^x, t_x^r$ be the transactions defined as follows:
$$
\begin{array}{rl}
t_\mathsf{v}^x & = \displaystyle\operatorname*{max}_{\mathsf{co}}\{t' \in T \mid t' \text{ writes } x \wedge \mathsf{v}(\mathsf{co})(t', r, x)\} \\
t_x^r & = \displaystyle\operatorname*{max}_{\mathsf{co}}\{t_\mathsf{v}^x \mid \mathsf{v} \in \mathsf{vis}(\mathsf{iso}(h)(\mathsf{tr}(r)))\}
\end{array}
$$
Note that if $\mathsf{v}$ is a visibility relation associated to an axiom from the SER, SI, PC, RA and RC isolation levels, transactions $t_\mathsf{v}^x$ and $t_x^r$ are well-defined as $\mathsf{v}(\mathsf{co})(\mathrm{init}, r, x)$ holds. Thus, let $\overline{\mathsf{wr}}_x = \hat{\mathsf{wr}}_x \cup \{(t_x^r, r) \mid \hat{\mathsf{wr}}_x^{-1}(r)\uparrow\}$ and $\overline{\mathsf{wr}} = \bigcup_{x \in \mathsf{Keys}} \overline{\mathsf{wr}}_x$. As $\overline{\mathsf{wr}}_x^{-1}$ is a total function and $\overline{\mathsf{wr}}_x^{-1}(r)$ writes $x$, we can conclude that $\overline{h} = (T, \mathsf{so}, \overline{\mathsf{wr}})$ is a full history. We prove that $\overline{h}$ is also a witness of $h$.
For that, we show that for every read event $r$, every key $x$ and every transaction $t$, if $(t, r) \in \overline{\mathsf{wr}}_x \setminus \mathsf{wr}_x$, then $t \in 0_x^r(\mathsf{pco})$. Two cases arise: $(t, r) \in \hat{\mathsf{wr}}_x \setminus \mathsf{wr}_x$ and $(t, r) \in \overline{\mathsf{wr}}_x \setminus \hat{\mathsf{wr}}_x$. The first case is quite straightforward: if $(t, r) \in \hat{\mathsf{wr}}_x \setminus \mathsf{wr}_x$, by Property (1) of Lemma 29, $t \in 0_x^r(\mathsf{pco})$. The second case, $(t, r) \in \overline{\mathsf{wr}}_x \setminus \hat{\mathsf{wr}}_x$, is slightly more subtle. First, for every isolation level considered, if $(t, r) \in \overline{\mathsf{wr}}_x$ then $(t, \mathsf{tr}(r)) \in \mathsf{pco}$. Next, as checkConsistency$(h)$ returns true, the condition at line 5 does not hold. Hence, as pco is acyclic, we deduce that $(\mathsf{tr}(r), t) \notin \mathsf{pco}$. In addition, as $(t, r) \in \overline{\mathsf{wr}}_x \setminus \hat{\mathsf{wr}}_x$, $\hat{\mathsf{wr}}_x^{-1}(r)\uparrow$. By Property (2) of Lemma 29, employed during $\hat{h}$'s construction, we deduce that $1_x^r(\mathsf{pco}) = \emptyset$. In conclusion, as $(\mathsf{tr}(r), t) \notin \mathsf{pco}$ and $1_x^r(\mathsf{pco}) = \emptyset$, we conclude that $t \in 0_x^r(\mathsf{pco})$. Finally, we prove that co witnesses that $\overline{h}$ is consistent. Let $r$ be a read event, $x$ be a key and $t_1, t_2$ be transactions s.t. $(t_1, r) \in \overline{\mathsf{wr}}_x$ and $t_2$ writes $x$.
We prove that if there exists $\mathsf{v} \in \mathsf{vis}(\mathsf{iso}(\overline{h})(\mathsf{tr}(r)))$ s.t. $\mathsf{v}(\mathsf{co})(t_2, r, x)$ holds in $\overline{h}$ then $(t_2, t_1) \in \mathsf{co}$; which, by Definition 6, implies that $\overline{h}$ is consistent. Note that if $(t_1, r) \in \overline{\mathsf{wr}} \setminus \hat{\mathsf{wr}}$, by definition of $t_x^r$ the statement immediately holds; so we can assume without loss of generality that $(t_1, r) \in \hat{\mathsf{wr}}_x$. First, we note that proving that whenever $\mathsf{v}(\mathsf{co})(t_2, r, x)$ holds in $\overline{h}$, then $(t_2, t_1) \in \mathsf{co}$ is equivalent to proving that whenever $\mathsf{v}(\mathsf{co})(t_2, r, x)$ holds in $\overline{h}$, then $t_1 \notin T_{i-1}$; where $i$ is the index of the transaction in $T$ s.t. $t_2 = t_i$. For every $i, 1 \le i \le |T|$, $P_{i-1} \triangleright_{t_i} P_i$, so $\hat{\mathsf{wr}} \subseteq \mathsf{co}$. Thus, by Definition 6, it suffices to show that for every read event $r$, $C_{\mathsf{iso}(h)(\mathsf{tr}(r))}(\mathsf{pco})(r)$ holds. For that, let $\hat{\mathsf{pco}} = \mathrm{FIX}(\lambda R : \mathrm{SATURATE}(\hat{h}, R))\big((\mathsf{so} \cup \hat{\mathsf{wr}})^+\big)$ be the partial commit order implied by $\hat{h}$.
As $\mathsf{iso}(h)$ is composed of $\{\mathtt{SER}, \mathtt{SI}, \mathtt{PC}, \mathtt{RA}, \mathtt{RC}\}$ isolation levels and $P_{i-1} \triangleright_{t_2} P_i$, by Property 2 of Definition 10, it suffices to prove that whenever $\mathsf{v}(\mathsf{co})(t_2, r, x)$ holds, if $v \neq \mathsf{Conflict}$ then $v(\hat{\mathsf{pco}}_{t_2}^{P_i})(t_2, r, x)$ holds in $\hat{h}$, while if $v = \mathsf{Conflict}$, that there exists $t' \in T_{i-1}$ s.t. $v(\hat{\mathsf{pco}}_{t_2}^{P_i})(t', r, x)$ holds in $\hat{h}$; where $\hat{\mathsf{pco}}_{t_2}^{P_i}$ is obtained by applying Table 1 on $\hat{\mathsf{pco}}$. We analyze five different cases: – $\mathsf{iso}(\overline{h})(\mathsf{tr}(r)) = \mathtt{SER}$: In this case, $\mathsf{Serializability}(\mathsf{co})(t_2, r, x)$ holds in $\overline{h}$ if and only if $(t_2, \mathsf{tr}(r)) \in \mathsf{co}$. As $\hat{\mathsf{pco}}_{t_2}^{P_i}$ totally orders $t_2$ and every other transaction in $T$ and $\hat{\mathsf{pco}}_{t_2}^{P_i} \subseteq \mathsf{co}$, we deduce that $(t_2, \mathsf{tr}(r)) \in \hat{\mathsf{pco}}_{t_2}^{P_i}$. Hence, $\mathsf{Serializability}(\hat{\mathsf{pco}}_{t_2}^{P_i})(t_2, r, x)$ holds in $\hat{h}$.
– $\mathsf{iso}(\overline{h})(\mathsf{tr}(r)) = \mathtt{SI}$: Two disjoint sub-cases arise: $\mathsf{Conflict}(\mathsf{co})(t_2, r, x)$ holds in $\overline{h}$: This happens if and only if there exists a transaction $t_3$ and a key $y \in \mathsf{Keys}$ s.t. $t_3$ writes $y$, $\mathsf{tr}(r)$ writes $y$, $(t_2, t_3) \in \mathsf{co}^*$ and $(t_3, \mathsf{tr}(r)) \in \mathsf{co}$. Let $j$ be the index s.t. $t_3 = t_j$. Then, as $\hat{\mathsf{pco}}_{t_3}^{P_j}$ totally orders $t_3$ and every other transaction and $\hat{\mathsf{pco}}_{t_3}^{P_j} \subseteq \mathsf{co}$, $(t_2, t_3) \in (\hat{\mathsf{pco}}_{t_3}^{P_j})^*$ and $(t_3, \mathsf{tr}(r)) \in \hat{\mathsf{pco}}_{t_3}^{P_j}$. Thus, $\mathsf{Conflict}(\hat{\mathsf{pco}}_{t_3}^{P_j})(t_2, r, x)$ holds in $\hat{h}$. $\mathsf{Prefix}(\mathsf{co})(t_2, r, x)$ holds in $\overline{h}$ but $\mathsf{Conflict}(\mathsf{co})(t_2, r, x)$ does not: We observe that $\mathsf{Prefix}(\mathsf{co})(t_2, r, x)$ holds in $\overline{h}$ if there exists a transaction $t_3$ s.t. $(t_2, t_3) \in \mathsf{co}^*$ and $(t_3, \mathsf{tr}(r)) \in \mathsf{so} \cup \overline{\mathsf{wr}}$.
If $(t_3, \mathsf{tr}(r)) \in \overline{\mathsf{wr}} \setminus (\mathsf{so} \cup \hat{\mathsf{wr}})$, by Equation (10) there exist $y \in \mathsf{Keys}$ and $v \in \mathsf{vis}(\mathtt{SI})$ s.t. $v(\mathsf{co})(t_3, r, y)$ holds in $\hat{h}$. Note that $v \neq \mathsf{Conflict}$ as otherwise $\mathsf{Conflict}(\mathsf{co})(t_2, r, x)$ would hold in $\overline{h}$. Hence, $v = \mathsf{Prefix}$ and by transitivity of co, we conclude that $\mathsf{Prefix}(\mathsf{co})(t_2, r, x)$ holds in $\hat{h}$. As $\hat{\mathsf{pco}}_{t_2}^{P_i}$ totally orders $t_2$ with respect to every other transaction in $T$ and $\hat{\mathsf{pco}}_{t_2}^{P_i} \subseteq \mathsf{co}$, we conclude that $\mathsf{Prefix}(\hat{\mathsf{pco}}_{t_2}^{P_i})(t_2, r, x)$ holds in $\hat{h}$. – $\mathsf{iso}(\overline{h})(\mathsf{tr}(r)) = \mathtt{PC}$: In this case, $\mathsf{Prefix}(\mathsf{co})(t_2, r, x)$ holds in $\overline{h}$ if and only if there exists a transaction $t_3$ s.t. $(t_2, t_3) \in \mathsf{co}^*$ and $(t_3, \mathsf{tr}(r)) \in \mathsf{so} \cup \overline{\mathsf{wr}}$. If $(t_3, \mathsf{tr}(r)) \in \overline{\mathsf{wr}} \setminus (\mathsf{so} \cup \hat{\mathsf{wr}})$, by Equation (10) there exist $y \in \mathsf{Keys}$ and $v \in \mathsf{vis}(\mathtt{SI})$ s.t. $v(\mathsf{co})(t_3, r, y)$ holds in $\hat{h}$.
Hence, by transitivity of co, we conclude that $\mathsf{Prefix}(\mathsf{co})(t_2, r, x)$ holds in $\hat{h}$. As $\hat{\mathsf{pco}}_{t_2}^{P_i}$ totally orders $t_2$ with respect to every other transaction in $T$ and $\hat{\mathsf{pco}}_{t_2}^{P_i} \subseteq \mathsf{co}$, we conclude that $\mathsf{Prefix}(\hat{\mathsf{pco}}_{t_2}^{P_i})(t_2, r, x)$ holds in $\hat{h}$. – $\mathsf{iso}(\overline{h})(\mathsf{tr}(r)) = \mathtt{RA}$: In this case, $\mathsf{Read\ Atomic}(\mathsf{co})(t_2, r, x)$ holds in $\overline{h}$ if and only if $(t_2, \mathsf{tr}(r)) \in \mathsf{so} \cup \overline{\mathsf{wr}}$. We observe that by Equation (10), if $(t_2, \mathsf{tr}(r)) \in \overline{\mathsf{wr}} \setminus (\mathsf{so} \cup \hat{\mathsf{wr}})$, then $t_2 = t_x^r$ and $\mathsf{Read\ Atomic}(\hat{\mathsf{pco}}_{t_2}^{P_i})(t_2, r, x)$ holds in $\hat{h}$. Hence, $(t_2, \mathsf{tr}(r)) \in \mathsf{so} \cup \hat{\mathsf{wr}}$; which is a contradiction. Thus, as $(t_2, \mathsf{tr}(r)) \in \mathsf{so} \cup \hat{\mathsf{wr}}$, $\mathsf{Read\ Atomic}(\hat{\mathsf{pco}}_{t_2}^{P_i})(t_2, r, x)$ holds in $\hat{h}$.
– $\mathsf{iso}(\overline{h})(\mathsf{tr}(r)) = \mathtt{RC}$: Similarly to the previous case, we observe that the formula $\mathsf{Read\ Committed}(\mathsf{co})(t_2, r, x)$ holds in $\overline{h}$ iff $(t_2, \mathsf{tr}(r)) \in (\mathsf{so} \cup \overline{\mathsf{wr}}); \mathsf{po}^*$. We observe that by Equation (10), if $(t_2, \mathsf{tr}(r)) \in \overline{\mathsf{wr}} \setminus (\mathsf{so} \cup \hat{\mathsf{wr}}); \mathsf{po}^*$, then $t_2 = t_x^r$ and $\mathsf{Read\ Committed}(\hat{\mathsf{pco}}_{t_2}^{P_i})(t_2, r, x)$ holds in $\hat{h}$. Therefore, $(t_2, \mathsf{tr}(r)) \in (\mathsf{so} \cup \hat{\mathsf{wr}}); \mathsf{po}^*$; which is a contradiction. Thus, as $(t_2, \mathsf{tr}(r)) \in (\mathsf{so} \cup \hat{\mathsf{wr}}); \mathsf{po}^*$, $\mathsf{Read\ Committed}(\hat{\mathsf{pco}}_{t_2}^{P_i})(t_2, r, x)$ holds in $\hat{h}$.

# B.5 Proof of Theorem 6

Theorem 6. For every client history $h$ whose isolation configuration is composed of $\{\mathtt{SER}, \mathtt{SI}, \mathtt{PC}, \mathtt{RA}, \mathtt{RC}\}$ isolation levels, Algorithm 3 runs in $\mathcal{O}(|h|^{\#\mathsf{conf}(h)+\mathsf{width}(h)+9} \cdot \mathsf{width}(h)^{|\mathsf{Keys}|})$. Moreover, if no transaction employs the SI isolation level, Algorithm 3 runs in $\mathcal{O}(|h|^{\#\mathsf{conf}(h)+\mathsf{width}(h)+8})$.

The proof of Theorem 6 is split in two Lemmas: Lemma 34 analyzes the complexity of Algorithm 4, while Lemma 35 relies on the previous result to conclude the complexity of Algorithm 3.

Lemma 31. Let $h = (T, \mathsf{so}, \mathsf{wr})$ be a history, $P = (T_P, M_P)$ be a consistent prefix of $h$ and $t \in T \setminus T_P$ be a transaction.
Algorithm 5 returns true if and only if $P \triangleright_t (P \cup \{t\})$.

Proof. Clearly, $P \cup \{t\}$ is an extension of $P$. isConsistentExtension$(h, P, t)$ returns true if and only if the conditions at lines 3 and 5 in Algorithm 5 hold. This is equivalent to satisfying Properties 1 and 2 of Definition 10, respectively. By Definition 10, this is equivalent to $P \triangleright_t (P \cup \{t\})$. □

Lemma 32. Let $h = (T, \mathsf{so}, \mathsf{wr})$ be a history and $k \in \mathbb{N}$ be a bound on $\mathsf{iso}(h)$. For any consistent prefix $P = (T_P, M_P)$ of $h$ and any transaction $t \in T \setminus T_P$, Algorithm 5 runs in $\mathcal{O}(|h|^{k+3})$.

Proof. We analyze the cost of Algorithm 5. First, as $\mathsf{pco} \subseteq T \times T$, by Lemma 7, line 2 runs in $\mathcal{O}(|h|^2 \cdot |h|^{k+1})$. Next, the condition at line 3 can be checked in $\mathcal{O}(|T|)$. Finally, the condition at line 5 can be checked in $\mathcal{O}(|T| \cdot k \cdot U)$; where $U$ is an upper bound on the complexity of checking $\mathsf{vp}_v^P(t, r)$. With the aid of Lemma 6, we deduce that $U \in \mathcal{O}(|h|^{k-2})$. Altogether, we conclude that Algorithm 5 runs in $\mathcal{O}(|h|^{k+3})$. □

Lemma 33. Let $h = (T, \mathsf{so}, \mathsf{wr})$ be a client history. If $\mathsf{iso}(h)$ is composed of SER, SI, PC, RA, RC isolation levels, then 5 is a bound on $\mathsf{iso}(h)$. Moreover, if no transaction has SI as isolation level, 4 is a bound on $\mathsf{iso}(h)$.

Proof. Let $h$ be a history as described in the hypothesis. First, all isolation levels in the set $\{\mathtt{SER}, \mathtt{SI}, \mathtt{PC}, \mathtt{RA}, \mathtt{RC}\}$ employ at most two axioms.
Moreover, every axiom described employs at most 5 quantifiers: three universal quantifiers and at most two existential quantifiers. Hence, 5 is a bound on $\mathsf{iso}(h)$. Note that Conflict is the only axiom employing two existential quantifiers; so if no transaction employs SI, 4 bounds $\mathsf{iso}(h)$. □

Lemma 34. Let $h = (T, \mathsf{so}, \mathsf{wr})$ be a client history whose isolation configuration is composed of $\{\mathtt{SER}, \mathtt{SI}, \mathtt{PC}, \mathtt{RA}, \mathtt{RC}\}$ isolation levels. Algorithm 4 runs in $\mathcal{O}(|h|^{\mathsf{width}(h)+9} \cdot \mathsf{width}(h)^{|\mathsf{Keys}|})$. Moreover, if no transaction has SI as isolation level, Algorithm 4 runs in $\mathcal{O}(|h|^{\mathsf{width}(h)+8})$.

Proof. For proving the result, we focus only on prefixes that are computable by Algorithm 4. Let $h = (T, \mathsf{so}, \mathsf{wr})$ be a history. A prefix $P$ of $h$ is computable if either $P = \emptyset$ or there exist a transaction $t$ and a prefix $P'$ s.t. $P = P' \cup \{t\}$ and $P'$ is computable. Intuitively, computable prefixes represent recursive calls of Algorithm 4 when employed by Algorithm 3. Indeed, Algorithm 3 only employs Algorithm 4 at lines 7 and 11. In both cases, $P' = \emptyset$ is the initial call to Algorithm 4. Moreover, the condition at line 3 justifies the recursive definition. On one hand, we observe that any call to Algorithm 4 is associated to a computable prefix; on the other hand, Algorithm 4 does not explore two equivalent computable prefixes thanks to the global variable seen (line 4).
Therefore, Algorithm 4 runs in $\mathcal{O}(N \cdot U)$; where $N$ is the number of distinct equivalence classes of prefixes of $h$ and $U$ is an upper bound on the running time of Algorithm 4 on a fixed prefix without doing any recursive call. We first compute an upper bound on $N$. For any computable prefix $P$, we can deduce by induction on the length of $P$ that there exist a collection of computable prefixes of $h$, $P_i = (T_i, M_i)$, and transactions $t_i \in T_P$ s.t. $P_{|T_P|} = P$, $P_i = P_{i-1} \cup \{t_i\}$ and $P_{i-1} \triangleright_{t_i} P_i$; where $P_0 = \emptyset$ and $0 < i \le |T_P|$. The base case is immediate, as $|T_P| = 0$ implies that $T_P = \emptyset$, while the inductive step can be simply obtained by applying the recursive definition of computable prefix. Let $P = (T_P, M_P)$ be in what follows a computable prefix of $h$. We observe that both $T_P$ and $M_P$ are determined by its so-maximal transactions. Let $t, t' \in T$ be a pair of transactions s.t. $(t, t') \in \mathsf{so}$ and $t' \in T_P$. As $t' \in T_P$, there must exist an index $i$, $1 \le i \le |T_P|$, s.t. $P_i = P_{i-1} \cup \{t'\}$. Therefore, as $P_{i-1} \triangleright_{t'} P_i$, $t \in T_{i-1} \subseteq T_P$. In particular, if $t'$ is a so-maximal transaction in $T_P$, all its so-predecessors are also contained in $T_P$; and hence, $T_P$ can be characterized by its so-maximal transactions.
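The counting argument behind this characterization can be illustrated with a small Python sketch (names are hypothetical, not taken from the paper's algorithms): a so-downward-closed transaction set is determined by its so-maximal transactions, of which there are at most width(h) (one per session), so enumerating choices of maximal transactions enumerates all such sets and yields the $\mathcal{O}(|T|^{\mathsf{width}(h)})$ bound on their number.

```python
from itertools import combinations

def down_closure(maximals, so_preds):
    # Downward closure of a set of transactions under the session order so.
    closed, stack = set(maximals), list(maximals)
    while stack:
        t = stack.pop()
        for p in so_preds.get(t, ()):
            if p not in closed:
                closed.add(p)
                stack.append(p)
    return frozenset(closed)

def so_closed_prefix_sets(transactions, so_preds, width):
    # Every so-downward-closed set is the closure of its so-maximal
    # transactions; there are at most `width` of them (one per session).
    prefixes = set()
    for k in range(width + 1):
        for maximals in combinations(transactions, k):
            prefixes.add(down_closure(maximals, so_preds))
    return prefixes

# Toy history: two sessions, t1 -> t2 and t3 -> t4 in so.
so_preds = {"t2": ["t1"], "t4": ["t3"]}
sets = so_closed_prefix_sets(["t1", "t2", "t3", "t4"], so_preds, width=2)
# Three prefixes per chain give 3 * 3 = 9 downward-closed sets, within 4^2 = 16.
```

On this toy instance the 9 enumerated sets are exactly the products of per-session prefixes, matching the bound $|T|^{\mathsf{width}(h)} = 16$.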
Moreover, by induction on the length of $P$ we can prove that for every key $x$, $M_P(x)$ is a so-maximal transaction: the base case, $|T_P| = 0$, is immediate, while the inductive step is obtained by the definition of $P \cup \{t''\}$, $t'' \notin T_P$. Hence, the number of computable prefixes of a history is in $\mathcal{O}(|T|^{\mathsf{width}(h)} \cdot \mathsf{width}(h)^{|\mathsf{Keys}|})$. Thus, $N \in \mathcal{O}(|h|^{\mathsf{width}(h)} \cdot \mathsf{width}(h)^{|\mathsf{Keys}|})$. Moreover, if no transaction employs SI as isolation level, prefixes with identical transaction sets coincide. Hence, in such case, $N \in \mathcal{O}(|h|^{\mathsf{width}(h)})$. We conclude the proof by bounding $U$. If $|T_P| = |T|$, Algorithm 4 runs in $\mathcal{O}(1)$; so we can assume without loss of generality that $|T_P| \neq |T|$. In such case, $U$ represents the cost of executing lines 3-7 in Algorithm 4. Thus, $U \in \mathcal{O}((|T| - |T_P|) \cdot V)$; where $V$ is the cost of checking $P \triangleright_t (P \cup \{t\})$ for a transaction $t \in T \setminus T_P$. By Lemma 31, Algorithm 5 can check if $P \triangleright_t (P \cup \{t\})$ and, thanks to Lemma 32, Algorithm 5 runs in $\mathcal{O}(|h|^{k+3})$; where $k$ is a bound on $\mathsf{iso}(h)$. Thus, $U \in \mathcal{O}(|h|^{k+4})$. Thanks to Lemma 33, we conclude that Algorithm 4 runs in $\mathcal{O}(|h|^{\mathsf{width}(h)+9} \cdot \mathsf{width}(h)^{|\mathsf{Keys}|})$ and, if no transaction employs SI as isolation level, that it runs in $\mathcal{O}(|h|^{\mathsf{width}(h)+8})$. □

Lemma 35.
Let $h = (T, \mathsf{so}, \mathsf{wr})$ be a client history whose isolation configuration is composed of $\{\mathtt{SER}, \mathtt{SI}, \mathtt{PC}, \mathtt{RA}, \mathtt{RC}\}$ isolation levels. Algorithm 3 runs in $\mathcal{O}(|h|^{\#\mathsf{conf}(h)+\mathsf{width}(h)+9} \cdot \mathsf{width}(h)^{|\mathsf{Keys}|})$. Moreover, if no transaction has SI as isolation level, Algorithm 3 runs in $\mathcal{O}(|h|^{\#\mathsf{conf}(h)+\mathsf{width}(h)+8})$.

Proof. Let $h = (T, \mathsf{so}, \mathsf{wr})$ be a history satisfying the hypothesis of the Lemma. We decompose our analysis in two parts: a first one where we analyze the complexity of executing lines 2-4, and a second one where we analyze the complexity of executing lines 5-12. We observe that by Lemma 33, 5 is a bound on $\mathsf{iso}(h)$. In line 2, Algorithm 3 computes pco. On one hand, computing $(\mathsf{so} \cup \mathsf{wr})^+$ is in $\mathcal{O}(|T|^3)$. On the other hand, as $\mathsf{pco} \subseteq T \times T$ and by Lemma 7, executing saturate$(h, (\mathsf{so} \cup \mathsf{wr})^+)$ is in $\mathcal{O}(|h|^6)$; we deduce that computing pco after computing $(\mathsf{so} \cup \mathsf{wr})^+$ is in $\mathcal{O}(|h|^8)$. In line 3, Algorithm 3 computes $E_h$. As wr is acyclic, for a given key $x$ and transaction $t$, computing $\mathtt{value}_{\mathsf{wr}}(t, x)$ is in $\mathcal{O}(|T|)$. Therefore, computing $1_x^r(\mathsf{pco})$ is in $\mathcal{O}(|T|)$, as we assume that for every $r \in \mathsf{Rows}$, computing $\mathsf{WHERE}(r)(v)$ is in $\mathcal{O}(1)$. Thus, computing $E_h$ is in $\mathcal{O}(|h|^3)$. Finally, in line 4, Algorithm 3 computes $X_h$.
Note that $X_h$ can be seen as the cartesian product $\mathsf{X}_{(r,x) \in E_h}\, 0_x^r(\mathsf{pco})$. Computing each $0_x^r(\mathsf{pco})$ set is in $\mathcal{O}(|T|)$; so computing all of them is in $\mathcal{O}(|T| \cdot |E_h|)$. As each set $0_x^r(\mathsf{pco})$ is a subset of $T$, applying the cartesian-product definition of $X_h$ we can compute $X_h$ in $\mathcal{O}(|T|^{|E_h|})$. Therefore, as $|E_h| = \#\mathsf{conf}(h)$, we conclude that computing $X_h$ is in $\mathcal{O}(|h| \cdot \#\mathsf{conf}(h) + |h|^{\#\mathsf{conf}(h)})$ and that $|X_h| \in \mathcal{O}(|h|^{\#\mathsf{conf}(h)})$. Altogether, as $\#\mathsf{conf}(h) \le |T|^2$, we deduce that computing lines 2-4 of Algorithm 3 is in $\mathcal{O}(|h|^8 + |h|^{\#\mathsf{conf}(h)})$. Next, we analyze the complexity of executing lines 5-12. Four disjoint cases arise, one per boolean condition in Algorithm 3. The first one, checking if pco is cyclic (line 5), is in $\mathcal{O}(|h|)$. The second one, checking if $\exists (r, x) \in E_h$ s.t. $0_x^r(\mathsf{pco}) = \emptyset$ (line 6), clearly runs in $\mathcal{O}(\#\mathsf{conf}(h) \cdot |h|)$. The third one, checking if $E_h = \emptyset$ and executing Algorithm 4, is in $\mathcal{O}(\#\mathsf{conf}(h) + |h|^{\mathsf{width}(h)+9} \cdot \mathsf{width}(h)^{|\mathsf{Keys}|})$ thanks to Lemma 34. Finally, we analyze the last case, computing an extension of $h$ for each mapping in $X_h$ and then executing Algorithm 3 (lines 8-12).
On one hand, computing each history is in $\mathcal{O}(|h|^3)$, as we require to define both $\mathsf{so} \subseteq T \times T$ and $\mathsf{wr} \subseteq \mathsf{Keys} \times T \times T$. On the other hand, as the size of each extension of $h$ is in $\mathcal{O}(|h|)$, executing Algorithm 3 for a given history is in $\mathcal{O}(|X_h| \cdot |h|^{\mathsf{width}(h)+9} \cdot \mathsf{width}(h)^{|\mathsf{Keys}|})$ thanks again to Lemma 34. Altogether, for each mapping $f \in X_h$, executing lines 10-11 is in $\mathcal{O}(|h|^{\mathsf{width}(h)+9} \cdot \mathsf{width}(h)^{|\mathsf{Keys}|})$. As $|X_h| \in \mathcal{O}(|h|^{\#\mathsf{conf}(h)})$, we conclude that executing this last case is in $\mathcal{O}(|h|^{\#\mathsf{conf}(h)+\mathsf{width}(h)+9} \cdot \mathsf{width}(h)^{|\mathsf{Keys}|})$. We then conclude that Algorithm 3 runs in $\mathcal{O}(|h|^{\#\mathsf{conf}(h)+\mathsf{width}(h)+9} \cdot \mathsf{width}(h)^{|\mathsf{Keys}|})$. Moreover, if no transaction employs SI as isolation level, Lemma 34 allows us to deduce that in such case Algorithm 3 runs in $\mathcal{O}(|h|^{\#\mathsf{conf}(h)+\mathsf{width}(h)+8})$. □
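The pco computation used throughout these proofs, $\mathsf{pco} = \mathrm{FIX}(\lambda R.\ \mathrm{SATURATE}(h, R))((\mathsf{so} \cup \mathsf{wr})^+)$, is a least fixpoint of a monotone saturation step interleaved with transitive closure. A generic Python sketch follows; the toy saturation rule is hypothetical and merely stands in for the paper's SATURATE procedure.

```python
def transitive_closure(rel):
    # Naive transitive closure of a binary relation given as a set of pairs.
    closure = set(rel)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

def fix_saturate(saturate, base):
    # Least fixpoint of R -> (saturate(R))^+ starting from base^+,
    # mirroring pco = FIX(lambda R: SATURATE(h, R))((so U wr)^+).
    current = transitive_closure(base)
    while True:
        nxt = transitive_closure(saturate(current))
        if nxt == current:
            return current
        current = nxt

# Toy saturation rule (hypothetical): once t1 is ordered before t3,
# additionally order t3 before t4.
def toy_saturate(rel):
    extra = {("t3", "t4")} if ("t1", "t3") in rel else set()
    return rel | extra

pco = fix_saturate(toy_saturate, {("t1", "t2"), ("t2", "t3")})
```

Because the saturation step is monotone, iterating it interleaved with transitive closure converges to the least relation closed under both operations, which is the order the consistency checks are evaluated against.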
Concurrent accesses to databases are typically grouped in transactions which define units of work that should be isolated from other concurrent computations and resilient to failures. Modern databases provide different levels of isolation for transactions that correspond to different trade-offs between consistency and throughput. Quite often, an application can use transactions with different isolation levels at the same time. In this work, we investigate the problem of testing isolation level implementations in databases, i.e., checking whether a given execution composed of multiple transactions adheres to the prescribed isolation level semantics. We particularly focus on transactions formed of SQL queries and the use of multiple isolation levels at the same time. We show that many restrictions of this problem are NP-complete and provide an algorithm which is exponential-time in the worst-case, polynomial-time in relevant cases, and practically efficient.
# 1 Introduction Software testing is a critical activity throughout the software development life cycle, playing a key role in early defect detection and compliance with requirements [14, 30]. In particular, testing driven by requirements is conducted to validate systems before their deployment [13]. This is especially critical for systems that must demonstrate functional safety, such as automotive systems. In such contexts, it is also common to specify test cases in natural language (following templates) to facilitate their analysis and traceability to requirements, before manually writing the corresponding test code. However, due to the increasing complexity of systems and their continuous change by large teams, test suites are hard to maintain, tend to grow, and are often redundant (i.e., some of the test cases are likely to detect the same faults), thus leading to waste of time and resources [37]. Further, in practice, testers have limited resources and time to test new system updates. To address this, given a test budget, test suite minimization (TSM) is proposed to prune test cases that are most likely to be redundant, while satisfying coverage criteria and maximizing fault detection [17, 41], thus significantly reducing testing cost and time while limiting the impact on system quality or compliance. While a number of approaches have been proposed for TSM [3, 10, 11, 18, 24, 26, 29, 32–34, 40, 42–44], most existing techniques rely on code and structural coverage criteria (e.g., statement or branch coverage). These techniques do not address the unique challenges of minimizing test suites driven by system requirements, with test cases often specified in natural language. Moreover, to the best of our knowledge, no existing solution is capable of simultaneously maintaining full requirement coverage and adhering to a fixed minimization budget, a common situation in industrial practice due to limited time and resources. 
To bridge this gap, we propose RTM (Requirements coverage-guided Test suite Minimization), a novel approach specifically designed for requirement-based testing. RTM aims to reduce test suite redundancy while ensuring full requirement coverage, making it particularly well-suited to meet the practical needs of our industrial partners, and critical systems that must demonstrate functional safety in general. RTM takes requirement-based test cases–composed of natural language test steps–as input and computes pairwise similarity between them. Specifically, we investigate three different preprocessing methods to normalize the test steps, followed by seven distinct text embedding techniques–both sentence-level and word-level–to transform the textual data into vector representations: Term Frequency–Inverse Document Frequency (TF-IDF), Universal Sentence Encoder (USE) [9], LongT5 [16], Amazon Titan Text Embedding V2 [5], Word2Vec [28], GloVe [35], FastText [8]. We then measure the similarity between test cases using three different distance metrics: cosine similarity and Euclidean distance for sentence-level embeddings, and Word Mover’s Distance (WMD) [21] for word-level embeddings. Based on the computed similarities, we employ a Genetic Algorithm (GA) [25] to search for the optimal test suite that minimizes similarity, adheres to a fixed budget, and ensures full requirement coverage. To help the GA efficiently find valid subsets, we devise three initialization strategies designed to generate an initial population consisting of subsets satisfying both coverage and budget constraints. RTM was evaluated on a real-world automotive dataset consisting of 736 system test cases across seven test runs, along with 54 requirements. The results demonstrate that RTM significantly outperforms two baseline techniques in terms of fault detection rate $( F D R )$ under various minimization budgets. Furthermore, we explore the impact of the redundancy level of the test suites on FDR. 
Our findings show that: $\textcircled{1}$ RTM consistently outperforms all baseline techniques across different redundancy levels; $\textcircled{2}$ the level of redundancy has a strong effect on the achieved FDR, for all TSM techniques; $\textcircled{3}$ especially at lower minimization budgets and for highly redundant test suites, there is still significant room for improvement between RTM and the theoretical maximum FDR.

In summary, our contributions include:

• We propose RTM, a TSM approach tailored for requirement-based testing, where test cases are specified in natural language. RTM ensures full requirement coverage while operating under a fixed minimization budget, addressing a practical constraint often overlooked in prior work. We validate RTM on automotive system requirements and test cases, and compare it with a number of baseline techniques for test suite minimization across varying minimization budgets, demonstrating its superior fault detection capability.
• We further investigate the impact of redundancy levels of test suites on the effectiveness of TSM, offering new insights into the relationship between test suite redundancy and TSM fault detection performance.
• To foster reproducibility and facilitate further research, we have open-sourced our replication package (including source code and experimental scripts), which is publicly available at https://anonymous.4open.science/r/RTM5C98/. Industrial requirements and test cases are confidential.

The rest of the paper is organized as follows: Section 2 defines the TSM problem and outlines the context of this study. Section 3 describes our approach, RTM. The study design is detailed in Section 4, followed by experimental results in Section 5. Section 6 discusses threats to validity. Section 7 reviews related work. Finally, Section 8 concludes the study with a discussion of future work.
# 2 Problem Definition: Test Suite Minimization in Automotive Requirements Testing

In this paper, we aim to minimize test suites derived from automotive system requirements. In this section, we first describe the format and structure of the test cases in our dataset, followed by a formal definition of the TSM problem in our specific context.

# 2.1 Test Cases based on Automotive Requirements

Like other critical systems, modern automotive software systems are expected to comply with functional safety standards such as ISO 26262 [39]. To ensure compliance, system engineers derive and maintain extensive test suites based on system requirements, which are executed to validate system behavior against expected requirements. Typically, test cases are specified in natural language, following templates, and then manually converted into test code and implemented by experts. The test cases in our dataset are collected from our industry partner and designed for testing automotive system requirements. As shown in Figure 1, these test cases are specified in natural language. Each test case consists of a sequence of diagnostic actions, structured into clearly labeled steps. These steps typically include setting up the preconditions for testing, creating the fault conditions, checking how the automotive system detects the issues through Diagnostic Trouble Codes (DTCs), and confirming the system properly clears the DTC. The detailed test actions involve reading and setting system variables, awaiting signal conditions, sending diagnostic requests, and checking system responses. As a result, the test cases exhibit the following characteristics:

• Structured and Precise: Test cases explicitly define preconditions, test steps, and expected outcomes.
• Domain-Specific Vocabulary: Test cases extensively utilize automotive-specific terminology, signals, variables, and abbreviations.
However, as the number of system requirements increases, the corresponding test suites can become excessively large, leading to dramatically increased testing costs. Moreover, as new requirements are added, test cases are often written without checking the existing ones. This can lead to functional overlap between test cases, thus increasing redundancy in the test suite. STEP 1 Set Global Preconditions Read variable Variable_A STEP 2 Set Valid Preconditions Set System variable Variable_ $\mathit { \Theta } _ { 1 } ~ = ~ 1$ Await Value Match Signal SIGNAL_A = 1 STEP 3 Create Fault Condition Set System variable Variable $\mathbf { \nabla } _ { - 2 } { \mathbf { \nabla } } = { \mathbf { \nabla } } 1$ STEP 4 Verify DTC Maturation Time Check maturation time (Expected: X ms) STEP 5 Check the DTC is Active Read variable Variable_A Send request PATH_TO_REQUEST_A Check expected diagnostic response Set System variable Variable_1 $= 0$ Await Value Match Signal Variable_1 = 0 STEP 6 Remove DTC Condition Set System variable Variable $\_ \mathrm { ~ 2 ~ } = \ 0$ STEP 7 Verify DTC Dematuration Time Read variable Variable_A Check dematuration time (Expected: X ms) Send request PATH_TO_REQUEST_A Check expected diagnostic response # 2.2 Test Suite Minimization Test suite minimization (TSM) aims to reduce the size of the test suite while ensuring important testing objectives such as requirement coverage, fault detection capability, and test case diversity [17, 41]. Given that test suite diversity is reported to be positively correlated with fault detection capability [18], minimizing similarity among test cases can enhance the fault detection capability of the minimized test suite. When applied properly, TSM can significantly reduce testing costs and time while minimizing its impact on system quality or compliance. Let $\mathcal { R } = \{ r _ { 1 } , r _ { 2 } , . . . 
, r_n\}$ be the set of system requirements for an automotive system, and let $\mathcal{T} = \{t_1, t_2, \dots, t_m\}$ be the corresponding test suite. Each test case $t_i \in \mathcal{T}$ covers a subset of requirements denoted by $Cover(t_i) \subseteq \mathcal{R}$. Note that in our dataset, each test case only covers one requirement, whereas each requirement can be covered by many test cases. This is because each test case is specifically designed to validate one scenario where the requirement is expected to hold. Moreover, covering all requirements is a must in automotive testing, to comply with functional safety standards [39], which is also strictly required by our industry partners. Let $Sim(t_i, t_j): \mathcal{T} \times \mathcal{T} \to [0, 1]$ denote the similarity between test cases $t_i$ and $t_j$. Given a test budget $\mathcal{M} \in (0, 1]$, the goal is to select a subset $\mathcal{T}' \subseteq \mathcal{T}$ such that:

• Coverage: $\bigcup_{t \in \mathcal{T}'} Cover(t) = \mathcal{R}$,

• Budget: $|\mathcal{T}'| = |\mathcal{T}| \times \mathcal{M}$,

• Diversity: the total pairwise similarity among test cases in $\mathcal{T}'$ is minimized.

Formally, this can be defined as the following optimization problem: Given: test suite $\mathcal{T}$, requirement set $\mathcal{R}$, budget $\mathcal{M} \in (0, 1]$, coverage function $Cover(t) \subseteq \mathcal{R}$ for each $t \in \mathcal{T}$, and similarity function $Sim(t_i, t_j)$.
Find: a subset $\mathcal{T}' \subseteq \mathcal{T}$ such that:

$$ \begin{array}{l} |\mathcal{T}'| = |\mathcal{T}| \times \mathcal{M}, \\ \bigcup_{t \in \mathcal{T}'} Cover(t) = \mathcal{R}, \\ \sum_{t_i, t_j \in \mathcal{T}', i \neq j} Sim(t_i, t_j) \text{ is minimized.} \end{array} $$

This problem extends the classical set cover problem, which is known to be NP-hard, by introducing two additional constraints: a strict minimization budget and a diversity objective. In particular, the selected test subset must maintain full requirement coverage under a fixed budget while minimizing the overall similarity among test cases. As a result, solving this problem exactly is computationally intractable for large-scale industrial systems. Therefore, practical heuristics or approximation strategies are typically employed in automotive domains to achieve efficient and effective test suite minimization, especially when additional factors, such as fault detection capability, execution cost, or coverage redundancy, are also considered.

# 3 Approach

This section introduces RTM, a TSM approach that satisfies the three criteria outlined in Section 2.2. It targets test cases derived from system requirements for validation purposes. In short, it leverages natural language processing (NLP) techniques and large language models (LLMs), as well as various distance functions, to compute test case similarity, and then employs evolutionary search to optimize minimization.
As illustrated in Figure 2, RTM first preprocesses test cases using three different strategies, then converts them into vector representations using a variety of text embedding techniques, including both sentence-level and word-level representations: TF-IDF, USE, LongT5, Amazon Titan Text Embedding V2, Word2Vec, GloVe, and FastText. Different distance functions are employed to compute the similarity values between test case embeddings, which are then used as guidance for the evolutionary search to find the optimal minimized test suite $\mathcal{T}'$ with diverse test cases. The primary objective of RTM is to find the optimal $\mathcal{T}'$ that maximizes fault detection capability by minimizing the similarity values between test cases, while reducing the testing cost (measured in terms of test suite size) and ensuring $100\%$ requirement coverage.

# 3.1 Test Case Preprocessing

Given that the test cases are expressed in natural language, different preprocessing strategies may affect the outcomes of the text embedding techniques. To investigate this, we implemented three levels of preprocessing:

• Preprocessing Method 1 ($PM1$) converts all characters to lowercase, removes extra white spaces between words, and eliminates line breaks (e.g., \r\n).

• Preprocessing Method 2 ($PM2$) extends $PM1$ by further stripping punctuation and performing tokenization and lemmatization.

• Preprocessing Method 3 ($PM3$) serves as a control setting, where no preprocessing is applied, and the raw test case content is used as-is.

Fig. 2. Overall Framework of RTM.

# 3.2 Test Case Representation

After the preprocessing step, we transform the test cases into vector representations using various embedding techniques. We employ both word-level and sentence-level embedding techniques to capture semantic information from test cases.
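As a minimal illustration of one sentence-level technique, a TF-IDF style weighting over test case texts can be sketched as follows. This is a simplified, self-contained sketch (whitespace tokenization, smoothed-free IDF) rather than the exact weighting we use; the toy corpus is illustrative only:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Map each document (test case text) to a dict of term -> TF-IDF weight."""
    tokenized = [doc.lower().split() for doc in docs]
    n = len(tokenized)
    # document frequency: in how many test cases each term appears
    df = Counter(t for tokens in tokenized for t in set(tokens))
    vectors = []
    for tokens in tokenized:
        tf = Counter(tokens)
        vectors.append({
            term: (cnt / len(tokens)) * (math.log(n / df[term]) + 1.0)
            for term, cnt in tf.items()
        })
    return vectors

# Toy test case texts (illustrative only)
docs = [
    "set system variable variable_1 = 1 await value match signal signal_a",
    "set system variable variable_2 = 1 check maturation time",
    "read variable variable_a send request path_to_request_a",
]
vecs = tfidf_vectors(docs)
```

Terms that occur in every test case (e.g., "variable" above) receive a lower weight than terms specific to one test case, which is exactly the discriminative behavior exploited in Section 5.1.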
For word-level embeddings, we use Word2Vec, GloVe, and FastText, each of which converts every word in the test case to a 300-dimensional vector. For sentence-level embeddings, we employ TF-IDF, USE, LongT5, and Amazon Titan Text Embedding V2, each of which converts the entire test case content to a fixed-length vector representation that captures its overall semantic meaning, with the dimensionality varying depending on the embedding technique used.

# 3.3 Similarity Measure

To assess the similarity between test case pairs, we utilize different distance metrics based on the type of embedding used. For word-level embeddings (i.e., Word2Vec, GloVe, and FastText), we use WMD [21], which measures the minimum cumulative distance required to align the words in one test case with those in another. WMD leverages the underlying word embedding space and is particularly effective in capturing semantic similarity when word order or exact word matches are less important [21]. For sentence-level embeddings (i.e., TF-IDF, USE, LongT5, and Amazon Titan Embedding V2), we apply both cosine similarity and Euclidean distance, which have been widely used in machine learning, information retrieval, and NLP tasks to measure similarity or dissimilarity between vectors [15, 19, 23, 38]. However, they operate differently and are suited for different use cases: cosine similarity measures the angle between two vectors while ignoring their magnitudes, whereas Euclidean distance measures the straight-line distance between two vectors. By applying different types of similarity metrics, we aim to explore how different vector representations and distance functions affect the ability of RTM to detect redundant test cases. This step constructs a similarity matrix by computing the pairwise similarity between test cases. It takes the vector representations of test case pairs as input and calculates the similarity scores that quantify their similarity.
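For the sentence-level case, the pairwise similarity matrix construction can be sketched as follows; the three-dimensional embeddings are illustrative stand-ins for real embedding outputs:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two equal-length vectors (magnitude-free)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def euclidean_distance(u, v):
    """Straight-line distance between two equal-length vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def pairwise_matrix(embeddings, metric=cosine_similarity):
    """Pairwise metric matrix over test case embeddings."""
    n = len(embeddings)
    return [[metric(embeddings[i], embeddings[j]) for j in range(n)]
            for i in range(n)]

# Toy sentence-level embeddings for three test cases (illustrative)
emb = [[1.0, 0.0, 1.0], [1.0, 0.1, 0.9], [0.0, 1.0, 0.0]]
sim = pairwise_matrix(emb)                            # cosine similarity
dist = pairwise_matrix(emb, metric=euclidean_distance)  # Euclidean distance
```

Note that the cosine matrix already expresses similarity, whereas the Euclidean matrix expresses dissimilarity and is normalized before being used by the search.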
# 3.4 Evolutionary Search for Test Suite Minimization

3.4.1 Problem Formalization. Given the test suite before minimization $\mathcal{T}$, the minimization problem is defined as a fixed-size subset selection with $100\%$ requirement coverage as the constraint. Each solution (i.e., subset $\mathcal{T}'$) is represented as a binary vector:

$$ \mathbf{x} = [x_1, x_2, \dots, x_m], \quad x_i = \begin{cases} 1, & \text{if test case } i \text{ is selected} \\ 0, & \text{otherwise} \end{cases} $$

where $m$ denotes the total number of test cases in the test suite before minimization (i.e., $\mathcal{T}$). Given the minimization budget $\mathcal{M} \in (0, 1]$, the number of test cases in the subset $\mathcal{T}'$ is constrained by:

$$ \sum_{i=1}^{m} x_i = \mathrm{round}(m \cdot \mathcal{M}) $$

where $\mathrm{round}(\cdot)$ denotes rounding to the nearest integer. Moreover, the test cases in the subset must cover all the requirements:

$$ \bigcup_{\{i \mid x_i = 1\}} Cover(i) = \mathcal{R} $$

where $Cover(i)$ is the requirement covered by test case $i$ and $\mathcal{R}$ is the set of requirements covered by the test suite before minimization (i.e., $\mathcal{T}$).
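Under this formalization, checking whether a candidate binary vector is a valid solution reduces to the two constraints above; a sketch with an illustrative coverage mapping:

```python
def is_valid(x, cover, requirements, budget):
    """Check the two hard constraints on a candidate solution.

    x            -- binary selection vector over the m test cases
    cover        -- cover[i] is the requirement covered by test case i
    requirements -- the full requirement set R
    budget       -- minimization budget M in (0, 1]
    """
    m = len(x)
    # Budget constraint: exactly round(m * M) test cases selected
    if sum(x) != round(m * budget):
        return False
    # Coverage constraint: selected tests must cover every requirement
    covered = {cover[i] for i in range(m) if x[i] == 1}
    return covered == set(requirements)

# Toy instance: 4 test cases over 2 requirements, 50% budget
cover = ["R1", "R1", "R2", "R2"]
print(is_valid([1, 0, 1, 0], cover, {"R1", "R2"}, 0.5))  # both constraints met
print(is_valid([1, 1, 0, 0], cover, {"R1", "R2"}, 0.5))  # R2 left uncovered
```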
Given that the objective of the search process is to minimize the similarity values between the pairs of selected test cases, the fitness function is formulated as:

$$ Fitness = \frac{\sum_{t_i \in \mathcal{T}'_n} \left( \max_{t_j \in \mathcal{T}'_n, \, j \neq i} Norm\_Sim(t_i, t_j) \right)^2}{n} $$

where $\mathcal{T}'_n$ denotes the subset containing $n$ test cases, and $Norm\_Sim(t_i, t_j)$ represents the normalized similarity score between the test case pair $(t_i, t_j)$.

3.4.2 Initialization strategies. Our minimization problem is defined with two hard constraints: (1) the percentage of selected test cases in the subset should satisfy the minimization budget, and (2) the subset should preserve $100\%$ requirement coverage. In order to efficiently guide the search to find valid subsets, we devise three different strategies to initialize a set of valid subsets for the GA to use as a base for further search. Note that in our dataset, the relationship between test cases and requirements is many-to-one. Consequently, one limitation of the minimization problem with full requirement coverage is that the pre-defined number of selected test cases must not be smaller than the total number of requirements.

• Strategy 1. Iterative Selection: This strategy iteratively selects one test case at random under each requirement, continuing this process until the total number of selected test cases reaches the minimization budget.

• Strategy 2. Initial Requirement + Random

– Step 1: Randomly select one test case for each requirement.

– Step 2: Randomly select the remaining test cases without considering requirements until the total number of selected test cases reaches the minimization budget.

• Strategy 3.
Proportional Selection

– Step 1: Randomly select a proportion of test cases (approximately) equal to the minimization budget under each requirement.

– Step 2: If the total number of selected test cases across all requirements is less than or greater than the minimization budget, iteratively add or remove selected test cases per requirement, ensuring every requirement is still covered, until the desired minimization budget is reached.

3.4.3 Evolutionary Search. After initializing a population of valid subsets using the strategies described in Section 3.4.2, we apply a GA to iteratively evolve the population toward solutions that minimize redundancy while preserving full requirement coverage and complying with the minimization budget. Each individual in the population is encoded as a binary vector, as defined in Equation 4. The fitness of an individual is computed using the objective function described in Section 3.4.1. At each generation, we perform the following operations:

• Selection: We use a binary tournament selection operator that prioritizes valid subsets (those that cover all requirements) over invalid ones. When both candidate subsets are valid, the one with the lower fitness value is selected.

• Crossover: We use a customized crossover operator [7] that ensures the offspring maintains a fixed subset size. It first includes all test cases shared by both parent subsets, then adds test cases present in either parent subset until the minimization budget is met.

• Mutation: We use an inversion mutation operator [7] that randomly selects a segment of the subset and reverses its order. Since each subset is represented as a binary vector, this approach preserves the subset size.

Note that crossover and mutation may produce invalid individuals (i.e., those that violate the coverage constraint). To address this, we experimented with incorporating a repair operator during the search process.
The repair operator randomly adds one test case for each uncovered requirement and removes test cases under requirements that are covered by more than one test case, thereby reestablishing full coverage while maintaining the budget constraint for each solution. However, the results obtained using the repair operator were slightly worse than those without it. This can be attributed to the fact that the repair operator reduces the diversity of the solutions. Therefore, we decided not to use the repair operator in the subsequent experiments. We follow the published guidelines used in prior work [34] for setting GA hyperparameters. Specifically, we use a population size of 100, a mutation rate of 0.01, and a crossover rate of 0.90. For each generation, the size of each solution (i.e., subset) remains fixed and equal to the minimization budget. The evolutionary process is repeated until convergence, which is defined as the improvement in the fitness score being less than 0.0025 across generations. The final output is the best subset identified during the search, which satisfies all constraints and minimizes the redundancy among selected test cases, as indicated by the lowest fitness value.

# 4 Study Design

# 4.1 Research Questions

RQ1: How does RTM perform regarding TSM under different configurations? This RQ aims to identify the optimal configuration of RTM that achieves the best performance (i.e., FDR). Specifically, an RTM configuration entails the selection of a preprocessing method (Section 3.1), an embedding technique (Section 3.2), a similarity measure (Section 3.3), and an initialization strategy (Section 3.4.2).

RQ2: How does RTM compare to state-of-the-art TSM techniques? This RQ aims to compare the performance of RTM with two baseline approaches, namely Random Minimization and FAST-R [11], under various minimization budgets (i.e., $10\%, 20\%, \dots, 90\%$).
RQ3: How does the redundancy level of the test suite impact the effectiveness of TSM approaches? The performance of a TSM technique is influenced not only by the selected configuration but also by the characteristics of the test suites. This RQ aims to investigate and quantify the impact of the test suite redundancy level on the effectiveness (i.e., FDR) of TSM.

# 4.2 Dataset

We evaluated RTM and compared it to baselines on a dataset collected from our industry partner, an automotive system company. There are 736 test cases across seven test runs. As shown in Figure 1, the test cases consist of test steps that validate the behavior of the automotive system related to a specific DTC. Each test case covers one requirement, with a total of 54 requirements covered, and may detect multiple faults, with a total of 220 unique faults detected. Note that we identified the faults, which are the root causes of the failing test cases (e.g., mismatched parameter values, connection errors of specific components), using the test execution logs under the guidance of the system engineers. For each test case, we first extracted the failure messages from the test execution logs and then grouped the ones that were triggered by the same root cause into faults.

Test suites for different redundancy levels. In this dataset, one fault can be detected by multiple test cases, indicating a degree of redundancy in the test suite. We define the redundancy level for the test suite as follows:

$$ RL = \frac{\sum_{i=1}^{m} t_{f_i}}{F_{unique}} $$

where $m$ is the total number of test cases in this test suite, $t_{f_i}$ is the number of faults detected by the test case $t_i$, and $F_{unique}$ is the number of unique faults detected by this test suite. Consider, for example, a test suite $\mathcal{T}$ with three test cases that detect 3, 5, and 3 faults, respectively.
The total number of unique faults detected by $\mathcal{T}$ is 4, and the redundancy level of $\mathcal{T}$ is therefore $(3 + 5 + 3)/4 = 2.75$. Similarly, the redundancy level of our dataset is $2610/220 = 11.86$. In order to assess the impact of the test suite redundancy level on the performance of RTM, we generate 10 diverse test suites for each redundancy level using Integer Linear Programming (ILP) and GA. This process starts by utilizing ILP to search for 100 test suites with different test suite sizes while satisfying three constraints: (1) $100\%$ requirement coverage, (2) $100\%$ fault coverage, and (3) a specific test redundancy level. We set 15 different redundancy levels, ranging from 4.5 to 11.5, with an interval of 0.5. Note that there is no test suite that satisfies a redundancy level below 4.5 under constraints (1) and (2), and the highest redundancy level is 11.86, which is observed in the original test suite. Then, to enhance the diversity of the dataset for experiments, for the 100 test suites under each redundancy level, we employ GA to search for the 10 most diverse test suites that minimize the sum of the pairwise set similarity between test suites, defined as the Jaccard Similarity between two test suites:

$$ JaccardSimilarity = \frac{|T_i \cap T_j|}{|T_i \cup T_j|} $$

where $|T_i \cap T_j|$ denotes the number of test cases in the intersection of test suites $T_i$ and $T_j$, and $|T_i \cup T_j|$ denotes the number of test cases in the union of test suites $T_i$ and $T_j$.

# 4.3 Text Representations

Inspired by prior work on vector representations of requirements [1, 45] and test cases [40], we employ a variety of text embedding techniques to convert test cases into vector representations.
Specifically, we use TF-IDF, USE, Amazon Titan Text Embedding V2, and LongT5 to generate sentence-level embeddings, while Word2Vec, GloVe, and FastText are used to obtain word-level embeddings. Due to limitations in computational resources, data privacy constraints, and input token length restrictions, we were unable to experiment with models such as BERT [12] and Sentence-BERT [36].

TF-IDF. TF-IDF has demonstrated promising performance in identifying similar test cases expressed in natural language [40]. We also employed TF-IDF to extract numerical vector representations of the test cases. For each word, we calculated its importance within a single test case relative to its occurrence across all other test cases.

USE (Universal Sentence Encoder) [9]. USE encodes text into high-dimensional vectors, capturing semantic information at the sentence level, making it particularly effective for tasks such as semantic similarity, text classification, clustering, and information retrieval [9].

LongT5 [16]. LongT5 is an extension of the T5 architecture designed to efficiently handle long sequences [16]. It incorporates sparse attention and reversible layers to reduce memory usage, making it well-suited for tasks like long-document summarization and retrieval-augmented generation. We selected the local attention mechanism for LongT5, as it yielded a higher FDR for TSM compared to the global attention mechanism.

Amazon Titan Text Embedding V2 [5]. For LLM-based embedding, we use Amazon Titan Text Embedding V2, which is provided by our industrial partners. The model takes test case steps as input and produces a 1,024-dimensional vector representation for each test case.

Word2Vec [28]. Word2Vec has been shown to be effective for identifying similar test cases [40]. In this study, we used the Continuous Bag-of-Words (CBOW) architecture with a window size of 10 as the configuration for Word2Vec, as this setup yielded the highest FDR result for TSM.
This was determined through experiments in which we evaluated both CBOW and Skip-Gram architectures with window sizes ranging from 2 to 10 in increments of 2. We trained Word2Vec models on the entire test case corpus and transformed each word into a 300-dimensional numerical vector.

GloVe [35]. GloVe is an unsupervised learning algorithm that leverages global word-word co-occurrence statistics from a corpus to learn word embeddings [35]. Its core idea is that the ratio of co-occurrence probabilities can capture meaningful linear substructures within the word vector space. We selected a window size of 2 for GloVe according to its performance in the hyperparameter tuning experiments conducted with window sizes ranging from 2 to 10 in increments of 2. As with Word2Vec, we used GloVe to generate a 300-dimensional numerical vector representation for each word in the test cases.

FastText [8]. FastText is a word embedding model proposed by Bojanowski et al. [8], which extends Word2Vec by incorporating subword information. Each word is represented as a combination of character n-grams, enabling the model to better handle rare and out-of-vocabulary words, and to generalize more effectively, particularly in morphologically rich or noisy text. We used the CBOW architecture with a window size of 8 as the configuration for FastText, according to the results of experiments in which we evaluated both CBOW and Skip-Gram architectures with window sizes ranging from 2 to 10 in increments of 2. We used FastText to convert each word in the test cases into a 300-dimensional numerical vector.

# 4.4 Baselines

We compare RTM to Random Minimization and FAST-R [11].

4.4.1 Random Minimization. Since the goal of TSM is to reduce the number of test cases and, consequently, lower testing costs, the most straightforward approach is to randomly remove test cases to meet the minimization budget [2].
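Such a random baseline can be sketched in a few lines; the coverage-constrained variant below mirrors the idea of first picking one test case per requirement and then filling the rest at random (identifiers are illustrative):

```python
import random

def random_minimization(test_ids, budget, cover=None, seed=0):
    """Randomly select round(m * budget) test cases.

    If `cover` (test id -> requirement) is given, first pick one test
    case per requirement, then fill the remainder at random.
    """
    rng = random.Random(seed)
    m = len(test_ids)
    k = round(m * budget)
    if cover is None:
        # Unconstrained variant: any k test cases
        return rng.sample(test_ids, k)
    # One random test case per requirement first
    by_req = {}
    for t in test_ids:
        by_req.setdefault(cover[t], []).append(t)
    selected = [rng.choice(ts) for ts in by_req.values()]
    # Fill up to the budget from the remaining test cases
    remaining = [t for t in test_ids if t not in selected]
    selected += rng.sample(remaining, k - len(selected))
    return selected

# Toy instance: 6 test cases over 2 requirements, 50% budget
cover = {"t1": "R1", "t2": "R1", "t3": "R1", "t4": "R2", "t5": "R2", "t6": "R2"}
subset = random_minimization(list(cover), 0.5, cover)
```

Note the sketch assumes the budget is at least the number of requirements, matching the limitation discussed in Section 3.4.2.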
We compare the performance of RTM against Random Minimization, with and without considering the requirement coverage constraint. For the version with the constraint, we applied the same selection strategy as the GA initialization described in Section 3.4.2. For the version without the constraint, we randomly select $k$ test cases from the test suite while ensuring $k$ adheres to the minimization budget.

4.4.2 FAST-R [11]. FAST-R is a family of similarity-based TSM approaches, including FAST++, FAST-CS, FAST-pw, and FAST-all. FAST++ and FAST-CS utilize k-means++ clustering [4] and constructed coresets [6]. In contrast, FAST-pw and FAST-all leverage minhashing and locality-sensitive hashing [22]. FAST-R can be applied in two scenarios, namely the fixed-budget and adequate scenarios. The fixed-budget version of FAST-R minimizes the test suite using a pre-defined minimization budget, while the adequate version prioritizes preserving the requirement coverage of the minimized test suite over adhering to the budget. One limitation of FAST-R is that it cannot simultaneously guarantee both the minimization budget and full requirement coverage. We compare RTM against FAST-R under both scenarios where FAST-R is applicable (i.e., fixed-budget and adequate test suite minimization). For FAST-R, we relied on the publicly available replication package provided by its authors to ensure a fair and consistent comparison.

# 4.5 Evaluation Metrics

Fault Detection Rate (FDR). We use the FDR to assess the effectiveness of RTM, which is defined as follows:

$$ FDR = \frac{F'}{F} $$

where $F$ is the total number of unique faults detected by the test suite before minimization, and $F'$ is the total number of unique faults detected by the test suite after minimization. FDR ranges from 0 to 1, where 0 indicates that the minimized test suite detects no faults, while 1 signifies that it successfully detects all faults.
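The FDR computation reduces to a ratio of unique fault sets; a minimal sketch with illustrative fault identifiers:

```python
def fdr(faults_before, faults_after):
    """Fault Detection Rate: unique faults detected after minimization
    divided by unique faults detected before minimization."""
    return len(set(faults_after)) / len(set(faults_before))

# Illustrative fault identifiers
before = {"f1", "f2", "f3", "f4"}  # detected by the full test suite
after = {"f1", "f3", "f4"}         # detected by the minimized suite
print(fdr(before, after))          # 3 of 4 faults retained -> 0.75
```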
Requirement Coverage. Although RTM ensures full coverage of all requirements, one of the selected baseline approaches, i.e., FAST-R, cannot guarantee complete coverage when ensuring the minimization budget. To provide a more comprehensive comparison, we also evaluate the requirement coverage of the baseline approaches. This is calculated as the ratio of the number of requirements covered by the minimized test suite to the total number of original requirements:

$$ Coverage = \frac{|R'|}{|R|} $$

where $R'$ represents the set of requirements covered by the minimized test suite $T'$, and $R$ denotes the original set of requirements. The coverage value ranges from 0 to 1, where 0 indicates that the minimized test suite covers no requirements, while 1 signifies full coverage of all requirements.

# 4.6 Experiment Environment

All experiments were conducted on a computer provided by our industry partner, running Windows 10 and equipped with an Intel Core i7-11850H CPU at 2.5 GHz and 32 GB of RAM. Due to data privacy considerations, we were restricted to performing our experiments solely on this laptop, which limited the choice of text representation techniques available to us.

# 5 Results

# 5.1 Performance of RTM under different configurations (RQ1)

Approach. To address RQ1, we systematically evaluated the performance of RTM under various configurations. Specifically, under a mid-range minimization budget ($50\%$), we ran RTM with three different preprocessing strategies (as described in Section 3.1), and applied both word-level and sentence-level embedding techniques to transform test cases into vector representations (Section 3.2). For measuring test case similarity, we compared several widely used similarity metrics, including cosine similarity, Euclidean distance, and WMD.
In addition, we investigated the impact of various GA initialization strategies on the overall performance of RTM by evaluating all three strategies introduced in Section 3.4.2. Through a comprehensive grid search across these configurations, we identified the optimal configuration of RTM, i.e., the combination of preprocessing method, embedding technique, similarity metric, and initialization strategy that yields the best performance in terms of FDR.

Results. Table 1 presents the FDR of RTM across different configurations under a $50\%$ minimization budget. For each combination of initialization strategy and embedding technique, the best FDR is shown in bold, while the overall best result for each embedding technique is highlighted with a gray background. Overall, we observe that sentence-level embeddings generally outperform word-level embeddings in terms of FDR. Among the sentence-level embedding techniques, TF-IDF combined with cosine similarity achieves the best overall performance, with a peak FDR of $86.09\%$ under $PM3$ and Init Strategy 2. This suggests that even simple, sparse representations like TF-IDF can be highly effective for TSM, especially when the test cases consist of structured test steps that mainly differ in variable names and parameter values. When combined with appropriate preprocessing methods and initialization strategies, TF-IDF effectively captures the discriminative textual features in such test cases by leveraging word frequency. Similarly, Amazon Titan Text Embedding V2, LongT5, and USE also achieve competitive results, with the best Titan configuration reaching $81.59\%$ ($PM3$, Init Strategy 2, Euclidean distance), LongT5 achieving $78.50\%$ ($PM3$, Init Strategy 3, Euclidean distance), and USE achieving $78.05\%$ ($PM1$, Init Strategy 3, Euclidean distance).
For word-level embeddings, Word2Vec slightly outperforms FastText and GloVe, but all three show lower FDR compared to sentence-level embedding techniques. Their performance peaks are in the range of $75\%$ to $76\%$, with Word2Vec achieving the highest result of $76.27\%$ under $PM1$ and Init Strategy 2. Among similarity measures, Euclidean distance outperforms cosine similarity for sentence-level embeddings most of the time, suggesting that the absolute spatial separation measured by Euclidean distance is more effective for capturing test case diversity in the embedding space. In terms of preprocessing, we observe that, for word-level embeddings, $PM1$ and $PM2$ often yield better FDR results than $PM3$, indicating that preprocessing is beneficial when using word-level embedding techniques. Word-level embeddings are more sensitive to surface variations such as casing and punctuation, and thus proper preprocessing helps normalize these variations, enabling more accurate similarity computations. On the other hand, $PM1$ and $PM3$ tend to lead to higher FDR than $PM2$ for sentence-level embeddings, meaning that less or no preprocessing can help these models better learn context information from test cases, without losing the key details that distinguish them. Lastly, Initialization Strategies 2 and 3 generally lead to higher FDR compared to Initialization Strategy 1, particularly when combined with stronger embeddings and similarity measures. This confirms the importance of starting the evolutionary search with a well-distributed and diverse initial subset.

Answering RQ1: TF-IDF with cosine similarity yields the highest overall FDR ($86.09\%$) under Preprocessing Method $PM3$ and Initialization Strategy 2 at the $50\%$ minimization budget.

Table 1. FDR (%) across different configurations ($50\%$ minimization budget)

# 5.2 Performance of RTM compared with baselines (RQ2)

Approach.
To address RQ2, we compared the performance of RTM with the best configuration against baseline approaches in terms of FDR, varying the minimization budget from $10\%$ to $90\%$ in $10\%$ increments. According to RQ1, we selected RTM using $PM3$ as the preprocessing strategy, TF-IDF as the embedding technique, Initialization Strategy 2 for GA initialization, and cosine similarity as the similarity measure, which is the best-performing configuration. We evaluated RTM against Random Minimization, which also employs Initialization Strategy 2, and two versions of FAST-R [11], namely the fixed-budget version and the adequate version. For the fixed-budget version, we varied the minimization budget from $10\%$ to $90\%$, in increments of $10\%$. For the adequate scenario, we first executed FAST-R to determine its resulting minimization budget. This budget was then applied consistently to both RTM and Random Minimization for experimental comparison. Notably, due to the redundancy in our dataset, as few as $7\%$ of the test cases (i.e., 54 test cases, one test case for each requirement) are sufficient to achieve $100\%$ requirement coverage. Moreover, for comparison purposes, we calculated the theoretical best possible FDR for the test suite under different minimization budgets using an ILP approach. Specifically, we employed ILP to find the subset that (1) satisfies the minimization budget, (2) achieves $100\%$ requirement coverage, and (3) maximizes the number of covered faults.

Results. Figure 3 depicts the FDR for RTM, Random Minimization, and the fixed-budget version of FAST-R, as well as the theoretical best possible FDR, across minimization budgets from $10\%$ to $90\%$. Overall, RTM consistently outperforms all other techniques across minimization budgets. The performance gap is particularly notable at mid-range budgets ($30\%$ to $60\%$). For instance, at a $40\%$ minimization budget, RTM achieves an FDR of $78.
7\%$, whereas Random Minimization and FAST++ yield approximately $65.5\%$ and $56.5\%$, respectively. This demonstrates RTM's superior ability to retain fault-revealing test cases, especially under mid-range budget constraints. At lower and higher minimization budgets, the FDR of all techniques tends to converge toward either 0 or 1, thus making the difference in FDR between RTM and the other techniques smaller. However, mid-range minimization budgets are more important in practice, as they offer more useful trade-offs between cost savings and fault detection. As the minimization budget increases, the FDR of all techniques gradually improves and approaches the theoretical upper bound of 1.0. Among the FAST-R variants, FAST-CS and FAST-all demonstrate more competitive performance, while FAST++ and FAST-pw exhibit weaker results at lower budgets, indicating limited effectiveness in resource-constrained settings. Interestingly, Random Minimization consistently achieves higher FDR than FAST-R. This may be due to the distance metrics employed by FAST-R, which may not provide effective guidance for reducing similar test cases, whereas Random Minimization inherently preserves a greater diversity of test cases, leading to better FDR. For the adequate version of FAST-R, after reducing similarity among test cases while preserving requirement coverage of the minimized test suite, FAST++, FAST-CS, and FAST-pw ultimately retained only $7\%$ (i.e., selecting one test case under each requirement) of the test cases, while FAST-all retained $90\%$. We therefore compared RTM and Random Minimization using $7\%$ and $90\%$ as the minimization budgets with the adequate version of FAST-R. The detailed results are presented in Table 2. Under the $7\%$ budget, RTM achieves a high FDR of $26.
3 2 \%$ while maintaining full requirement coverage $( 1 0 0 \% )$ ), which is comparable to $\mathrm { F A S T + + } \left( 2 7 . 2 7 \% \right)$ and FAST-CS $( 2 5 . 9 5 \% )$ ), and outperforming FAST-pw in both FDR $( 2 1 . 1 8 \% )$ and requirement coverage $( 8 9 . 4 4 \% )$ . Random Minimization achieves FDR of $2 5 . 9 1 \%$ and $1 9 . 8 6 \%$ with and without full requirement coverage constraint, respectively. Under the $9 0 \%$ minimization budget, RTM achieves the highest FDR $( 9 8 . 5 9 \% )$ ) among all techniques, maintaining full requirement coverage. In contrast, FAST-all shows slightly lower $F D R \left( 9 2 . 5 9 \% \right)$ and slightly reduced coverage $( 9 9 . 6 3 \% )$ . Random Minimization yields lower $F D R { - } 9 6 . 3 6 \%$ with full requirement coverage constraint and $9 5 . 4 1 \%$ without requirement coverage constraint. These results demonstrate that RTM consistently delivers high FDR without compromising requirement coverage, outperforming all baselines in the adequate scenario. Table 2. FDR $( \% )$ and Requirement Coverage $( \% )$ Comparison at Adequate Scenario for FAST-R RM: Random Minimization In summary, compared with the baselines, RTM achieves the highest FDR across all minimization budgets, making it a more reliable choice for practical TSM with strong fault detection guarantees. In addition, RTM ensures full requirement coverage while enabling pre-defined minimization budgets, two features not simultaneously supported by FAST-R. These capabilities are particularly crucial in the context of requirement-based testing, where covering all requirements is often a strict necessity. Moreover, the ability to ensure fixed-budget minimization is essential in resource-constrained environments, where testing time or execution cost must be carefully managed. Answering RQ2: RTM consistently outperforms all baselines in both FDR and requirement coverage under both the fixed-budget and adequate versions. Fig. 3. 
Comparison of FDR across minimization budgets for RTM, FAST-R, and Random Minimization, alongside the theoretical upper bound.

# 5.3 Impact of test suite redundancy (RQ3)

Approach. In RQ3, we investigate the impact of test suite redundancy levels on the performance (i.e., FDR) of TSM techniques. Evaluating TSM techniques necessarily requires considering the test suite redundancy level, and we aim to shed light on this aspect. As described in Section 4.2, for each redundancy level (ranging from 4.5 to 11.5 in increments of 0.5), we employ the ILP algorithm and GA to generate 10 diverse test suites, each ensuring full coverage of both requirements and faults. We then apply RTM and the baseline methods to each of these test suites and report the average FDR across the 10 test suites for each redundancy level.

Results. Figure 4 illustrates the FDR performance of RTM against baseline methods (FAST++, FAST-CS, FAST-all, FAST-pw, and Random Minimization with and without the full requirement coverage constraint) across varying redundancy levels (ranging from 4.5 to 11.5 in increments of 0.5) under seven distinct minimization budgets (from 30% to 90% in increments of 10%). Moreover, for each test suite, we calculate the theoretical best FDR for comparison purposes. Note that we cannot set minimization budgets below 30%, as some test suites are unable to achieve full requirement coverage under those constraints. We observe that RTM consistently achieves superior FDR compared to all baseline techniques across all redundancy levels and minimization budgets. Specifically, RTM aligns more closely with the theoretical optimum and consistently outperforms all other techniques. The difference in FDR between RTM and the baselines is particularly significant at mid-range minimization budgets (30%-60%).
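The theoretical best possible FDR described above (maximize covered faults subject to a size budget and full requirement coverage) can be illustrated with a small exhaustive-search equivalent of the ILP; the toy suite below is hypothetical, not the study's data, and real instances require a proper ILP solver:

```python
from itertools import combinations

def best_possible_fdr(tests, budget, n_faults):
    """Exhaustively find the highest fraction of faults any subset can detect,
    subject to (1) a size budget and (2) 100% requirement coverage.
    `tests` maps test id -> (covered_requirements, detected_faults)."""
    all_reqs = set().union(*(reqs for reqs, _ in tests.values()))
    best = 0.0
    for size in range(1, budget + 1):
        for subset in combinations(tests, size):
            reqs = set().union(*(tests[t][0] for t in subset))
            if reqs != all_reqs:  # hard constraint: full requirement coverage
                continue
            faults = set().union(*(tests[t][1] for t in subset))
            best = max(best, len(faults) / n_faults)
    return best

# Toy suite: 4 tests, 2 requirements, 4 faults, budget of 2 tests.
suite = {
    "t1": ({"r1"}, {"f1", "f2"}),
    "t2": ({"r2"}, {"f3"}),
    "t3": ({"r1", "r2"}, {"f4"}),
    "t4": ({"r2"}, {"f1"}),
}
print(best_possible_fdr(suite, budget=2, n_faults=4))  # → 0.75 (e.g., {t1, t2})
```

Unlike this brute-force sketch, an ILP solver scales to the 736-test suites used in the evaluation.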
As expected, the FDR of all techniques improves as the test suite redundancy level increases, indicating that TSM techniques tend to perform better (achieving higher FDR) when applied to more redundant test suites. This trend, however, becomes less evident at higher budgets (80% and 90%), where the FDR of all techniques converges toward 1. Random Minimization (with and without the requirement coverage constraint) slightly outperforms the FAST-R variants but consistently falls short of RTM. We also observed that, across all minimization budgets and redundancy levels, Random Minimization with the requirement constraint consistently outperformed the unconstrained variant in terms of FDR. This demonstrates that, for our dataset, explicitly ensuring that every requirement is covered not only preserves full requirement coverage but also significantly boosts the test suite's ability to detect faults. In other words, prioritizing requirement coverage serves as an effective heuristic for enhancing fault detection under budgetary constraints. Moreover, the analysis reveals that the redundancy level significantly influences FDR at lower minimization budgets (below 60%), highlighting the importance of considering redundancy under tight budget constraints. Notably, higher redundancy combined with low minimization budgets tends to result in larger gaps between the FDR of all TSM techniques and the theoretical maximum. This discrepancy might be explained by the fact that, under a very low minimization budget, the search space is severely restricted, making it challenging to assemble a diverse subset that both covers all requirements and maximizes fault coverage. Future research should account for redundancy levels when evaluating TSM techniques.

Fig. 4.
Comparison of FDR across varying redundancy levels for RTM and baseline approaches under seven minimization budgets.

Answering RQ3: The test suite redundancy level has, in general, a strong effect on the FDR achieved by all TSM techniques. Further, RTM consistently outperforms all baseline techniques in terms of FDR across all redundancy levels and minimization budgets.

# 6 Threats to Validity

In this study, we identified the following threats to validity:

External Validity. The generalizability of our findings may be limited due to the specific domain and nature of the test suites provided by our industry partner. The performance of our approach might differ when applied to other domains or software systems with different requirement structures, test suite characteristics, or fault profiles. To mitigate this threat, we selected test suites that vary in size and redundancy level to evaluate the robustness of our approach under diverse configurations. Furthermore, our approach is designed to be domain-agnostic, relying solely on textual information from requirements and test cases, which facilitates potential adaptation to other domains. Finally, our industry partner follows standard development practices in the automotive domain, imposed by standards, that are similar to those of other safety-critical domains where functional safety is important. Future work includes evaluating our approach on publicly available datasets from different domains to further assess generalizability.

Internal Validity. To ensure a fair and consistent comparison, we adopted the original replication package of FAST-R provided by its authors. This minimizes potential implementation biases or configuration inconsistencies that could otherwise affect the validity of comparative results. Another threat to internal validity arises from the choice of embedding techniques, which was limited by the data privacy constraints of our industry partner.
This limitation may have affected the overall performance of our approach. Nevertheless, to mitigate this threat, we evaluated seven different embedding methods within the bounds of our available computational resources. This broad comparison helps reduce concerns regarding internal validity, despite the constraints on embedding selection.

Conclusion Threats. To reduce the impact of randomness and increase the reliability of our results, we ran each experiment ten times with different random seeds and report the average performance. This mitigates the risk of drawing conclusions based on outlier results or fluctuations due to stochastic components in the minimization process.

# 7 Related Work

A number of approaches have been proposed to support TSM; they can be classified into three categories [11, 20, 33]: greedy [29, 32], clustering [10, 11, 24, 40], and search-based [18, 26, 33, 34, 44], plus hybrid combinations [3, 42, 43] thereof.

Greedy-based approaches follow a heuristic, step-by-step selection strategy, where the test case that provides the most benefit toward the objective is chosen at each iteration, until the given constraints (e.g., full requirement coverage or code path coverage) are satisfied. Miranda et al. [29] and Noemmer et al. [32] both employed greedy heuristics for TSM based on statement coverage. Their approaches iteratively selected test cases to maximize code coverage, achieving reasonable FDRs while significantly reducing the number of executed test cases. Greedy-based approaches are computationally efficient and easy to implement but may suffer from suboptimal global performance due to their locally optimal decisions.

Clustering-based approaches group similar test cases based on syntactic or semantic similarities, such as those derived from natural language descriptions or execution behavior. The goal is to reduce redundancy by selecting representative test cases from each cluster.
These approaches are particularly useful in settings where test cases are written in natural language and code coverage information is unavailable. Viggiato et al. [40] proposed a comprehensive approach that combines text embedding, similarity measurement, and clustering to identify and remove similar test cases written in natural language; they split requirement test cases into test steps and then calculate the similarity between test steps. Cruciani et al. [11] proposed FAST-R, a black-box TSM approach that leverages the source code of test cases without requiring execution or coverage information. FAST-R transforms test code into vector representations using a term frequency model, followed by dimensionality reduction via random projection. Similarly, Coviello et al. [10] proposed a clustering-based approach using Hierarchical Agglomerative Clustering to perform TSM.

Search-based approaches formulate the minimization problem as an optimization task and utilize metaheuristic algorithms (e.g., genetic algorithms, simulated annealing) to explore the space of possible test subsets. These methods are capable of balancing multiple conflicting objectives, such as minimizing the number of test cases while maximizing fault detection capability and maintaining requirement coverage. Hemmati et al. [18] applied a search algorithm that minimizes test suites based on model-derived test case similarity. Similarly, Zhang et al. [44] proposed UncerTest, a model-based test case minimization framework that leverages multi-objective search algorithms, including NSGA-II, MOCell [31], and SPEA2 [46]. ATM [33] is a black-box TSM approach that combines test code similarity analysis with evolutionary search. It represents test code using Abstract Syntax Trees and employs a Genetic Algorithm and the Non-Dominated Sorting Genetic Algorithm II [27] as the underlying evolutionary algorithms.
Furthermore, LTM [34] leverages LLMs (CodeBERT, UniXcoder, and CodeLlama) to compute similarity between test cases and employs a Genetic Algorithm to search for an optimal subset of test cases under a fixed budget. MORE+ [26] is a multi-objective TSM approach that uses NSGA-II to optimize structural, functional, and cost-related objectives, aiming to enhance fault detection while reducing redundancy and execution time. Although typically more computationally intensive than greedy and clustering approaches, search-based techniques support multiple minimization objectives and often achieve better performance for TSM.

Hybrid approaches integrate complementary methods into a unified framework to balance multiple objectives, including reducing test suite size, preserving fault detection capability, and satisfying requirement or coverage constraints. Xia et al. [42] apply clustering-based test suite reduction combined with evolutionary multi-objective optimization. Anwar et al. [3] combined a genetic algorithm with particle swarm optimization to optimize regression test suites. Yoo et al. [43] employ a hybrid multi-objective genetic algorithm that integrates the efficient approximation of the greedy approach with the capability of population-based genetic algorithms to generate higher-quality Pareto fronts.

While promising, existing techniques primarily focus on code-based solutions for minimization. In contrast, our work addresses minimization in the context of requirement-driven testing, where test cases are specified in natural language and thus introduce distinct challenges. Moreover, current approaches do not explicitly support TSM under a fixed budget while simultaneously enforcing full requirement coverage as a hard constraint.
To the best of our knowledge, this is also the first work to investigate the impact of redundancy levels on fault detection capability under such constraints, providing valuable insights for future research and practical applications. Note that we retained FAST-R as a baseline even though it cannot simultaneously satisfy both a fixed-size minimization budget and the full-coverage adequacy constraint; because FAST-R can be applied in two scenarios (fixed-size budget and adequate), it enables experimental comparison with our approach.
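The fixed-budget, full-coverage setting discussed above can be made concrete with a sketch of a coverage-preserving candidate construction, in the spirit of the GA initialization used in this work; the exact initialization strategies differ, and the test/requirement names below are hypothetical:

```python
import random

def coverage_preserving_candidate(tests, budget, seed=None):
    """Build one budget-sized subset that covers every requirement: first
    satisfy each uncovered requirement with a randomly chosen covering test,
    then fill the remaining slots with random tests. Assumes the budget is
    large enough to cover all requirements. `tests`: test id -> set of reqs."""
    rng = random.Random(seed)
    selected, covered = [], set()
    for req in sorted(set().union(*tests.values())):
        if req not in covered:
            tid = rng.choice(sorted(t for t, reqs in tests.items() if req in reqs))
            selected.append(tid)
            covered |= tests[tid]
    filler = [t for t in sorted(tests) if t not in selected]
    rng.shuffle(filler)
    return selected + filler[: budget - len(selected)]

tests = {"t1": {"r1"}, "t2": {"r2"}, "t3": {"r1", "r2"}, "t4": {"r2"}}
candidate = coverage_preserving_candidate(tests, budget=3, seed=0)
print(candidate)  # a 3-test subset covering both r1 and r2
```

Seeding a GA population with such candidates makes full requirement coverage a property of every individual from the start, rather than something the search must discover.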
As software systems evolve, test suites tend to grow in size and often contain redundant test cases. Such redundancy increases testing effort, time, and cost. Test suite minimization (TSM) aims to eliminate such redundancy while preserving key properties such as requirement coverage and fault detection capability. In this paper, we propose RTM (Requirement coverage-guided Test suite Minimization), a novel TSM approach designed for requirement-based testing (validation), which can effectively reduce test suite redundancy while ensuring full requirement coverage and a high fault detection rate (FDR) under a fixed minimization budget. Based on common practice in critical systems where functional safety is important, we assume test cases are specified in natural language and traced to requirements before being implemented. RTM preprocesses test cases using three different preprocessing methods and then converts them into vector representations using seven text embedding techniques. Similarity values between vectors are computed using three distance functions. A Genetic Algorithm, whose population is initialized by coverage-preserving initialization strategies, is then employed to identify an optimized subset containing diverse test cases matching the set budget. We evaluate RTM on an industrial automotive system dataset comprising 736 system test cases and 54 requirements. Experimental results show that RTM consistently outperforms baseline techniques in terms of FDR across different minimization budgets while maintaining full requirement coverage. Furthermore, we investigate the impact of test suite redundancy levels on the effectiveness of TSM, providing new insights into optimizing requirement-based test suites under practical constraints.
# 1 Introduction

The 2023 Israel-Hamas War (the war) began following a surprise attack on Israeli military targets and civilians by Palestinian militant groups operating from within the Gaza Strip on October 7, 2023. Over 1,200 Israeli civilians, military personnel, and internationals were killed, and another 254 Israelis and internationals were taken hostage [1]. That day, Israel began an aerial warfare campaign in Gaza followed by ground invasions, resulting in widespread urban destruction and the deaths of over 55,000 Palestinians as of writing [2]. The aerial warfare campaign in Gaza has been described by military historians as one of the most intense conventional bombing campaigns since WWII [3]. Widespread urban destruction resulting from aerial bombardment has direct impacts on civilians: it causes conflict-induced population displacement, affects routes for safe passage, challenges the ability of humanitarian organizations to conduct recovery and relief work, and induces long-term impacts on the environment, public health, and economies [4]. Tracking armed conflict-induced damage is integral to supporting decision-making for humanitarians responding to conflict events and to enabling broader public understanding of conflict impacts while wars play out. The dynamic, fast-paced, and reportedly high intensity of damage across Gaza is difficult to characterize, requiring low-latency and sustained mapping of damage across the duration of the ongoing war.

Figure 1: Study setting. The left panel shows the study setting of the Gaza Strip with its five governorates. Inset panels (top) show VHR optical imagery before and during the war in the Beit Lahia area of Northern Gaza. Inset panels (bottom) show VHR optical imagery before and during the war in central Rafah.

Armed conflict in Gaza has important parallels with other recent wars, which pose similar challenges for sustained urban damage monitoring.
Like other conflict settings such as Ukraine and Syria, conflict in Gaza is characterized by widespread urban fighting, and resulting damage, to built-up areas. A key difficulty in accounting for damage across conflict settings is a general lack of field-based data due to the dangerous (life-threatening) situation on the ground. Conflict damage can occur over large spatial extents and long durations, which makes tasking of VHR imagery for detailed damage assessments difficult and introduces unique technical considerations associated with damage monitoring using medium-resolution satellite data [5, 6, 4]. The landscape of Gaza is distinct from other conflict settings due to the densely built-up nature of its urban areas within a small spatial extent (365 km²). The rapid and intense bombardment of the Gaza Strip over a relatively small area also distinguishes it from other conflict settings, and underscores the need for active and sustained monitoring with low latency to consistently capture the progression of damage over time. The high density and three-dimensionality of Gaza's built environment introduce geometric artifacts, distortions, and blind spots in remote sensing imagery, challenging photo-interpretation of VHR optical imagery, which requires visibility around structures to classify forms of damage short of full or partial building collapse [7]. Urban damage due to armed conflict is increasingly assessed using remote sensing data. Academic literature on automated approaches for mapping armed conflict-induced building damage tends to be event-specific and to rely on VHR sensor data, with case studies focused on individual towns or cities [8]. The focus on commercial VHR data limits the degree to which Earth observation (EO) data can be widely and transparently used to better understand conflict impacts [5, 4].
Interferometric synthetic aperture radar (InSAR) coherent change detection (CCD) approaches are well suited for mapping damage in urban contexts due to their sensitivity in detecting structural shifts in built-up areas, where changes in complex radar signal characteristics can reveal damage not always photo-interpretable in overhead optical imagery, regardless of cloud cover or solar illumination conditions [9]. While CCD has been applied extensively in disaster contexts [9, 10], and in a handful of individual towns and cities affected by armed conflict (e.g., [11, 12]), it has been applied only once to long-duration monitoring of conflict-induced damage across a nationwide extent, in a case study in Ukraine [6]. That case study produced data on the timing and location of damage nationwide with three months' latency to results. In Gaza, three months' latency is not conducive to supporting decisions on the timescales of humanitarian recovery and response, or reporting on the impacts of conflict by civil society and journalistic organizations. A sustained monitoring approach with low latency is necessary to capture impacts from the fast-paced dynamics of this ongoing war. In this study, we analyze 321 openly accessible Sentinel-1 SAR images acquired before and during the war with a cloud-based InSAR processing workflow to produce over 3,200 coherence images and monitor for indicated building damage using a long temporal-arc CCD (LT-CCD) approach. LT-CCD introduces long temporal-arc InSAR pair formation to CCD, drawing on knowledge of seasonal [13] and geometric [14] considerations for coherence estimation. LT-CCD constructs two stacks of single-reference coherence images with matching temporal baselines across each stack and strict spatial (perpendicular) baseline criteria.
Coherence image stacks for the wartime and baseline periods are formed using single reference images acquired at similar times of year and with minimal orbital offset (spatial/perpendicular baselines) and secondary images acquired during pre-war periods, matching the distributions of temporal baselines across the stacks; each stack is then reduced to summary statistics (mean and standard deviation) that are subsequently used for CCD [15, 6, 16]. We categorize damage by detecting acute and sustained reductions in coherence during the wartime monitoring period. We conduct damage monitoring during the wartime period and, to assess the occurrence of false positives, we conduct damage assessments "in reverse," using Sentinel-1 data for coherence estimation acquired two years prior to the conflict. We aggregate pixel-level damage data to 330,079 building footprints identified using pre-war VHR optical imagery [17] and report damage severity as the percentage of buildings damaged at each time step in monitoring. We compare LT-CCD damage with 928,397 damage event times and locations produced by the United Nations Satellite Center (UNOSAT) through visual interpretation of VHR satellite optical imagery across twelve dates between October 2023 and September 2024. We construct timelines of damage with LT-CCD and report spatiotemporal agreement with UNOSAT locations over time while examining reasons for agreement and disagreement. With LT-CCD, we map the destruction of Gaza over the first year of the conflict with approximately weekly temporal fidelity and report on the progression of damage over time as it relates to major conflict events. Preliminary results from this work were distributed to the global press and international humanitarian organizations over the first year of the war with low latency and broad uptake, informing on the extent of damage in Gaza when commercial image providers limited data availability [18] and access to the field was restricted.
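The per-pixel CCD step on pre-war stack statistics can be sketched as follows; the k-sigma decision rule below is an illustrative assumption, not the paper's exact detection test, and the coherence values are toy numbers:

```python
from statistics import mean, stdev

def ccd_flag(pre_stack, wartime, k=3.0):
    """Flag a pixel as candidate damage when its wartime coherence drops
    below the pre-war stack mean by more than k sample standard deviations.
    The k-sigma rule is an assumption for illustration, not the study's
    published threshold."""
    mu, sigma = mean(pre_stack), stdev(pre_stack)
    return wartime < mu - k * sigma

pre = [0.71, 0.69, 0.73, 0.70, 0.72]  # stable built-up pixel, pre-war coherence
print(ccd_flag(pre, wartime=0.25))    # True: acute coherence loss
print(ccd_flag(pre, wartime=0.70))    # False: within normal variability
```

Pixels with low or highly variable pre-war coherence (e.g., sparsely built-up land) give an unreliable baseline for this test, which is why such areas are excluded as invalid for monitoring.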
We conduct this analysis with additional retrospective agreement exercises, more complete building footprint data, and full methodological transparency.

# 2 Results

# 2.1 Agreement with UNOSAT

We define areas valid for LT-CCD monitoring using pre-war coherence statistics (Methods Section 5.3.4) and detect 694,831 UNOSAT damage locations in areas valid for monitoring, with an overall agreement of 92.5%, true positive rate (TPR) of 86.2%, false positive rate (FPR) of 1.2%, F1 score of 91.8%, and CSI of 85.2%. For all UNOSAT times and locations of damage, including those outside of areas valid for LT-CCD monitoring, our approach has an overall agreement of 86.9%, TPR of 74.8%, FPR of 1.1%, an F1 score of 84.8%, and a CSI of 74.0% (Supplementary Table 8). False positives are acceptably low, and we capture the vast majority of UNOSAT locations over time. Agreement increases with the magnitude of pre-war coherence values (Fig. 2). The median pre-war coherence value where true positives are detected is relatively high (0.69), whereas the median pre-war coherence value where we miss detection of UNOSAT locations is lower (0.45). UNOSAT locations that go undetected with our methods tend to occur in areas where SAR signal variability drives lower pre-war baseline coherence magnitudes, which is related to pre-war built-up density, construction activity, and land management practices (Fig. 13). The median built-up area percentage where we detect UNOSAT locations at the pixel level is 34%, whereas the median built-up area percentage where we miss UNOSAT locations is 19%. In total, UNOSAT reported 163,778 locations of damage as of 6 September 2024, the last UNOSAT survey during the study period. By evening local time on 5 September 2024, we detect 175,691 locations of building damage in Gaza.
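The agreement metrics reported above follow standard confusion-matrix definitions (reading "overall agreement" as accuracy, an assumption); a minimal sketch with illustrative counts, not the study's:

```python
def agreement(tp, fp, fn, tn):
    """Confusion-matrix metrics of the kind used for LT-CCD vs. UNOSAT
    agreement. Counts are illustrative only."""
    return {
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "tpr": tp / (tp + fn),        # true positive rate (recall)
        "fpr": fp / (fp + tn),        # false positive rate
        "f1": 2 * tp / (2 * tp + fp + fn),
        "csi": tp / (tp + fp + fn),   # critical success index
    }

m = agreement(tp=80, fp=10, fn=10, tn=900)
print({k: round(v, 3) for k, v in m.items()})
```

Note that CSI, unlike accuracy, ignores true negatives, which matters when undamaged pixels vastly outnumber damaged ones.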
Because UNOSAT does not release its reference building footprint data and only produces point locations to represent damaged buildings, we cannot determine whether this difference in aggregate damage is due to how UNOSAT defines building outlines or to other factors.

Figure 2: Bivariate choropleth of agreement (F1 score) between LT-CCD and UNOSAT by pre-war coherence (γ) magnitude. Areas styled in white are invalid for LT-CCD monitoring based on pre-war γ characteristics.

# 2.2 Damage over time

Damage detection at weekly temporal fidelity reveals regional conflict dynamics (Figs. 3 and 4). For the first six weeks of the war, before the temporary ceasefire, we detect on average 2,300 newly damaged buildings per day, with damage mainly focused in the northern areas of Gaza. At the war's onset, aerial bombardment was focused on North Gaza and Gaza City, followed by Israeli military ground invasions beginning on 27 October 2023. In the North Gaza and Gaza City governorates, over half of all buildings were damaged by the end of November 2023. During this phase of the war, ground invasions were limited to these two northern governorates, while airstrikes took place across all governorates, including the central and southern areas of Gaza.

Figure 3: Aggregate damage detected across the duration of the study period, with building footprints colored by the month in which damage is first detected.

The last week of November saw a six-day temporary ceasefire and hostage exchange from 24 to 30 November 2023 [19, 20]. Relative calm during the temporary ceasefire is evidenced by a 78% decrease in the rate of new damage detected across all of Gaza compared to the first six weeks of the war (Table 1). In the southernmost Khan Younes and Rafah governorates, the decrease in new damage detected during the temporary ceasefire is slightly more pronounced than in other governorates.
Because Sentinel-1 acquired data on 22 and 29 November 2023, we were able to capture this notable slowdown in new damage during the brief respite from fighting. Due to the timing of Sentinel-1 acquisitions, we do not detect a total pause in damage, as airstrikes and fighting continued from the 22nd to the 24th of November [21] and Sentinel-1 did not make acquisitions that precisely captured the six-day ceasefire period.

Figure 4: The percentage of buildings damaged over time. Cumulative damage is aggregated across each governorate and the Gaza Strip as a whole. Ground invasions are indicated with vertical red lines. The invasions of North Gaza and Gaza City begin on 27 October 2023. The 24-30 November 2023 temporary ceasefire is shaded in grey. The beginning of the battle of Khan Younes is marked in red on 1 December 2023, along with the encircling of Deir al-Baleh on 26 December 2023 and the invasion of Rafah on 6 May 2024.

By the end of the temporary ceasefire, 10.5% of Deir Al-Baleh, 10.3% of Khan Younes, and 5.2% of Rafah were damaged. Rates of new damage increase markedly in Khan Younes, corresponding with the onset of intensified bombardment and an Israeli military ground offensive that captured the city center in the first week of December [22]. Shortly after the Israeli capture of Deir Al-Baleh, the Israeli military announced that it was pressing into the adjacent city of Khan Younes [23], where we detect a notable increase in new damage beginning in January 2024, when the Israeli military fully encircled the city after weeks of battle [24]. By mid-March 2024, over half (50.1%) of buildings in Khan Younes and 37.5% of buildings in Deir Al-Balah were damaged. At this time, the southernmost Rafah governorate, which had been a target of aerial bombardment throughout the conflict, had 17.8% of buildings with damage but was the only remaining governorate where an Israeli ground invasion had not occurred.
New damage detection in Rafah increases markedly following the beginning of an Israeli military ground invasion on May 6, 2024. Less than two months later, the percentage of buildings damaged in Rafah had more than doubled, to 40.9%.

Table 1: Percent decrease in the rate of new damage detected during the temporary ceasefire relative to the first six weeks of the war.

In aggregate, LT-CCD classifies 57.9% (191,263) of all pre-war OSM buildings mapped in Gaza as likely damaged or destroyed through the first year of the war (Fig. 3). 69.5% of buildings in North Gaza, 73.9% in Gaza City, 46.2% in Deir al Baleh, 53.5% in Khan Younes, and 46.4% in Rafah were likely damaged. Areas with low-magnitude, variable pre-war coherence, corresponding to areas with low built-up density, were classified as invalid for monitoring, constituting 14.4% (47,516) of OSM building footprints (Table 2). As a percentage of building footprints in areas valid for LT-CCD monitoring, over two thirds (67.8%) of all buildings in Gaza were damaged.

Table 2: Mean pre-war γ and OSM built-up density in areas valid and invalid for LT-CCD monitoring.

# 2.3 Accounting for confounding factors

We assess the overlap in probability distributions between all three monitoring epochs (pre-war baseline, counterfactual, and wartime periods) using the Hellinger distance metric. There is substantial overlap, approximately 90%, in the coherence value distributions assembled outside of wartime periods, both within Gaza and in southern Israel (Fig. 5, Table 3). In southern Israel, distributions of coherence values have substantial overlap across all three epochs in which coherence estimates were formed, indicating a lack of broad changes in coherence outside of Gaza during the war.
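The Hellinger distance used to quantify this overlap can be computed directly from binned probability distributions; a minimal sketch with toy histograms, not the study's data:

```python
from math import sqrt

def hellinger(p, q):
    """Hellinger distance between two discrete probability distributions
    (e.g., binned coherence histograms): 0 = identical, 1 = disjoint."""
    return sqrt(sum((sqrt(a) - sqrt(b)) ** 2 for a, b in zip(p, q))) / sqrt(2)

# Toy binned coherence histograms (illustrative only):
pre_hist = [0.05, 0.15, 0.30, 0.50]   # mass concentrated at high coherence
war_hist = [0.50, 0.30, 0.15, 0.05]   # mass shifted to low coherence
print(hellinger(pre_hist, pre_hist))  # → 0.0 (identical distributions)
print(round(hellinger(pre_hist, war_hist), 3))
```

Both inputs must be normalized histograms over the same bins; larger values indicate less overlap between the two distributions.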
In contrast, coherence distributions within the Gaza Strip show a marked difference during the wartime monitoring period relative to the pre-war and counterfactual periods, with Hellinger distances over 0.65, indicating that two thirds of the probability distributions no longer overlap with the pre-war coherence data. This is indicative of the extensive damage to built-up areas characterized in this study.

Figure 5: Average coherence across all three monitoring epochs. (A) The average coherence magnitude $(\overline{\gamma}_{\mathrm{counterfactual}})$ across all coherence grids used in the counterfactual monitoring period. The Gaza Strip is outlined in red and the pseudo-stable area in Israel is outlined in black. (B) Average coherence across all images used during the baseline pre-conflict period $(\overline{\gamma}_{\mathrm{pre}})$. (C) Average coherence across all images used for monitoring during the conflict period $(\overline{\gamma}_{\mathrm{conflict}})$. (D, E, and F) Probability distributions of coherence values for the Gaza Strip and the pseudo-stable area in Israel, pertaining to the data in panels A, B, and C, respectively.

Table 3: Hellinger distances between mean coherence during the pre-war baseline, conflict monitoring, and counterfactual periods within Gaza and in the southern desert of Israel.

We test for sustained decorrelation after initial damage detection by accounting for the time between initial and secondary confirmation of damage signals. We find that 79.5% of initial damage detections are confirmed within one month (31 days) of the first date of detection. Initial damage detection is most frequently confirmed (63.6% of the time) within 12 days, which is the temporal lag over which Sentinel-1 acquires imagery with the same orientation of illumination.
On any given monitoring date, damage detected that remains unconfirmed within one month does not exceed $3~\mathrm{km^2}$ in area, amounting to 3.2% of damage detected at that date in monitoring. While time to confirmation of initial damage varies, 99% of all initial damage is confirmed within 176 days, and all initial damage is eventually confirmed by the end of the study. We report agreement metrics with UNOSAT above before applying any persistence criteria, and the false positive rate is acceptably low as a result. Figure 6: Cumulative distribution function of the number of days between initial and secondary damage detection. The horizontal and vertical dashed red lines indicate the probabilities that initial damage is confirmed after 12 ($63.6\%$), 31 ($79.5\%$), and 176 days ($>99\%$).

# 3 Discussion

Most of the damage detected in this study occurs within the first nine months of the war. The first three months of the war had the highest rates of new damage. This pace of damage detection has almost no precedent in peer-reviewed literature on EO-based monitoring of damage during aerial bombardment campaigns aside from the most severely-affected towns and cities in Ukraine damaged during the 2022 Russia-Ukraine conflict [6]. To our knowledge, the severity of damage in Gaza compares only to some of the most severe wartime damage in modern history, dating back to WWII. In the German city of Dresden, for example, 54% of buildings were damaged or destroyed due to Allied bombardment [25]. Across 51 German cities subject to Allied bombardment from 1942-1945, $40–50\%$ of urban areas were damaged or destroyed, amounting to an estimated 10% of the total German building stock [26]. In Gaza, the entire enclave has a damage severity comparable only to some of the most heavily damaged localities in the Russia-Ukraine conflict or WWII.
This severity of damage, however, occurs across the entire extent of the Gaza Strip, meaning the damage is not just severe in some localities but is inflicted across the broader administrative unit. Geographic scales of damage relate to the pixel resolution of EO data for wartime damage monitoring. The pixel resolution of data used for LT-CCD here is consistent with the radii of building destruction from one of the most commonly-deployed bombs in Gaza [27]: the Mark 82 munition, which carries 89 kg of explosive [28]. Detonation of the Mark 82 bomb on a flat surface will collapse most buildings and severely damage concrete structures within a 31 m radius, an area approximately two times the 40 m pixel spacing used in this study [28]. The total number of bombs, by type, dropped in Gaza is not publicly known. By mid-December 2023, a United States intelligence leak reported in the media stated that 29,000 air-to-ground munitions had been deployed in Gaza [29]. Hundreds of these air-to-ground munitions were much more destructive and lethal one-ton (Mark 84) bombs [29, 27, 30]. As a back-of-the-envelope estimate, an area of $88~\mathrm{km^2}$ would be damaged or destroyed if all munitions were Mark-82 and did not overlap. By mid-December 2023, we detect $79~\mathrm{km^2}$ of damaged area gridded at 40 m pixel spacing. An approach to map damage at 30-50 centimeter scale, akin to UNOSAT-style assessments, is not as relevant to the geographic scale of conflict damage processes, which can be captured persistently over time with this medium-resolution approach. Within areas with sufficiently high pre-war coherence characteristics, our approach drastically reduces latency from weeks for UNOSAT assessments to hours with LT-CCD, yielding more timely and actionable data that can subsequently prioritize tasking of follow-on VHR imagery for higher-fidelity assessment of damage at the level of individual buildings.
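The back-of-the-envelope area figure above can be verified with a few lines of arithmetic. The 29,000 munition count and 31 m destruction radius are taken from the text; the no-overlap and all-Mark-82 assumptions are the text's own.

```python
import math

# Back-of-the-envelope check of the damaged-area estimate: 29,000 munitions,
# each destroying a disc of 31 m radius, with no overlap between discs.
n_munitions = 29_000
radius_m = 31.0
area_per_bomb_m2 = math.pi * radius_m ** 2   # ~3,019 m^2 per detonation
total_km2 = n_munitions * area_per_bomb_m2 / 1e6
print(round(total_km2, 1))  # ~87.6 km^2, consistent with the ~88 km^2 figure
```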
The net difference in the number of buildings detected by our methods versus UNOSAT could be explained by the difference in pre-war reference building footprint vector data, which UNOSAT does not make publicly available, or it could be due to detection of damage in densely built-up areas. UNOSAT-style methods are known to lack sensitivity when damage occurs without evidence readily visible at the rooftop in VHR imagery, with errors of omission of $26.8\%$ where UNOSAT data have been validated in one post-earthquake setting [31]. Examples of lateral damage, including plastic deformation of buildings, residual drift, progressive collapse [32], and damage to structural facades [33], are instances where objects that scatter radar echoes may lead to CCD detection of damage that UNOSAT omits [9]. Since neither LT-CCD nor UNOSAT data are field validated, we can consider that LT-CCD may be capturing damage that UNOSAT omits in areas where visibility is occluded, while LT-CCD with C-band SAR is expected to miss damage in areas where mixed-pixel landcover effects on radar scattering drive lower coherence. Merging LT-CCD with VHR optical data can form more complete EO-based damage estimates by considering the strengths of CCD for capturing lateral forms of damage [33] in areas with high pre-war coherence and UNOSAT capturing damage in less densely-built areas with more variable radar scattering. We merged unique building locations identified by LT-CCD and UNOSAT to develop a more comprehensive assessment of built-up area damage. In total, over 214,000 buildings are likely damaged or destroyed at the end of the study period, constituting 63.99% of all OSM mapped buildings. This represents an upward revision of $12\%$ from our initial estimates and a marked $31\%$ increase from the UNOSAT estimates.
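The merge of unique building locations described above amounts to a set union over building identifiers. A minimal sketch follows; the IDs and counts are hypothetical stand-ins for OSM footprint identifiers, not study data.

```python
# Combine per-building damage detections from two sources into one estimate.
# The union counts each building once even if both methods flag it.
ltccd_damaged = {"b1", "b2", "b3", "b5"}
unosat_damaged = {"b2", "b3", "b4"}

combined = ltccd_damaged | unosat_damaged    # union of unique buildings
only_ltccd = ltccd_damaged - unosat_damaged  # flagged by LT-CCD alone
only_unosat = unosat_damaged - ltccd_damaged # flagged by UNOSAT alone

total_osm_buildings = 6  # hypothetical denominator
pct_damaged = 100 * len(combined) / total_osm_buildings
print(len(combined), sorted(only_ltccd), sorted(only_unosat))
```

The two set differences also give exactly the buildings that drive the upward revision relative to either source alone.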
Public release of reference building footprint data used by UNOSAT would enable transparency and reproducibility of damage estimates for assessing strengths and limitations of either approach, but the present lack of publicly-available reference building footprint data from UNOSAT limits independent assessments of differences in each approach at the individual building footprint level. It is difficult to untangle the multivariate drivers of long temporal-baseline coherence and the factors affecting the detectability of damage. Vegetation generally lowers coherence [34] and, with it, the ability to detect damage. Buildings with larger footprints are more reliable to monitor for damage than smaller buildings [35]. Variation in the orientation of an urban grid can change the dominant scattering mechanism from double-bounce to volumetric scattering [36]. Volume scattering in urban areas may help to explain the sensitivity of CCD approaches to lateral damage to building facades [33]. Disambiguating the differing influences of coherence signal drivers such as vegetation presence, built-up density, sensor, urban grid orientation, scattering mechanisms, and seasonal scattering variability can contribute toward better understanding of drivers of urban coherence characteristics over space and time. Characterizing these multivariate drivers of damage detectability can help to better understand the detectability of coherence across geographic settings. The occurrence of repeat damage - where an area is struck more than once over time - may register as damage initially but, as a war continues, more strikes may occur in the same image region, ostensibly increasing the severity of damage within that pixel. Reports from the news media document increasing damage over time, with one example from the British Broadcasting Corporation (BBC) [37].
Using photos from the ground of the same area over the first six weeks of the war, the BBC highlights a mosque and surrounding buildings with increasing levels of damage visually evident in Beit Lahia, North Gaza. Our methods detect initial damage at this site but, as the site is struck more over time, the increasing severity of damage is not detectable with our methods due to the binary nature of damage classification. Conceptualizing a notion of increasing damage severity at the level of individual pixels will be important to capture signals of repeat damage in future and ongoing work, and may be a path forward given the limitations of medium-resolution SAR for capturing individual building-level damage severity [35]. While opportunities remain for increasing the fidelity of CCD approaches for mapping repeat and increasing severity of damage across geographic settings, the LT-CCD approach operationalized in this study offers several advantages over bi-temporal and fixed-temporal baseline CCD approaches developed for disaster contexts (e.g., [15]). With LT-CCD, all coherence estimation is conducted using secondary SAR imagery acquired before the war onset, which makes testing for persistent decorrelation possible [6]. A fixed-temporal baseline approach agnostic to the conflict onset precludes the ability to test for signals of persistent decorrelation relative to a pre-war period. LT-CCD also allows for strict considerations of spatial baselines across InSAR pairs used for coherence estimation. With LT-CCD, we can restrict InSAR pairing to acquisitions with short spatial (perpendicular) baselines and help to mitigate potential effects of spatial decorrelation due to large differences in the point of illumination along the SAR orbit track [14]. Additionally, bi-temporal CCD approaches commonly apply an approach for "histogram matching" of coherence values for pre- and post-event coherence estimates [16, 33].
With such widespread damage in Gaza and a relatively small area of interest, a presumption of stable areas with which to adjust coherence values during the conflict is not necessarily appropriate. Instead, we opt to match the histograms of temporal baselines used for InSAR image pairing at each timestep in monitoring and for each stack of coherence images reduced to central tendencies and used for CCD (Methods Fig. 11). We find that the formation of image stacks using reference images acquired around the same time of each year, at similar orbital vantage points, and secondary images with the same distribution of temporal baselines does well to create similar coherence estimates when reduced to central tendencies, as expressed through the Hellinger distance comparisons above. Another important benefit of the LT-CCD approach is to mitigate signals of wartime construction. In Gaza, as the Israeli military conducted ground invasions, it also built military checkpoints and fortifications, including in areas it termed the Netzarim and Philadelphi corridors in central and southern Gaza. Because our long temporal-arc approach for estimating coherence only utilizes secondary images from the pre-war period, we are only sensitive to tracking areas with long-term pre-war coherence characteristics conducive to CCD. This means that, if an area is damaged during the war and subsequently built up with military fortifications, these military installations should not falsely register as damage because decorrelation of SAR signals relative to the beginning of the war has already been mapped as damage. As the war drags on, the gradual decay of coherence over time will lower the magnitude of mean coherence across an image stack and across the region (e.g., [38]).
Because the Israel-Hamas war is now in its second year, and because we have demonstrated that LT-CCD is robust to temporal decay phenomena for damage detection over a year of monitoring, this approach can be operationalized for monitoring longer than one-year durations by adjusting the pre-war benchmarking period after each year of monitoring. For example, pre-war reference periods for coherence estimation after the first year of fighting can shift to the second year, and all new damage can be reported as it was detected in the second year of monitoring as a strategy to mitigate the effect of increasing temporal baselines. Continued monitoring, regardless of the reported state of the conflict, is integral not only to capture damage but also to monitor ceasefire effectiveness. In November of 2023, we capture a slow-down in new damage detection during the six-day temporary ceasefire and hostage exchange. This underscores another important factor for high temporal fidelity and active monitoring of damage during ongoing armed conflicts. As ceasefire agreements are put in place, approaches like ours can monitor for signals representative of new damage during ceasefire periods. Physically-based and transparent data, produced agnostic of any political actors on the ground, are important to document potential violations of ceasefire agreements. These damage estimates can be generated within hours of a Sentinel-1 image down-link. The new temporal fidelity of rapidly-generated data on wartime damage to built-up areas is necessary for timely decision support at humanitarian organizations coordinating civilian recovery and response activities in war-torn areas. Timely insights enable public understanding of conflict impacts as communicated through journalistic organizations, assessment of knock-on effects such as rubble accumulation, dust exposure, and conflict-induced migration, and characterization of broader long-term environmental implications of armed conflict.
Low-cost approaches utilizing open source EO data can improve transparency and reproducibility and contribute to the better use of EO data for more holistically understanding the landscape impacts of war and conflict.

# 4 Methods

# 4.1 Geography

The Gaza Strip (Gaza) is located on the eastern boundary of the Mediterranean Sea and is besieged [39] by Israel to the north, east and west and bordered by Egypt to the south (Fig. 7). There are approximately 2.2 million people in Gaza as of 2023, distributed across an area of $365~\mathrm{km^2}$, making the enclave one of the most densely-populated areas in a continental setting on Earth [40]. In 2015, $14.56\%$ of the Gaza Strip ($43.59~\mathrm{km^2}$) was built-up with human settlements [41]. The average density of built-up area per hectare is $19.8\%$ with a standard deviation of $1{,}800~\mathrm{m^2}$ [41]. Intermixed perennial and annual vegetation is distributed within otherwise densely built-up areas of the Gaza Strip. On the periphery of densely built-up urban areas are agricultural regions. In peri-urban and rural areas outside of major urban centers, agriculture and farming are the primary land uses [42]. These agricultural regions include infrastructure such as greenhouses within and between agricultural plots. Figure 7: The Gaza Strip and its five governorates outlined over data on built-up density from the 2023 Global Human Settlement collection [41]. An inset map in the lower right panel outlines the inset zoom in red and includes recognized international boundaries [43].

# 4.2 Climate and Environment

Gaza is in a Mediterranean climatic environment consisting of generally warm and dry summer months with average daily surface air temperatures above 25°C from June - August [44]. Rainfall occurs from October through May, with the most average annual rainfall arriving between December and January [44].
Rainfall each year accumulates to 35 cm on average [44]. Following seasonal precipitation events, there is green-up of annual vegetation that contrasts with the dry season months, where greenness is limited to otherwise sparse evergreen perennial vegetation. Governorates to the south are slightly more arid than the north. Episodic, annual rainfall occurs in events that cause surface water runoff, flooding, and erosion [45]. Together, the annual green-up of vegetation [46], changes in soil moisture, episodic rainfall [47], and erosion [48] will drive changes in radar scattering characteristics that may confound the detection of damage.

# 4.3 Characteristics of urban damage

War-related damage to buildings in the Gaza Strip has several causes, including direct aerial impact, ground-based fighting, controlled demolitions [49], and damage from blast waves and ejecta from nearby detonations [50]. Aerial bombardment has reportedly included the use of thousands of conventional Mark-82 227 kg bombs [27], which, when exploded at the surface, have a blastwave radius where unimpeded structures within 31 m are destroyed due to combined effects from overpressure as blastwaves propagate and the resulting under-pressure in the blastwave wake [51, 52]. Heavier 500 kg Mark-83 and 1,000 kg Mark-84 bombs have also been dropped across Gaza at least several hundred times [30, 53], delivering much more destructive force with wider blast and lethality radii than the Mark-82 [27].

# 4.4 Data

# 4.4.1 Sentinel-1 interferometric coherence data products

InSAR coherence ($\gamma$) [54], a derivative of common InSAR processing, is often used to assess the reliability of InSAR measurements for phase change analysis applied to elevation or deformation mapping [55, 56, 57, 58]. Coherence has also become a metric used for operational building damage mapping in disaster contexts, but can be prone to non-damage drivers of coherence loss (decorrelation).
When sensor and dielectric changes do not limit the reliability for monitoring areas with otherwise stable radar scattering characteristics, $\gamma$ is sensitive to subtle types of building damage not always visible in overhead optical imagery [59]. As a result, $\gamma$ has become the metric utilized by civilian space agencies and research groups to map damage from geohazard and extreme weather events with bi-temporal (before/after) change detection [60]. We use data from the Sentinel-1 constellation [61] to map indicators of damage. The constellation, with the first launch of Sentinel-1A in 2014, is a C-band (5.405 GHz, 5.6 cm wavelength) sensor and constitutes the first regular repeat satellite SAR system with openly-accessible data freely available to the public. Sentinel-1B was launched in 2016 and, together with Sentinel-1A, generated a record of 6-day repeat acquisitions across most land areas on Earth. Sentinel-1B was decommissioned in December of 2021 following failure of onboard equipment, leaving Sentinel-1A as the only satellite in the constellation to acquire data during the Israel-Hamas war. Nonetheless, Sentinel-1B acquisitions prior to the onset of the 2023 Israel-Hamas war are still important sources of data for coherence estimation. Beginning in April of 2024, a thruster anomaly on Sentinel-1A caused mission controllers to relax the precision of an orbital tube diameter, leading to larger than normal ground track deviations in acquisitions [62]. This drove an increase in perpendicular baselines for some acquisitions upwards of several hundred meters beyond average [63]. With perpendicular baselines greater than several hundred meters, coherence changes over urban areas can be dominated by the influence of spatial baseline decorrelation [64].
While Sentinel-1 acquired 97 images in total during the conflict period, we limit our monitoring to the 61 Sentinel-1 images described above and omit the rest of the record due to a combination of large ground-track deviations from average and finite computing resources. Nonetheless, the average frequency of temporal revisit over Gaza using these 61 images is conducive for pseudo-weekly monitoring of indicated damage across the first year of conflict. The Alaska Satellite Facility’s Hybrid Pluggable Processing Pipeline (HyP3) [65] is an on-demand cloud-based SAR processing API that includes interferometric processing with an instance of the GAMMA software [66]. We tasked HyP3 with custom job lists to process interferometric data products and generate gridded coherence estimates used in this study. Cloud-based InSAR processing tools like HyP3 enable scaling of CCD approaches with multi-temporal stacks of data available at each time step in monitoring, which would otherwise not be feasible to conduct in a timely manner given the need to download and process dozens of images on a typical desktop computer. Interferometric processing at HyP3 allows users to choose the window size within which coherence is estimated across neighboring pixels, where 10x2 pixels in range and azimuth (radar geometry) is the highest resolution available. This results in coherence data products with 40 m pixel spacing and 80 m nominal resolution, which we employ in this study. More information on the Sentinel-1 InSAR coherence products that we utilized in this study is available in the HyP3 documentation [65].

# 4.4.2 HOTOSM Building Footprint Data

Building footprints are used to restrict monitoring to regions with built-up structures known to be present before the start of the war and to quantify the total number of buildings damaged within Gaza and its constituent governorates.
The Humanitarian OpenStreetMap Team (HOTOSM) [67] began an update to its layer of building-footprint data over Gaza following the onset of the war. The effort to complete the record included dozens of volunteers manually outlining building footprints from nadir-viewing satellite optical imagery captured between 2019-2023 and sourced from Bing [17]. The updated records consist of 330,079 unique vector outlines representing building footprints distributed across the five governorates (Fig. 8), with an average building footprint area of $160~\mathrm{m^2}$. To our knowledge, these data constitute the most up-to-date manually-delineated building footprint dataset available as a pre-conflict reference over Gaza. Figure 8: OSM building footprints with the count of building footprints in each governorate.

# 4.4.3 UNOSAT damage locations

Damage locations visible in VHR satellite optical imagery are used to quantify agreement metrics against locations of radar-indicated damage. The United Nations Satellite Centre (UNOSAT) released nine sets of geospatial point vector data representing interpreted degrees of building damage. UNOSAT data are widely ingested by journalists and humanitarians in conflict settings, have been used as ground truth data in some studies (e.g., [68, 31]), are not field validated, and are known to have errors of omission because not all forms of damage are visible to the naked eye in VHR optical data [31]. Nonetheless, these data constitute perhaps the most complete and publicly-available representation of damage to buildings in Gaza across the study period. Point-level damage data mapped by UNOSAT include 163,778 locations of aggregate damage as of September 3-6, 2024, following analysis of 30 cm and 50 cm resolution optical data acquired by the Worldview-2 and Pleiades sensors (Fig. 9).
UNOSAT generates these "comprehensive damage assessment" (CDA) data by labeling locations with four primary levels of interpreted building damage severity. These interpreted levels of damage severity have no basis in structural engineering for damage classification. Instead, these labels are intended to classify damage severity by categorizing the extent to which damage is visible in VHR imagery acquired overhead or slightly off-nadir. UNOSAT reports damage with labels corresponding to interpreted damage severity. The most severe category of UNOSAT-interpreted damage is labeled as "destroyed" if the analyst interprets at least half of a structure to be collapsed [7]. "Severe damage" is labeled if a roof has fully collapsed or parts of a structure are visibly collapsed. "Moderate damage" is assigned if an analyst observes damage to a rooftop and the emplacement of "large debris/rubble or sand deposits" around a building [7]. A final category of "possible damage" is assigned if "small traces of debris/rubble or sand" are emplaced adjacent to a building. Possible damage is also labeled if damage interpretation is uncertain, or if a building is surrounded by damaged or destroyed buildings. For the first six UNOSAT CDA data releases, the "possible damage" category was omitted. Beginning in April of 2024, the "possible damage" category was added to the UNOSAT CDA data releases and marked a 39% increase in the number of buildings labeled as damaged by UNOSAT between March and April. The possible damage category was then added retrospectively to all prior assessments, generating a record with consistent labels across the study period. UNOSAT generated nine CDA surveys in total across the study period, delineating 928,397 locations over time (Table 4). For three surveys, where VHR optical scenes analyzed by UNOSAT did not fully cover the Gaza Strip, images from two different dates with 1-3 days of lag time between acquisitions were combined to generate full coverage over Gaza.
UNOSAT combined data from images acquired on January 6 and 7, 2024 to produce one Gaza-wide assessment in January 2024, similarly combined data from the end of March and early April for the April release, and finally combined data collected in early September for the final survey released during the study period (Table 4). It is novel for UNOSAT to conduct repeat surveys across the same extent and at somewhat regular time intervals. UNOSAT released nine surveys during the study period with roughly one month of latency between image acquisition and publication of results, constituting a robust spatiotemporal set of point-level data on damage labels to compare to our automated CCD damage detections. Three of the surveys utilized VHR optical images acquired on different dates, but within a few days of each other (Supplementary table). We utilize these spatiotemporal UNOSAT data at each specific image date to report agreement in regions valid for CCD monitoring (Table 7) and for all UNOSAT locations including those outside of CCD-valid monitoring areas (Supplementary Table 8). Figure 9: UNOSAT data released during the study period aggregated to our 40m coherence pixel grid. All damage categories from UNOSAT are combined; grid cells with new damage are plotted in orange while grid cells with older damage are styled in blue. Table 4: Number of locations of damage or destruction reported by UNOSAT for each date of imagery analyzed.

# 4.5 Long temporal-arc coherent change detection (LT-CCD)

Our method uses two single reference images and two dozen pre-war secondary Sentinel-1 images to construct two stacks of coherence estimates used for CCD at each time step. We select InSAR reference scenes acquired from similar orbital vantage points and at similar times in a seasonal cycle – one during a pre-war period corresponding to each wartime acquisition.
We design InSAR pairs to have matching distributions of temporal baselines across pre-war and wartime monitoring periods, and distill these coherence image stacks to stack averages and standard deviations for LT-CCD. We draw from concepts in two modalities of radar interferometry for our approach to CCD. In persistent scatterer (PS) interferometry [56], long-term coherence characteristics across multi-temporal coherence image stacks are used to identify regions considered stable for monitoring sub-centimeter scale deformation with phase changes [69]. Similarly, short baseline subsets (SBAS) interferometry [58] is designed to minimize the effect of spatial (or perpendicular) and temporal baseline decorrelation using sets of InSAR pairs with minimum temporal and spatial baseline offsets for phase change applications. For each overpass we analyze during the conflict monitoring period, we assemble single-reference image stacks of InSAR pairs using each image acquired during the conflict and images acquired before the conflict, similar to single reference image InSAR stack formation for PS estimation. We then sample this stack of conflict period coherence images for the mean coherence at each pixel across the stack of data $(\overline{\gamma}_t)$. Because of strong seasonal drivers related to coherence generally (e.g., [38]), we select a single reference image acquired around the same time during the year prior and with the smallest perpendicular baseline available to construct a stack of pre-conflict InSAR pairs (Fig. 10). We select the acquisition with the shortest perpendicular baseline acquired roughly one year before each conflict image acquisition. We intend to minimize differences in spatial baselines between pre-conflict and conflict reference images while also using data acquired at a similar time of the year.
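The reference-scene selection described above (seasonally matched, smallest perpendicular baseline) can be sketched as a simple catalog filter. The acquisition dates, baseline values, and the 45-day seasonal window below are hypothetical illustrations, not values from the study.

```python
from datetime import date, timedelta

# Hypothetical catalog: (acquisition date, perpendicular baseline in meters
# relative to the conflict reference scene).
catalog = [
    (date(2022, 10, 3), 85.0),
    (date(2022, 10, 15), 22.0),
    (date(2022, 11, 8), 12.0),
    (date(2023, 2, 1), 5.0),
]

def pick_pre_conflict_reference(conflict_date, catalog, window_days=45):
    """Among acquisitions within `window_days` of one year before the
    conflict scene (seasonal match), take the smallest |B_perp|."""
    target = conflict_date - timedelta(days=365)
    candidates = [
        (d, bperp) for d, bperp in catalog
        if abs((d - target).days) <= window_days
    ]
    return min(candidates, key=lambda item: abs(item[1]))

ref = pick_pre_conflict_reference(date(2023, 10, 20), catalog)
print(ref)  # the 2022-11-08 scene: in-season with the smallest baseline
```

Note that the February scene, despite its smaller baseline, is excluded by the seasonal window, mirroring the paper's intent to match both season and spatial baseline.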
Figure 10: Schematic of LT-CCD stack formation. Pre-war secondary InSAR images (2021-2024) are paired with counterfactual, pre-conflict, and conflict reference images, with InSAR pairing chosen to match temporal baseline distributions across the counterfactual, pre-conflict, and conflict coherence image stacks; each stack is reduced to mean coherence (and, for the pre-conflict stack, its standard deviation) before coherent change detection. Building damage assessment with multi-temporal SAR using the average of coherence characteristics estimated across numerous combinations of InSAR pairs captures damage to buildings more accurately than the use of single coherence images for change detection [16, 70]. We generate pixel-wise statistics on multi-temporal coherence characteristics as an approach intended to deal with exogenous drivers of coherence unrelated to building damage. This approach is informed by past work in space and time coherent change detection [70] and has other more recent precedent [16]. For change detection at each time step in monitoring, we form one stack of up to 25 coherence images using a single reference image acquired during the conflict and reduce that stack of coherence images down to an average value. We assemble a stack of coherence images with a reference from a pre-conflict period and similarly reduce that stack to the mean and standard deviation of coherence at each pixel. These pixel-wise summaries of coherence characteristics during each epoch are the data that we use to conduct CCD.

# 4.5.1 Conflict, pre-conflict, and counterfactual periods

We define three epochs to detect changes during the conflict and quantify metrics for agreement [6]. A conflict monitoring period constrains the duration for damage monitoring (12 October 2023 - 31 October 2024). A pre-conflict baseline period establishes reference coherence characteristics to compare against coherence characteristics during the conflict.
A final pre-conflict counterfactual period, during a time without active fighting, constrains false positive and true negative rates of damage detection. Because no data are field validated and a true accuracy assessment is not possible in this study, we do not quantify false positives and true negatives directly from hits and misses between CCD and ancillary data on building damage during the conflict; instead, the counterfactual monitoring period serves to quantify rates of true negatives (no damage classified) and false positives (damage falsely classified) during a time when active fighting and bombardment were not taking place. We will refer to the conflict, pre-conflict baseline, and counterfactual periods throughout the remainder of the text accordingly. Sentinel-1A has three orbits with complete coverage over Gaza during the ongoing conflict: two ascending direction orbits (paths 160 and 87) and one descending orbit (path 94). In total, we consider 61 Sentinel-1 images with complete coverage over Gaza acquired during the conflict monitoring period. Each of these 61 images serves as a reference to form a total of 1,433 coherence images for the conflict monitoring period. We compare these conflict period coherence estimates to about as many coherence images assembled during the pre-conflict baseline period and, finally, quantify false positives and true negatives using coherence image stacks assembled during the pre-conflict counterfactual period (Table 5), where each counterfactual set of data corresponds to one of nine ancillary datasets on building damage against which we assess agreement with CCD. Table 5: Count of unique Sentinel-1 image acquisitions during each monitoring epoch and the number of total coherence images generated for each epoch.
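The three-epoch setup above can be represented as a simple date-range lookup. Only the conflict window (12 October 2023 - 31 October 2024) is specified in the text; the pre-conflict and counterfactual windows below are illustrative placeholders, since their exact date ranges are defined elsewhere in the methods.

```python
from datetime import date

# Conflict window from the text; the other two windows are placeholders.
EPOCHS = {
    "conflict": (date(2023, 10, 12), date(2024, 10, 31)),
    "pre_conflict": (date(2022, 10, 12), date(2023, 10, 6)),     # placeholder
    "counterfactual": (date(2021, 10, 12), date(2022, 10, 11)),  # placeholder
}

def epoch_of(acquisition_date):
    """Return the monitoring epoch an acquisition falls into, or None."""
    for name, (start, end) in EPOCHS.items():
        if start <= acquisition_date <= end:
            return name
    return None

print(epoch_of(date(2024, 5, 6)))  # conflict
print(epoch_of(date(2020, 1, 1)))  # None (outside all epochs)
```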
# 4.5.2 Matching distributions of temporal baselines

As the conflict progresses, the temporal offset between conflict acquisitions used as reference images for InSAR pairing and pre-conflict secondary images increases. This shifts the distribution of temporal baselines for each conflict monitoring period stack further from the pre-conflict stacks as the conflict proceeds, likely driving pronounced coherence decreases due to temporal decay alone [54, 71]. To account for shifting distributions of temporal baselines over the study duration, we present an unconventional approach to InSAR pairing for CCD and assemble InSAR stacks using image pairs with matching distributions of the absolute value of temporal baselines across all epochs. This results in data across all three epochs with essentially identical distributions of temporal baselines and very similar distributions of perpendicular baselines (Fig. 11).

Figure 11: Distributions of perpendicular baselines and the absolute value of temporal baselines.

# 4.5.3 Optimizing the number of InSAR pairs at each time step

At each time step we assemble no more than 25 coherence images for each pre-conflict, conflict, and counterfactual coherence image stack. We find that twenty-five images is sufficient to converge on general tendencies without the need for additional data. In all cases, the average number of coherence images that we assemble for each epoch at each time step is about two dozen. Failure of InSAR processing at HyP3 during some time steps results in small differences in the total number of coherence images formed at each time step but, in all cases, the number of coherence images is greater than or equal to 15, which we identify as a lower bound important to minimize residual noise due to a reduced sample of coherence images.
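The text above does not spell out an algorithm for selecting pairs with matching baseline distributions while capping each stack at 25 images; one plausible greedy sketch (the pair identifiers and candidate structure are hypothetical) is:

```python
def match_baseline_distribution(target_baselines, candidates, max_pairs=25):
    """Greedily pick InSAR pairs whose |temporal baseline| tracks a target set.

    target_baselines: |temporal baselines| (days) from the conflict-period stack
    candidates: list of (pair_id, abs_temporal_baseline) tuples for another epoch
    Returns up to max_pairs candidate pairs, sampled without replacement.
    """
    chosen, pool = [], list(candidates)
    for tb in sorted(target_baselines)[:max_pairs]:
        if not pool:
            break
        best = min(pool, key=lambda c: abs(c[1] - tb))  # nearest-baseline candidate
        chosen.append(best)
        pool.remove(best)
    return chosen
```

Matching on the absolute temporal baseline keeps decorrelation due to temporal decay comparable across the conflict, pre-conflict, and counterfactual stacks.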
We seek to balance the volume of data under consideration with the speed and ability to converge on a similar result. We find that a stack of images converges toward a long-term average coherence value with at least 15 images, and that adding images beyond 25 is not necessary given the additional computational costs.

# 4.5.4 Damage classification

For damage classification, we adapt a CCD approach for conflict damage mapping over long durations [6] and compare mean coherence estimates from each multi-temporal stack of coherence images during the conflict monitoring period ($\overline{\gamma}_t$) to the mean ($\overline{\gamma}_{pre}$) and standard deviation ($\sigma_{pre}$) of pre-conflict coherence at each time step using a fixed threshold and z-score metric. We classify potential damage if the coherence decrease is below a fixed threshold ($k$) and if $\overline{\gamma}_t$ is two standard deviations below $\overline{\gamma}_{pre}$ (Eq. 1-3). Following initial detection of damage, we require that damage is detected again at least once during the following month of image acquisitions. This process is intended to remove false alarms from coherence decreases that do not recur across the record and are unrelated to signals of likely damage.

$$ \Delta\gamma_t = \overline{\gamma}_t - \overline{\gamma}_{pre} $$

$$ z_t = \frac{\Delta\gamma_t}{\sigma_{pre}} $$

$$ \mathrm{Damage}_t = \begin{cases} \Delta\gamma_t < k & \text{CCD threshold met} \\ z_t < -2 & \text{z-score threshold met} \end{cases} $$

# 4.5.5 Classifying area valid for monitoring

We quantify which areas are valid for CCD monitoring using the z-score metric from Eq. 2 relative to the fixed threshold used for CCD ($k$).
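A minimal NumPy sketch of the pixel-wise rule in Eq. 1-3 (the default value of the threshold $k$ is illustrative, not the paper's):

```python
import numpy as np

def classify_damage(gamma_t, gamma_pre_mean, gamma_pre_std, k=-0.1):
    """Pixel-wise CCD damage classification following Eq. 1-3.

    gamma_t: mean conflict-period coherence at one time step (2-D array)
    gamma_pre_mean, gamma_pre_std: pre-conflict coherence mean and std
    k: fixed coherence-change threshold (illustrative value)
    """
    delta = gamma_t - gamma_pre_mean          # Eq. 1: coherence change
    z = delta / gamma_pre_std                 # Eq. 2: z-score vs. pre-conflict
    return (delta < k) & (z < -2)             # Eq. 3: both criteria must be met
```

The paper additionally requires a detection to recur within the following month before it is retained; that persistence test is omitted from this sketch.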
A pixel is valid for damage monitoring if the fixed CCD threshold ($k$), expressed as a pixelwise z-score, falls two standard deviations below $\overline{\gamma}_{pre}$. We illustrate the process for delineating image regions valid for monitoring in Figure 12. This mask is important to quantify the portion of UNOSAT data that falls within areas valid for CCD monitoring and to assess how well CCD monitors areas also monitored by UNOSAT. In total, 282,009 (85.58%) of OSM building footprints fall within areas valid for monitoring during at least two CCD surveys, the minimum needed to be classified as damage. Additionally, CCD is valid for monitoring 86.7% (804,858) of UNOSAT locations over time.

Figure 12: (A) Pixelwise mean pre-conflict coherence across a stack of images with a single reference image acquired on 17 December 2022. (B) Standard deviation of pre-conflict coherence across all coherence estimates generated using the same single reference image. (C) Areas valid for monitoring (white) and invalid (black) at this time step in monitoring.

# 4.5.6 Accounting for number of buildings in damage-affected areas

To tabulate the total number of damaged buildings, we calculate the fraction of each building footprint covered by pixels labeled as damage and, if that fraction is greater than or equal to 99% of the building footprint area, we mark that building as likely damaged or destroyed. To report the total fraction of buildings damaged or destroyed in the Gaza Strip and its constituent governorates, we divide the number of building footprints labeled with CCD damage by the total number of buildings mapped in each governorate. We then report the fraction of buildings damaged in each governorate over time across the conflict monitoring period.
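The footprint-coverage rule above can be sketched against a rasterized building-label map (a simplification: the paper intersects damage pixels with vector OSM footprints, and the names here are hypothetical):

```python
import numpy as np

def tally_damaged_buildings(damage_mask, footprint_labels, threshold=0.99):
    """Return ids of buildings whose footprint is >= threshold covered by damage.

    damage_mask: boolean raster of CCD-labeled damage pixels
    footprint_labels: integer raster; 0 = background, i > 0 = building id
    """
    damaged = []
    for bid in np.unique(footprint_labels):
        if bid == 0:
            continue
        footprint = footprint_labels == bid
        coverage = damage_mask[footprint].mean()   # fraction of footprint flagged
        if coverage >= threshold:
            damaged.append(int(bid))
    return damaged
```

The governorate-level fraction is then simply the count of damaged buildings divided by the number of mapped buildings in that governorate.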
# 4.5.7 Agreement with UNOSAT data

We use CDA data from UNOSAT to quantify agreement metrics across the conflict and counterfactual monitoring epochs (Table 6). We quantify false positives and true negatives using coherence data from the counterfactual epoch, when damage due to conflict did not take place. Because we do not attempt to predict the degree of damage severity at the level of an individual building, and because past studies have found that VHR optical imagery can only reliably discriminate between full and partial building collapse [72], we combine all UNOSAT categories of damage into one category of damage or destruction, similar to other studies comparing detected damage with UNOSAT data [31]. To quantify true positives and false negatives, we compare cumulative CCD-indicated damage at each time step following the release of each UNOSAT survey in Gaza. For each comparison with UNOSAT data, we select the nine CCD surveys immediately following the date of each image acquisition used for UNOSAT and accumulate the total damage indicated by CCD during both conflict and counterfactual periods up to the date of each UNOSAT survey, generating estimates of cumulative damage through the point in time that each UNOSAT survey was conducted. To quantify false positives and true negatives, we aggregate all damage detected across the counterfactual period in an identical manner. We do not apply any criterion for persistence of CCD damage signals in these agreement exercises because we seek to assess the performance of this approach for near-real-time reporting on conflict impacts. The persistence criterion is intended as a quality-control mechanism for harmonized and retrospective damage data as monitoring persists across a conflict's duration.

Table 6: Metrics of agreement for comparison of CCD damage data with UNOSAT-reported damage locations.
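A sketch of how the agreement counts behind Table 6 could be assembled from sets of location identifiers (the set-based structure and names are hypothetical; conflict-epoch detections supply hits and misses, while counterfactual-epoch detections supply false positives and true negatives):

```python
def agreement_rates(ccd_conflict, ccd_counterfactual, unosat_damaged, monitored):
    """Detection and false-positive rates from cumulative CCD detections.

    ccd_conflict: locations flagged by CCD up to a UNOSAT survey date
    ccd_counterfactual: locations flagged during the counterfactual epoch
    unosat_damaged: UNOSAT-reported damage locations
    monitored: all locations valid for CCD monitoring
    """
    tp = len(unosat_damaged & ccd_conflict)        # hits
    fn = len(unosat_damaged - ccd_conflict)        # misses
    fp = len(ccd_counterfactual)                   # no true damage in this epoch
    tn = len(monitored - ccd_counterfactual)
    detection_rate = tp / (tp + fn) if (tp + fn) else 0.0
    false_positive_rate = fp / (fp + tn) if (fp + tn) else 0.0
    return detection_rate, false_positive_rate
```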
# 4.5.8 Assessing exogenous drivers of coherence decreases

To gauge whether exogenous processes drive coherence decreases unrelated to building damage, we examine coherence characteristics in a region of Israel without reports of extensive damage. We use two metrics to compare coherence characteristics in this pseudo-stable region at each time step in monitoring. The first metric, a Hellinger distance between the probability distributions of $\overline{\gamma}_t$ and $\overline{\gamma}_{pre}$ within the pseudo-stable sampling region, serves as an indicator of the similarity of the two distributions. A Hellinger distance ranges from 0 to 1: an identical pair of distributions has a Hellinger distance of 0, and probability distributions that are entirely dissimilar have a Hellinger distance of 1. In addition to the Hellinger distance, we examine the difference in average coherence between $\overline{\gamma}_t$ and $\overline{\gamma}_{pre}$ at each time step over the pseudo-stable area. Together, these metrics provide, at each time step in monitoring, metadata indicative of the similarity of coherence characteristics across the pseudo-stable area in Israel. For each time step in monitoring, we compare the pre-conflict and during-conflict distributions of coherence values using each of these two metrics and summarize those comparisons across all time steps in the results below.

# 5 Author contributions

C.S. conceived of the approach for LT-CCD and refined the methods following extensive discussions and trials conducted in close collaboration with J.V.D.H. C.S. engineered the data pipeline for scaling of InSAR processing to link ASF on-demand InSAR resources with cloud-based change detection using Google Earth Engine (GEE). Both C.S. and J.V.D.H. coordinated processing resources for uplift requests at ASF and GEE.
J.V.D.H. developed a codebase to post-process detected damage, aggregate damage detections by administrative units, and test for persistence of damage retrieval. C.S. authored the original manuscript and original figure artwork. J.V.D.H. provided editorial review and direct suggestions to both the manuscript text and figure artwork. Both authors discussed the interpretation of the results and contributed to revising and finalizing the manuscript text.

# 6 Supplementary materials

Table 7: Agreement metrics between CCD and UNOSAT locations in areas valid for CCD monitoring at each time step where UNOSAT analyzed VHR satellite optical imagery for visible damage.

Table 8: Agreement metrics between CCD and all UNOSAT locations at each time step.

Figure 13: (A) A region with false positives (31°24’17.93” N 34°22’07.77” E) in Google Earth basemap imagery acquired in April 2021. Note the open roofs and active construction of buildings. (B) The same region from Panel A imaged in May 2022. Note the completion of building rooftops. Between May 2022 and August 2023, solar panels were also added to many of these rooftops. These buildings register as false positives of damage during our counterfactual monitoring period because construction took place between the counterfactual and pre-conflict baseline periods. (C) A region with false negatives (31°23’39.07” N 34°22’15.71” E) in Google Earth basemap imagery acquired in May 2022. Note buildings situated on managed agricultural lands with orchards, row crops, and greenhouses situated throughout the scene. (D) The same region from Panel C imaged in October 2023. Note changes visible in the landscape, such as the establishment of row crops, senescence of a vegetated crop canopy, and new greenhouse structural elements.

# References

[1] Anniversary of October 7th Attack - United States Department of State, October 2024. [Online; accessed 17. Oct. 2024]. [2] Alice Masquelier-Page.
More than 55,000 Palestinians have been killed in the Israel-Hamas war, Gaza health officials say. Associated Press, June 2025. [3] HI. Occupied Palestinian Territories - Israel: 12,000 bombs dropped on Gaza, one of the most intense bombing campaigns in modern war, December 2023. [4] Jamon Van Den Hoek. The City is the Medium and Satellite Imagery Are a Prism. In Urban Remote Sensing, pages 325–333. John Wiley & Sons Ltd., Chichester, England, UK, September 2021. [5] Valerie Sticher, Jan D Wegner, and Birke Pfeifle. Toward the remote monitoring of armed conflicts. PNAS Nexus, 2(6):pgad181, 2023. [6] Corey Scher and Jamon Van Den Hoek. Nationwide conflict damage mapping with interferometric synthetic aperture radar: A study of the 2022 Russia–Ukraine conflict. Science of Remote Sensing, 11:100217, 2025. [7] International Working Group on Satellite-based Emergency Mapping (IWG-SEM). Building damage assessment chapter, 2018. Version 1.0. [8] M. M. Bennett, J. Van Den Hoek, B. Zhao, and A. V. Prishchepov. Improving Satellite Monitoring of Armed Conflicts. Earth’s Future, 10(9):e2022EF002904, September 2022. [9] Simon Plank. Rapid damage assessment by means of multi-temporal SAR—A comprehensive review and outlook to Sentinel-1. Remote Sensing, 6(6):4870–4906, 2014. [10] Pinglan Ge, Hideomi Gokon, and Kimiro Meguro. A review on synthetic aperture radar-based building damage assessment in disasters. Remote Sensing of Environment, 240:111693, 2020. [11] Qihao Huang, Guowang Jin, Xin Xiong, Hao Ye, and Yuzhi Xie. Monitoring urban change in conflict from the perspective of optical and SAR satellites: The case of Mariupol, a city in the conflict between RUS and UKR. Remote Sensing, 15(12):3096, 2023. [12] Ali Darvishi Boloorani, Mehdi Darvishi, Qihao Weng, and Xiangtong Liu.
Post-war urban damage mapping using InSAR: the case of Mosul city in Iraq. ISPRS International Journal of Geo-Information, 10(3):140, 2021. [13] Alessandro Ferretti, Claudio Prati, and Fabio Rocca. Permanent scatterers in SAR interferometry. IEEE Transactions on Geoscience and Remote Sensing, 39(1):8–20, 2002. [14] Andrew K Gabriel, Richard M Goldstein, and Howard A Zebker. Mapping small elevation changes over large areas: Differential radar interferometry. Journal of Geophysical Research: Solid Earth, 94(B7):9183–9191, 1989. [15] Oliver L Stephenson, Tobias Köhne, Eric Zhan, Brent E Cahill, Sang-Ho Yun, Zachary E Ross, and Mark Simons. Deep learning-based damage mapping with InSAR coherence time series. IEEE Transactions on Geoscience and Remote Sensing, 60:1–17, 2021. [16] Yanchen Yang, Chou Xie, Bangsen Tian, Yihong Guo, Yu Zhu, Ying Yang, Haoran Fang, Shuaichen Bian, and Ming Zhang. Large-scale building damage assessment based on recurrent neural networks using SAR coherence time series: A case study of 2023 Turkey–Syria earthquake. Earthquake Spectra, 40(4):2285–2305, July 2024. [17] The Power of Volunteers: Remote Mapping in Gaza and Other Conflict Areas, August 2024. [Online; accessed 30. Sep. 2024]. [18] Max Tani. Satellite companies are restricting Gaza images, November 2023. [Online; accessed 15. May 2025]. [19] Iran Update, November 24, 2023, February 2023. [Online; accessed 13. Nov. 2024]. [20] Iran Update, December 1, 2023, 2023. [Online; accessed 13. Nov. 2024]. [21] Clionadh Raleigh, Andrew Linke, Håvard Hegre, and Joakim Karlsen. Introducing ACLED: An armed conflict location and event dataset. Journal of Peace Research, 47(5):651–660, 2010. [22] Arafat Barbakh and Mohammed Salem. Israel presses ground offensive in southern Gaza, air strikes intensify. Reuters, December 2023. [23] Barak Ravid. Israel’s operation in Gaza’s Khan Younis expected to last 3 to 4 weeks more. Axios, December 2023. [24] Matthew Mpoke Bigg and Ameera Harouda.
Israel Says Its Military Has Encircled Khan Younis in Gaza. N.Y. Times, January 2024. [25] Joseph P Tustin. Why Dresden was bombed: A review of the reasons and reactions (unclassified). http://www.afhso.af.mil/shared/media/document/AFD-130523-051.pdf, 1954. [26] Robert A. Pape. Bombing to Win: Air Power and Coercion in War. Cornell University Press, Ithaca, NY, USA, 1996. [27] Ian Bott and John Paul Rathbone. Military briefing: the Israeli bombs raining on Gaza. Financial Times, December 2023. [28] Mk 82 Aircraft Bomb - GICHD, November 2024. [Online; accessed 15. Nov. 2024]. [29] Natasha Bertrand and Katie Bo Lillis. Exclusive: Nearly half of the Israeli munitions dropped on Gaza are imprecise ‘dumb bombs,’ US intelligence assessment finds. CNN, December 2023. [30] Tamara Qiblawi. ‘Not seen since Vietnam’: Israel dropped hundreds of 2,000-pound bombs on Gaza, analysis shows. CNN, December 2023. [31] Ellen M. Rathje, Jeff Bachhuber, Ranon Dulberg, Brady R. Cox, Albert Kottke, Clinton Wood, Russell A. Green, Scott Olson, Donald Wells, and Glenn Rix. Damage Patterns in Port-au-Prince during the 2010 Haiti Earthquake. Earthquake Spectra, 27:117–136, October 2011. [32] Tuan Ngo, Priyan Mendis, A. Gupta, and J. Ramsay. Blast Loading and Blast Effects on Structures – An Overview. Electron. J. Struct. Eng., (1):76–91, January 2007. [33] Timothy Martin O’Donnell. Quantitative Validation of NASA ARIA Damage Proxy Maps, 2024. [Online; accessed 16. Apr. 2025]. [34] Andrea Monti-Guarnieri, Marco Manzoni, Davide Giudici, Andrea Recchia, and Stefano Tebaldini. Vegetated target decorrelation in SAR and interferometry: Models, simulation, and performance evaluation. Remote Sensing, 12(16), 2020. [35] Ryo Natsuaki, Hiroto Nagai, Naoya Tomii, and Takeo Tadono. Sensitivity and limitation in damage detection for individual buildings using InSAR coherence—a case study in 2016 Kumamoto earthquakes. Remote Sensing, 10(2):245, 2018.
[36] José Manuel Delgado Blasco, Magdalena Fitrzyk, Jolanda Patruno, Antonio Miguel Ruiz-Armenteros, and Mattia Marconcini. Effects on the Double Bounce Detection in Urban Areas Based on SAR Polarimetric Characteristics. Remote Sensing, 12(7):1187, January 2020. [37] Erwan Rivault, Daniele Palumbo, and Dominic Bailey. Nearly 100,000 Gaza buildings may be damaged, satellite images show. BBC News, December 2023. [38] Josef Kellndorfer, Oliver Cartus, Marco Lavalle, Christophe Magnard, Pietro Milillo, Shadi Oveisgharan, Batu Osmanoglu, Paul A Rosen, and Urs Wegmüller. Global seasonal Sentinel-1 interferometric coherence and backscatter data set. Scientific Data, 9(1):73, 2022. [39] Israel, Blockade of Gaza and the Flotilla Incident | How does law protect in war? - Online casebook, October 2024. [Online; accessed 17. Oct. 2024]. [40] World Bank Open Data. World Bank Open Data, September 2024. [Online; accessed 29. Sep. 2024]. [41] Martino Pesaresi and Panagiotis Politis. GHS-BUILT-S R2023A - GHS built-up surface grid, derived from Sentinel-2 composite and Landsat, multitemporal (1975-2030), 2023. [42] United Nations Conference on Trade and Development. The Global Commodities Forum: 2015 report, 2015. Accessed: 2024-10-17. [43] Daniel Runfola, Austin Anderson, Heather Baier, Matt Crittenden, Elizabeth Dowker, Sydney Fuhrig, Seth Goodman, Grace Grimsley, Rachel Layko, Graham Melville, Maddy Mulder, Rachel Oberman, Joshua Panganiban, Andrew Peck, Leigh Seitz, Sylvia Shea, Hannah Slevin, Rebecca Youngerman, and Lauren Hobbs. geoBoundaries: A global database of political administrative boundaries. PLoS One, 15(4):e0231866, April 2020. [44] World Bank. Historical trends and variability in the West Bank and Gaza. https://climateknowledgeportal.worldbank.org/country/west-bank-and-gaza/trends-variability-historical, 2023. Accessed: 2024-09-29. [45] Hassan Al-Najjar, Anton Purnama, Korhan Özkan, and Mazen Abualtayef.
Analysis of extreme rainfall trend and mapping of the Wadi pluvial flood in the Gaza coastal plain of Palestine. Acta Geophys., 70(5):2135–2147, October 2022. [46] Maurizio Santoro, Urs Wegmuller, and Jan IH Askne. Signatures of ers–envisat interferometric sar coherence and phase of short vegetation: An analysis in the case of maize fields. IEEE Transactions on Geoscience and Remote Sensing, 48(4):1702–1713, 2009. [47] Chayma Chaabani, Marco Chini, Riadh Abdelfattah, Renaud Hostache, and Karem Chokmani. Flood mapping in a complex environment using bistatic tandem-x/terrasar-x insar coherence. Remote sensing, 10(12):1873, 2018. [48] K Schepanski, TJ Wright, and P Knippertz. Evidence for flash floods over deserts from loss of coherence in insar imagery. Journal of Geophysical Research: Atmospheres, 117(D20), 2012. [49] Leanne Abraham, Bora Erden, Nader Ibrahim, Elena Shao, and Haley Willis. Israel’s Controlled Demolitions Are Razing Neighborhoods in Gaza. N.Y. Times, July 2024. [50] Andrew Sorensen and William L. McGill. What to look for in the aftermath of an explosion? A review of blast scene damage observables. Eng. Fail. Anal., 18(3):836–845, April 2011. [51] Mk 82 Aircraft Bomb - GICHD, September 2024. [Online; accessed 29. Sep. 2024]. [52] P. A. Shirbhate and M. D. Goel. A Critical Review of Blast Wave Parameters and Approaches for Blast Load Mitigation. Arch. Comput. Methods Eng., 28(3):1713–1730, May 2021. [53] Robin Stein, Haley Willis, Ishaan Jhaveri, Danielle Miller, Aaron Byrd, and Natalie Reneau. A Times Investigation Tracked Israel’s Use of One of Its Most Destructive Bombs in South Gaza. N.Y. Times, December 2023. [54] Howard A Zebker, John Villasenor, et al. Decorrelation in interferometric radar echoes. IEEE Transactions on geoscience and remote sensing, 30(5):950–959, 1992. [55] Bingyuan Hao, Chao Ma, Guifang Zhang, and Lixun Kang. Analyzing decorrelation of multitemporal sar data on insar. 
In 2008 Congress on Image and Signal Processing, volume 1, pages 452–461. IEEE, 2008. [56] Andrew Hooper. A multi-temporal insar method incorporating both persistent scatterer and small baseline approaches. Geophysical research letters, 35(16), 2008. [57] Andrew Hooper, David Bekaert, Karsten Spaans, and Mahmut Arıkan. Recent advances in sar interferometry time series analysis for measuring crustal deformation. Tectonophysics, 514:1–13, 2012. [58] Piyush Shanker, Francesco Casu, Howard A. Zebker, and Riccardo Lanari. Comparison of Persistent Scatterers and Small Baseline Time-Series InSAR Results: A Case Study of the San Francisco Bay Area. IEEE Geosci. Remote Sens. Lett., 8(4):592–596, January 2011. [59] Shinki Cho, Haoyi Xiu, and Masashi Matsuoka. Backscattering characteristics of sar images in damaged buildings due to the 2016 kumamoto earthquake. Remote Sensing, 15(8):2181, 2023. [60] Pinglan Ge, Hideomi Gokon, and Kimiro Meguro. A review on synthetic aperture radar-based building damage assessment in disasters. Remote Sens. Environ., 240:111693, April 2020. [61] European Space Agency (ESA). Sentinel-1: The SAR Imaging Constellation for Land and Ocean Services. https://sentinel.esa.int/web/sentinel/missions/sentinel-1, 2024. [62] billhauer2. Thruster anomaly affects Sentinel-1 orbit control. Alaska Satellite Facility, April 2024. [63] European Space Agency. Increase of sentinel-1a orbital tube, 2024. Technical Note, ESA EOPG-EOPGMQ-TN-2024-12. [64] William Grey and Adrian Luckman. Mapping Urban Extent Using Satellite Radar Interferometry. Photogramm. Eng. Remote Sens., 69(9):957–961, September 2003. [65] Product guide - HyP3. [66] GAMMA Remote Sensing AG - Home, September 2024. [Online; accessed 30. Sep. 2024]. [67] Humanitarian openstreetmap team. https://www.hotosm.org/, 2023. Accessed: 2024-09-30. [68] Yanbing Bai, Bruno Adriano, Erick Mas, and Shunichi Koshimura. 
Building Damage Assessment in the 2015 Gorkha, Nepal, Earthquake Using Only Post-Event Dual Polarization Synthetic Aperture Radar Imagery. Earthquake Spectra, 33(1 suppl):185–195, December 2017. [69] Michele Crosetto, Oriol Monserrat, María Cuevas-González, Núria Devanthéry, and Bruno Crippa. Persistent scatterer interferometry: A review. ISPRS Journal of Photogrammetry and Remote Sensing, 115:78–89, 2016. [70] Andrea Virgilio Monti-Guarnieri, Maria Antonia Brovelli, Marco Manzoni, Mauro Mariotti D’Alessandro, Monia Elisa Molinari, and Daniele Oxoli. Coherent Change Detection for Multipass SAR. IEEE Trans. Geosci. Remote Sens., 56(11):6811–6822, July 2018. [71] Jungkyo Jung, Duk-jin Kim, Marco Lavalle, and Sang-Ho Yun. Coherent change detection using InSAR temporal decorrelation model: A case study for volcanic ash detection. IEEE Transactions on Geoscience and Remote Sensing, 54(10):5765–5775, 2016. [72] Ellen M. Rathje and Beverley J. Adams. The Role of Remote Sensing in Earthquake Science and Engineering: Opportunities and Challenges. Earthquake Spectra, 24(2):471–492, May 2008.
Aerial bombardment of the Gaza Strip beginning October 7, 2023 is one of the most intense bombing campaigns of the twenty-first century, driving widespread urban damage. Characterizing damage over a geographically dynamic and protracted armed conflict requires active monitoring. Synthetic aperture radar (SAR) has precedent for mapping disaster-induced damage with bi-temporal methods, but applications to active monitoring during sustained crises are limited. Using interferometric SAR data from Sentinel-1, we apply a long temporal-arc coherent change detection (LT-CCD) approach to track weekly damage trends over the first year of the 2023 Israel-Hamas War. We detect 92.5% of damage labels in reference data from the United Nations with a negligible (1.2%) false positive rate. The temporal fidelity of our approach reveals rapidly increasing damage during the first three months of the war focused in northern Gaza, a notable pause in damage during a temporary ceasefire, and surges of new damage as conflict hot-spots shift from north to south. Three-fifths (191,263) of all buildings are damaged or destroyed by the end of the study. With massive need for timely data on damage in armed conflict zones, our low-cost and low-latency approach enables rapid uptake of damage information at humanitarian and journalistic organizations.
# 1. Introduction

Cell profiling aims to create meaningful representations of cells, which can be utilized for validating compounds in drug discovery and understanding disease mechanisms [8]. Among various cell profiling methods, image-based profiling using microscope images is the most cost-effective approach for generating high-dimensional representations. Generating representations without human annotations has long been an important goal in computer vision, and recent advances in Self-Supervised Learning (SSL) have brought this goal closer. Although previous research has made significant progress in image-based profiling using computer vision techniques [5, 6, 26, 33, 34, 37], obtaining a generalizable feature extractor through self-supervised methods remains an open challenge. Although SSL has been highly successful in natural image processing, existing SSL methods are not directly suitable for cell profiling. The main difficulty arises from the distribution gap between fluorescently dyed microscope cell images and natural images. This gap introduces two challenges: 1) State-of-the-art SSL methods aim to learn representations that remain consistent across different views of objects, resulting in informative and generalizable representations. However, these methods depend on carefully designed image augmentations, and applying these augmentations directly to cell images is not appropriate and degrades performance due to dimensional collapse [21]. 2) Unlike natural images, where representations are generated from a 3-channel image, cell images require generating one representation from multiple images with several channels. These channels, which may include fluorescent and brightfield images, have lower information density than natural images. Effectively extracting and merging information from these channels is therefore crucial. To address the first challenge, we propose augmentations specifically tailored for cell images.
Given that the channels depend on fluorescent dyes, we introduce a channel-aware color jitter augmentation. Additionally, we simulate imaging noise through a microscope noise augmentation. To enhance the robustness of learned representations with respect to cell morphology and anisotropy, we include elastic transformations and random rotations. For the second challenge, to make our model adaptable to varying inputs, we use a single-cell image as input to the feature extractor. Then, we design a sequence of feature post-processing steps to merge outputs from multiple images within the same well, producing a final representation. Specifically, we first perform multi-granularity merging by concatenating the individual image representation with averaged dense representations. To merge representations from different sites within a well, we implement interpolation-based merging. To strengthen the causal relationship between representations and perturbation compounds, we apply a cross-plate concentration scheme. We empirically validate our method, named SSLProfiler, by pretraining and evaluating a ViT model on the cell image dataset [9]. We also examine the impact of key components of our proposed method. Our approach achieved first place in the Cell Line Transferability Challenge [4] at the CVDD workshop, CVPR 2025.

# 2. Related Works

# 2.1. Self-supervised Learning

Initially, SSL aimed at solving pretext tasks designed manually by humans [1, 14, 16, 20, 23, 27, 28]. Later, contrastive learning methods [10, 12, 18, 29, 30] significantly outperformed supervised pretraining approaches on image-level tasks, making SSL a mainstream method for model pretraining. Recently, non-contrastive learning, which aims to achieve consistency across different views [3, 7, 10, 11, 17, 18], has attracted increasing interest due to its better ability to generalize.
Non-contrastive methods have been successfully applied in various domains, including video representation learning [24, 25] and medical image processing [2]. Nevertheless, applying SSL to dense prediction tasks [39] or long-tailed scenarios [13] remains challenging. More importantly, how to effectively integrate SSL, especially non-contrastive approaches, into image-based cell profiling remains an open research question.

# 2.2. Image-based Cell Profiling

Image-based cell profiling has become a powerful method for measuring phenotypic differences across various cellular states. By utilizing high-throughput microscopy and advanced computational methods, this approach extracts detailed morphological features from cell images, providing valuable insights into cellular responses to different conditions. Earlier research in image-based profiling primarily focused on supervised or weakly supervised approaches [5, 8, 26, 36]. Recent developments have enhanced profiling by integrating contrastive SSL methods [33, 34, 37] to obtain richer and more informative representations. Nonetheless, exploring non-contrastive methods in cell profiling may further improve performance and generalization.

# 3. Method

# 3.1. Task Definition

This paper focuses on extracting meaningful representations from cell images. Specifically, we use images from the 2020_11_04_CPJUMP1 dataset provided in [9]. The dataset includes chemically and genetically perturbed cells from two cell lines: U2OS and A549. Each experiment is performed within a plate that contains multiple wells. Each well represents a distinct experimental condition with unique perturbations. Thus, we define the dataset as $\mathcal{D} = \{X_w^i\}_{i=1}^{N}$, where $N$ is the total number of wells.
Within each well, images are captured from either 9 or 16 distinct positions, represented as $X_w^i = \{\pmb{x}_j\}_{j=1}^{p_i}$, where $p_i$ is the number of positions in well $i$. Images from each position have 8 channels, consisting of five fluorescent and three brightfield channels. The goal is to train a robust feature extractor $f$ for each well $X_w$, enabling the learned representation to capture both cellular phenotypic features and causal effects of the applied perturbations. Although SSL methods have been successful with natural images, applying these methods to cell images is challenging for two main reasons. First, cell images provide multiple types of information compared to natural images. For instance, in natural image datasets such as ImageNet [22], representations are generated from a single image; in contrast, cell images require integration of data from various positions and multiple channels. Second, a significant distribution gap exists between cell images and natural images. Effective SSL methods heavily rely on data augmentation strategies, such as random cropping and color jittering [10], which must be carefully modified for cell images due to the distinct properties of their channels.

# 3.2. Data Preprocessing

The original dataset consists of 16-bit TIFF images. To reduce disk space usage and accelerate data loading during training, we first convert images to 8-bit format, aligning them with natural image standards, using:

$$ I(x, y) = \left\lfloor \frac{I(x, y) - \min(I)}{\max(I) - \min(I)} \times 255 \right\rfloor . $$

Next, we compute the mean and standard deviation for each channel, denoted by $\pmb{\mu}$ and $\pmb{\sigma}$. During both training and inference, the input images are normalized as follows:

$$ \pmb{x}_p = \frac{\pmb{x}_p - \pmb{\mu}}{\pmb{\sigma}} . $$

# 3.3.
Self-supervised Model Pretraining

# 3.3.1. Model Framework

We adopt a classical non-contrastive SSL framework based on a Siamese network, which comprises a student model $f_s$ and a teacher model $f_t$. Consistent with standard self-distillation methods [7, 31, 38], the student model learns by distilling knowledge from the teacher model, while the teacher model is updated using an Exponential Moving Average (EMA) of the student model parameters. A Vision Transformer (ViT) [15] is used as the backbone architecture. During pretraining, each position $\pmb{x}_p$ of an image is treated as a single input unit. Unlike natural images, cell images consist of 8 channels rather than three. Due to significant distribution differences between fluorescent and brightfield channels, and because some datasets only include 5 channels, we separately train two models for these channel types. Their features are then combined during testing. Empirical results indicate that this approach performs better than using all 8 channels simultaneously. For loss computation, we directly employ the DINO v2 framework without modification. It includes three main components: the $\mathcal{L}_{DINO}$ loss for instance-level information distillation [7], the $\mathcal{L}_{iBOT}$ loss for patch-level information reconstruction [38], and the $\mathcal{L}_{KoLeo}$ loss for regularization [32]. The overall loss function is:

$$ \mathcal{L}_{total} = \mathcal{L}_{DINO} + \lambda_1 \cdot \mathcal{L}_{iBOT} + \lambda_2 \cdot \mathcal{L}_{KoLeo}, $$

where $\lambda_1$ and $\lambda_2$ are hyperparameters. Since the loss functions remain unchanged, further details are omitted due to space limitations. Readers may refer to the original DINO v2 paper [31] for more details.

# 3.3.2.
Data Augmentation

The effectiveness of cross-view consistency SSL heavily depends on view generation, specifically data augmentation. Proper data augmentation promotes clustering of similar examples [19], whereas excessively strong augmentations can lead to dimensional collapse [21]. Thus, carefully designing augmentations for cell images is critical.

Adapted Color Jitter. Considering that channels in fluorescent images are more independent than those in RGB images, we propose a channel-aware color jitter augmentation that independently adjusts brightness and contrast per channel. Given an input image $I \in \mathbb{R}^{H \times W \times C}$, the augmented image $I'$ is computed as follows. Let $I_c$ represent the $c$-th channel of the input image. For each channel $c \in \{1, 2, \ldots, C\}$, brightness factors $\beta_c$ and contrast factors $\gamma_c$ are independently drawn from uniform distributions:

$$ \begin{array}{r} \beta_c \sim \mathcal{U}(\max(0, 1 - \alpha_b), 1 + \alpha_b), \\ \gamma_c \sim \mathcal{U}(\max(0, 1 - \alpha_c), 1 + \alpha_c), \end{array} $$

where $\alpha_b$ and $\alpha_c$ control the maximum brightness and contrast adjustments, respectively. The augmented image channel $I_c'$ is first adjusted for brightness:

$$ I_c^{(b)} = I_c \cdot \beta_c, $$

and subsequently adjusted for contrast relative to its mean $\mu_c$:

$$ \begin{array}{l} \displaystyle \mu_c = \frac{1}{HW} \sum_{x=1}^{H} \sum_{y=1}^{W} I_c(x, y), \\ I_c' = (I_c^{(b)} - \mu_c) \cdot \gamma_c + \mu_c. \end{array} $$

Additional Augmentations for Cell Images.
To simulate realistic microscopy imaging noise, we apply a microscope noise augmentation, which includes shot noise, dark current, and read noise. These noises are simulated using Poisson and Gaussian distributions. To address cell morphological variability and anisotropy, we also use elastic transformations and random rotations as additional augmentations. A detailed description is provided in Appendix 6 due to space limitations.

# 3.4. Feature Post Processing

So far, we have obtained the feature extractors. However, these extractors produce a representation for each position within a well. In this subsection, we investigate methods to combine these embeddings effectively to obtain strong representations for each well.

# 3.4.1. Multi-Granularity Information Merging

Since we use ViT models as the backbone, they naturally provide representations at different levels: the image level ([CLS] token) and the dense level (patch tokens). To effectively use these multi-level features, we concatenate the [CLS] token with the average of the patch tokens, forming the final output for position $p$:

$$ z_p = \mathrm{concat}(z_p^{\mathrm{CLS}}, z_p^{\mathrm{Patch}}). $$

# 3.4.2. Multi-Position Information Merging

Next, we combine representations from multiple positions into a single well-level representation. In the 2020_11_04_CPJUMP1 dataset, images are captured at either 9 or 16 positions. Typically, these representations can be merged through averaging or concatenation. However, direct concatenation is not suitable for cell profiling tasks because the representations from different wells would have varying dimensions, complicating downstream tasks. To solve this issue, we assume that positions are sampled in a symmetrical pattern. Based on this assumption, wells with 16 images are first reshaped into a $4 \times 4$ matrix, then interpolated into a $3 \times 3$ matrix.
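A minimal sketch of the two merging steps just described, assuming per-position embeddings of dimension $D$ and a hand-rolled bilinear resampling of the $4 \times 4$ grid at coordinates $(0, 1.5, 3)$ (the exact interpolation scheme is not specified in the text and is an assumption here):

```python
import numpy as np

def position_embedding(z_cls, z_patch):
    """Sec. 3.4.1: concatenate the [CLS] token with the mean of patch tokens."""
    return np.concatenate([z_cls, z_patch.mean(axis=0)])

def merge_positions(z):
    """Sec. 3.4.2 sketch: z is (P, D) with P in {9, 16}. A 16-position well is
    viewed as a 4x4 grid and bilinearly resampled to a 3x3 grid, so every well
    ends up with 9 vectors, which are then concatenated."""
    P, D = z.shape
    if P == 16:
        grid = z.reshape(4, 4, D)
        out = np.empty((3, 3, D))
        for i, r in enumerate((0.0, 1.5, 3.0)):
            r0 = int(r); r1 = min(r0 + 1, 3); wr = r - r0
            row = (1 - wr) * grid[r0] + wr * grid[r1]   # interpolate rows
            for j, c in enumerate((0.0, 1.5, 3.0)):
                c0 = int(c); c1 = min(c0 + 1, 3); wc = c - c0
                out[i, j] = (1 - wc) * row[c0] + wc * row[c1]  # then columns
        z = out.reshape(9, D)
    return z.reshape(-1)  # well-level representation by concatenation
```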
After this adjustment, all wells contain the same number of representations (9 representations), allowing for effective concatenation. In practice, we observe that concatenation typically provides better results than averaging.

# 3.4.3. Cross-Plate Representation Alignment

To further enhance the consistency of learned representations across different experiments, we apply a simple yet effective method. Specifically, we observed in our dataset that perturbations are linked to specific well positions, meaning that the same well positions across different plates are affected by identical compounds. Thus, aligning representations from the same well positions strengthens the causal relationship between learned representations and compound-induced perturbations. Considering representations from each well position as clusters, we first calculate the cluster centers:

$$ \mu_w = \frac{1}{N_p} \sum_{j=1}^{N_p} z_j^w, $$

where $N_p$ is the number of plates, and $z_j^w$ denotes the well representation at position $w$ on plate $j$. Then, we shift each well representation toward its cluster center as follows:

$$ z_j^w = \alpha \cdot z_j^w + (1 - \alpha) \cdot \mu_w, $$

where $\alpha$ is a hyperparameter controlling the degree of alignment. We find that this strategy enhances the causal relationship between cell representations and compounds, particularly excelling in the CVDD challenge [4]. However, it may slightly decrease generalization ability by reducing the phenotypic information captured in cell representations.

# 4. Experiments

# 4.1. Experiment Setup

Model Pretraining. We pretrained a ViT-Small-14 model from scratch for 100 epochs on the 2020_11_04_CPJUMP1 batch from [9]. For the feed-forward network, we employed SwiGLU [35].
We set the batch size to 128 and used a learning rate of $2 \cdot 10^{-5}$, with a warmup period of 10 epochs. We trained two separate models for the fluorescent and brightfield channels, respectively, and combined their outputs using the post-processing method described in Sec. 3.4.

Evaluation Protocol. We adopted the evaluation setup from the Cell Line Transferability challenge at CVPR 2025 [4], which uses a $k$-NN classifier as the downstream task. Specifically, we trained a $k$-NN classifier on the learned well representations to associate them with compound perturbations. We applied $K$-fold cross-validation to split the dataset into training and evaluation subsets. To assess the robustness of the representations, the evaluation included tests both within and across cell lines. Further details can be found in [4].

# 4.2. Main Results

We present an analysis of the key components in Tab. 1. In this table, ‘Local’ refers to our reimplemented evaluation, while ‘leaderboard’ indicates the official results from the challenge. The baseline submission refers to example representations provided by the challenge organizers. Below, we detail each component included in our analysis:

Table 1. Ablation study of the key differences between DINO v2 and SSLProfiler.

• 8 channels input indicates the adaptation of the DINO v2 framework to the cell image dataset, with only the number of channels changed.
• Patch representations refers to the method in Sec. 3.4.1.
• Separate training means we trained two separate models for fluorescent and brightfield channels. This approach improved both training stability and model performance.
• Training res 518 means we increased the training resolution of the global view to $518 \times 518$. This significantly improved the results, since higher-resolution images are necessary to capture detailed information about small cells.
• Adapted augmentations refers to the augmentation techniques described in Sec. 3.3.2, confirming the effectiveness of our approach.
• ViT-Base-14 and 400 epochs means we increased the backbone model size and training duration, achieving better performance.
• Cross plate alignment refers to the post-processing method described in Sec. 3.4.3. While this method significantly improved results in the challenge, it may lead to overfitting and reduced generalization. Therefore, we recommend using it only for this specific challenge.
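The cross-plate alignment of Sec. 3.4.3 and the $k$-NN evaluation of Sec. 4.1 might be sketched as follows; the cosine-similarity metric and majority vote in the classifier are assumptions, as the paper does not pin down those details:

```python
import numpy as np

def cross_plate_align(z, alpha):
    """Sec. 3.4.3: z is (N_plates, D), the representations of one well position
    across plates. Each row is shifted toward the cross-plate mean mu_w."""
    mu = z.mean(axis=0)
    return alpha * z + (1 - alpha) * mu

def knn_predict(train_z, train_y, test_z, k=5):
    """k-NN evaluation sketch: cosine similarity, majority vote over the k
    nearest training wells (protocol details here are assumptions)."""
    def norm(x):
        return x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-12)
    idx = np.argsort(-(norm(test_z) @ norm(train_z).T), axis=1)[:, :k]
    preds = []
    for row in np.asarray(train_y)[idx]:
        vals, counts = np.unique(row, return_counts=True)
        preds.append(vals[np.argmax(counts)])
    return np.array(preds)
```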
Image-based cell profiling aims to create informative representations of cell images. This technique is critical in drug discovery and has greatly advanced with recent improvements in computer vision. Inspired by recent developments in non-contrastive Self-Supervised Learning (SSL), this paper provides an initial exploration into training a generalizable feature extractor for cell images using such methods. However, there are two major challenges: 1) There is a large difference between the distributions of cell images and natural images, causing the view-generation process in existing SSL methods to fail; and 2) Unlike typical scenarios where each representation is based on a single image, cell profiling often involves multiple input images, making it difficult to effectively combine all available information. To overcome these challenges, we propose SSLProfiler, a non-contrastive SSL framework specifically designed for cell profiling. We introduce specialized data augmentation and representation post-processing methods tailored to cell images, which effectively address the issues mentioned above and result in a robust feature extractor. With these improvements, SSLProfiler won the Cell Line Transferability challenge at CVPR 2025.
[ "cs.CV" ]
# 1 Introduction

With the rapid development of generative models in recent years [1, 2, 3, 4], image composition has received increasing attention owing to its capacity for controlled generation [5, 6, 7]. However, since the implanted foreground and the new background originate from different sources, this discrepancy can easily lead to an unrealistic perception of the composite image. In this paper, we focus on a challenging task named universal image relighting, which aims to seamlessly composite a subject into a new background while maintaining realism and aesthetic uniformity in terms of lighting and color tone. The background can either be specified by provided natural images (image-based relighting), or be created from unlimited text prompts (text-based relighting). This task ensures that users and digital elements coexist naturally within any environment and has wide application in virtual reality and intelligent editing for the film and advertising industries [8, 9].

Figure 1: Our DreamLight is a unified image relighting model capable of performing relighting guided by arbitrary text or natural background images without any other prior such as HDR maps. The top left of each set of figures is the original picture of the foreground, the bottom left is the given condition, and the right is the generated result.

Early research on relighting tends to focus on image conditions, e.g., image harmonization and portrait relighting, with scant exploration of text-based scenarios. Portrait relighting methods [10, 11, 12, 13] mainly take the idea of physics-guided design that explicitly models the image intrinsics and formation physics. They decompose the input images into several components and leverage a phased pipeline to incrementally learn image components such as surface normals and albedo maps. Some harmonization methods [14, 15] also take a similar disentanglement approach.
Despite the promising results achieved by these approaches, obtaining training data with pairs of high-quality relighting images and their corresponding intrinsic attributes from a light stage is expensive and difficult. Most harmonization methods [16, 17, 18, 19] take this task as an image-to-image translation paradigm and propose to perform pixel-level transformation based on an autoencoder architecture. However, they lack object semantic guidance and tend to overlook distinct variations in illumination. Recently, some works [20, 8, 21] exploit the powerful semantic modeling ability of generative diffusion models [1] to enhance the relighting effect, but their light source relies on given environment maps, and such maps are not always feasible to acquire in the real world. IC-Light [22] is proposed to address both image-based and text-based relighting for natural images without introducing other signal guidance. It imposes a light alignment training strategy to enhance light consistency. However, IC-Light uses two separate models for the two conditions and does not tailor the structure. Simply concatenating foreground, background, and noise limits the model’s understanding of foreground-background interactions, leading to severe color bleeding and foreground distortions in some scenarios. To address the above challenges, we present a model named DreamLight that can perform both image-based and text-based relighting. In DreamLight, to generate natural interaction effects of light and color tone between the foreground and the background, we propose a Position-Guided Light Adapter (PGLA), which condenses light information from different directions in the background into several groups of light query embeddings and selectively modulates the foreground area with direction-biased masked attention. In addition, we present an effective Spectral Foreground Fixer (SFF) as a post-processing module to enhance the consistency of foreground appearance and avoid subject distortion.
Based on the wavelet transform, SFF is trained to learn dynamic calibration coefficients for the high-frequency textures of the input foreground and the relighted low-frequency light. By adaptively reorganizing this information, SFF can output remarkably consistent foregrounds. Besides, to facilitate the training of our model, we develop different kinds of data generation processes, e.g., 3D rendering and training a relighting LoRA, to produce diverse training samples. We have performed extensive quantitative and qualitative comparisons. Experiment results demonstrate that our DreamLight exhibits superior generalization and performance on the universal relighting of natural images. Related ablations also prove the rationality and effectiveness of the proposed designs. Furthermore, we observe that thanks to the unified learning of text and image conditions in a single model, our DreamLight can generate results with the guidance of both conditions. Our contributions can be summarized as follows:

• We propose a model named DreamLight for universal image relighting, which can seamlessly composite a subject into a new background with either image or text conditions. We also develop a high-quality data generation process to benefit the training of our model.
• We introduce a Position-Guided Light Adapter (PGLA), which is designed to enable foreground elements at different positions to interact with background light from various directions in a tendentious manner for generating more natural lighting effects.
• We propose a Spectral Foreground Fixer (SFF) module to adaptively reorganize different frequency components of the input and relighted subjects, which helps enhance the consistency of foreground textures.

# 2 Related Work

Image-based Relighting: There are two principal sets of related image-based relighting methods. The first is portrait relighting approaches [11, 10, 12, 23, 24, 25, 26, 8, 13, 27].
They mainly take the idea of physics-guided model design that typically involve the intermediate prediction of surface normals, albedo, and a set of diffuse and specular maps with ground truth supervision. To achieve that, some methods rely on the paired training data acquired with the light stage system [28] and a target HDR environment map as the external light source. However, the dependence on light stage data and HDR maps incurs substantial data collection cost and significantly limits their implementation in real-world situations, where obtaining HDR maps may not always be feasible. Some methods [29, 27] employ multi-stage frameworks. As a result, the accuracy and performance of these systems hinge on the precision of each individual stage. This makes the entire process complex and susceptible to errors that could propagate throughout these intermediate steps. The second is image harmonization methods that aim to match the color statistics of the foreground object with those of the background for natural composition [14, 16, 18, 19, 30, 17, 15, 31, 32, 33, 34]. These methods tend to take this task as an end-to-end image-to-image translation paradigm, where the network is trained to predict a harmonized image from the input composite. Some works [35, 36] collect pixel-aligned paired data by color transfer or altering foreground color in real images with pre-designed or learned augmentations. While these methods have achieved decent harmonization effects, they struggle to generate realistic and natural light interaction effects between the foreground and the background. Recently, some works [20, 8, 21] exploit generative models [1] to enhance relighting effect, but most of them still rely on the given environment maps for lighting guidance, which limits the potential application scenarios. 
IC-Light [22] proposes to perform pure natural image relighting by imposing a light alignment training strategy and achieves excellent performance for scenarios with strong lighting. However, its simple network design limits the model’s performance and is prone to generating severe color bleeding. To alleviate the above issues, we propose DreamLight, designed for natural images without any additional prior source. A position-guided light adapter is introduced to boost reasonable interaction between subjects and backgrounds, thus contributing to generating natural and harmonious lighting effects.

Text-based Relighting: Currently, there is relatively less research focused on text-based relighting. Although Lasagna [37] takes text as input, the text serves to indicate the light direction rather than to describe the target background. The pioneer IC-Light [22] utilizes two separate models to handle image-based and text-based relighting. Actually, some text-guided inpainting methods [38, 39] can achieve a similar effect. However, they tend to preserve the original lighting and color of the subject, which may result in unnatural results. Conversely, our DreamLight achieves powerful relighting performance on natural images using a single model for both image-based and text-based situations.

# 3 Method

# 3.1 Overview

Figure 2 illustrates the pipeline of our DreamLight. It takes a triplet of foreground image, background image, and text prompt as input. For image-based relighting, the text prompt is set to “blend these two images”. For text-based relighting, the background image is designed as an all-black image. The green and brown lines show the specific processes of image-based and text-based relighting, respectively. In detail, we first utilize a pretrained segmentation model to extract the subject region and remove the original background from the input image.
Following [40, 22], the masked subject and background are encoded by a VAE and then concatenated with the random noise to serve as the input for the diffusion UNet. For image-based relighting, a Position-Guided Light Adapter (PGLA) is proposed to selectively inject background light information from different directions into the foreground with direction-biased masked attention. Finally, a Spectral Foreground Fixer (SFF) is utilized to enhance the consistency of the foreground.

Figure 2: Pipeline of our DreamLight. It takes the foreground image, background image, and text prompt as input. The masked foreground and background images are encoded by a pretrained VAE and concatenated with the random noise to serve as the input of the UNet. The Position-Guided Light Adapter is proposed to selectively inject background light from different directions into the foreground at different locations for more natural relighting results. The Spectral Foreground Fixer is utilized to enhance the consistency of the subject.

# 3.2 Position-Guided Light Adapter

Although simply concatenating the background with random noise can convey some information about environment lighting, it imposes a pixel-aligned light prior and overlooks the natural interaction between the subject and light from various directions in the background, thus resulting in undesirable relighting results in some scenes. To alleviate this problem, we propose the Position-Guided Light Adapter (PGLA), which enhances the foreground’s response to light sources from different directions in the background while reducing potential unreasonable light alignment. This is achieved by additional encoding and organization of the background light information. Specifically, the overall pipeline of our PGLA is based on the process of IP-Adapter [41]. We first utilize a CLIP image encoder to encode the target background image into a feature map $f_b \in \mathbb{R}^{H \times W \times C}$, where $C$ indicates the embedding dimension.
$H, W$ denote the spatial size of the feature map. To mitigate the potential noise effects of background textures on light information extraction, we perform a low-frequency enhancement operation on the encoded background features, which helps emphasize the high-level lighting and color tone information. The process can be formulated as:

$$ \begin{array}{c} g = \mathrm{Gaussian}(H, W, \sigma), \\ f_{bl} = \mathrm{FFT}(f_b) * g, \\ f_{bl} = \mathrm{IFFT}(\mathrm{ReLU}(\mathrm{Conv}(f_{bl}))) + f_b, \end{array} $$

where $\mathrm{FFT}$ and $\mathrm{IFFT}$ are the Fourier transform and inverse Fourier transform. $g$ denotes the filtering coefficient map, $\sigma$ is the cutoff frequency, and $*$ means element-wise product. The $\mathrm{Conv}$ and $\mathrm{ReLU}$ operations are utilized to update the features in the spectral domain for efficient global interaction. Then, we restructure the background features and encode light information from different directions into predefined light queries $f_Q$, which are randomly initialized learnable embeddings. Considering that the condition is a vanilla 2D natural background image, we assume that the light sources can be split into four basic directions: left, right, top, and down. Thus, we leverage four sets of query embeddings to selectively extract light information from the background features. To achieve that, we propose a direction-biased masked attention. As shown in Figure 3, we use a cross attention mechanism to condense the information of the background into the light queries. In detail, we initialize four sets of light queries and design them to separately perceive the background light sources of the four directions, i.e., left, right, top, and down. Taking the query $f_Q^{\mathrm{left}}$ corresponding to the left light as an example, we generate a coefficient map that decays from left to right with the same size as the background feature, as shown by the “left decay map” in Figure 3.
It is then flattened and multiplied with the initial attention weight of the corresponding regions to modulate the cross attention, making the query $f_Q^{\mathrm{left}}$ focus more on the information on the left side of the background feature. The light queries responsible for the other directions work similarly. Besides, we concatenate the light queries and background features as Key and Value to ensure interaction between different groups of queries, which contributes to the overall harmonization. Having these condensed light queries, the next step is to inject their information into the foreground area of the latent features in the UNet with similar masked attention.

Figure 3: Process of direction-biased masked attention. The upper figure is the design of the attention mask in cross attention. Here the darker color indicates a tendency toward 0. To allow different groups of queries to perceive different directions of light, we generate coefficient maps with different attenuation directions and multiply them with the original attention weights. For ease of understanding, we note the transformation of the left decay map in the figure. Besides, the attention weights between different queries are not modified, for overall light information harmonization.

That is, we adjust the attention weight of the condition cross attention so that different regions of the foreground object have a stronger response to nearby light sources. In addition to the exchange of Q and KV, we additionally add a mask to the background area to ensure that the background is not changed. Furthermore, as shown in Figure 2, we only inject these light priors in the middle and up blocks of the UNet. This is due to the fact that feature changes in down blocks may have an impact on the overall semantics [42]. We only need to change the object’s light based on its overall representation, which helps to avoid the potential distortion problems caused by extra information injection.
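The decay-map modulation described above might be sketched as follows; the paper specifies only that the coefficient maps decay away from one edge and are multiplied into the attention weights, so the linear ramp and renormalization here are assumptions:

```python
import numpy as np

def decay_map(h, w, direction):
    """Coefficient map decaying away from the named edge of the background
    feature map (the exact falloff shape is an assumption)."""
    ramp = np.linspace(1.0, 0.0, w)
    vert = np.linspace(1.0, 0.0, h)[:, None]
    return {"left":  np.tile(ramp, (h, 1)),
            "right": np.tile(ramp[::-1], (h, 1)),
            "top":   np.tile(vert, (1, w)),
            "down":  np.tile(vert[::-1], (1, w))}[direction]

def direction_biased_attention(weights, decay):
    """Multiply flattened decay coefficients into the query-to-background
    attention weights and renormalize. Query-to-query weights (not shown)
    would be left unmodified, as in the paper."""
    biased = weights * decay.reshape(-1)
    return biased / (biased.sum(axis=-1, keepdims=True) + 1e-12)
```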
With such designs, the composited subject elements can perform selective interaction with the background to produce more natural relighting results. Related experiments in Section 4.3 also justify the rationality of our designs.

# 3.3 Spectral Foreground Fixer

Diffusion-based methods tend to face the problem of foreground distortion, especially in small and detailed areas such as faces and text. On the one hand, the latent space of large-scale pre-trained models tends to embellish the foreground, which can lead to inconsistency and ID variation with the input. On the other hand, the encoding process of the VAE may cause information loss in small regions with high information density, leading to difficulties in maintaining the texture of the subject. Therefore, we propose a Spectral Foreground Fixer (SFF) to address this challenge.

Figure 4: Process of Spectral Foreground Fixer. The modulator is utilized to predict a set of coefficients for modulating the textures of the foreground with the light of the background.

This module is based on the assumption that the high-frequency components of an image correspond to pixels that vary drastically, such as object boundaries and textures, while the low-frequency components correspond to general semantic information such as color and light. As shown in Figure 4, we utilize the Wavelet Transform to extract the high-frequency and low-frequency components of the input foreground image and the initial predicted results. It can be seen that the high-frequency part maintains the details and textures, while the low-frequency part indicates rough colors and tones. Then we combine the high-frequency part of the input foreground image with the low-frequency component of the initial relighting result and feed them into a Modulator, which follows an image-to-image generation paradigm and is trained to predict a set of coefficients to reorganize this information. The reorganization process can be formulated as:
$$ \begin{array}{rl} & \alpha, \beta = \mathcal{M}(HQ_{in}, LQ_{out}), \\ & HQ_{in}' = HQ_{in} * \alpha + \beta, \\ & I_{out}' = HQ_{in}' + LQ_{out}, \end{array} $$

where $HQ$ and $LQ$ denote the high-frequency and low-frequency parts extracted by the Wavelet Transform, respectively. $\mathcal{M}$ is the modulator. $\alpha \in \mathbb{R}^{H \times W \times 3}$ and $\beta \in \mathbb{R}^{H \times W \times 3}$ are the predicted modulation coefficients. $\alpha$ is utilized to control the degree of influence of the foreground texture, and $\beta$ contains the balance information for harmonizing the foreground. Compared to directly reorganizing the foreground texture and background semantics, such a design helps to output more consistent and natural results, avoiding artifacts from forced combination. Finally, we replace the foreground region of the initial relighting image with the predicted $I_{out}'$ according to the foreground mask. The modulator $\mathcal{M}$ is individually trained in a self-supervised manner. Specifically, we perform random color transformations on arbitrary natural images to obtain pseudo pairs of relighting data, where the transformed image is the input and the original image serves as the target. Then we extract the high-frequency component of the transformed image and the low-frequency component of the original image as inputs to the modulator for modulation coefficient prediction. The modulator is expected to adaptively combine the high-frequency and low-frequency parts from different sources and avoid potential color noise or artifacts. We utilize MSE and perceptual loss [43] to supervise the learning of the modulator.
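The reorganization step might be sketched as follows; a 2x2 box-average split stands in for the wavelet decomposition, and $\alpha$, $\beta$ (produced by the trained modulator $\mathcal{M}$ in the paper) are passed in directly, so everything here beyond the three equations is an assumption:

```python
import numpy as np

def split_low_high(img):
    """Stand-in for the wavelet decomposition: a 2x2 box-average low-frequency
    component (upsampled back to full size) plus the residual high-frequency
    detail, so that low + high == img. Expects shape (H, W, C), H and W even."""
    h, w = img.shape[:2]
    low = img.reshape(h // 2, 2, w // 2, 2, -1).mean(axis=(1, 3))
    low = np.repeat(np.repeat(low, 2, axis=0), 2, axis=1)
    return low, img - low

def reorganize(hq_in, lq_out, alpha, beta):
    """The three equations above: HQ'_in = HQ_in * alpha + beta,
    then I'_out = HQ'_in + LQ_out."""
    return hq_in * alpha + beta + lq_out
```

With $\alpha = 1$ and $\beta = 0$, recombining an image's own components reproduces it exactly, which is a useful sanity check on the split.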
Furthermore, to promote training stability and improve coordination, the supervision is applied on both the predicted high-frequency part $HQ_{in}'$ and the entire output image $I_{out}'$.

# 3.4 Data Generation

We design a data generation pipeline to facilitate the training of our model. Our data has three sources. Firstly, we construct pairs by training a relighting OminiControl [44] LoRA in a bootstrapping manner, i.e., training, incorporating results into the training set, then continuing training. The initial set consists of 100 image pairs collected from time-lapse photography videos and self-photographed photos. After each training round, the model is used to relight vanilla images. High-quality pairs are selected and incorporated into the training set for continued training. We will open-source this relighting LoRA to benefit the community. Secondly, we utilize available 3D assets [45] to render a number of consistent images with lighting of different colors and directions. We construct an automatic rendering pipeline on the 3D Arnold Renderer, and generate various lighting effects with random light sources and HDR images for corresponding pairs. Finally, to enhance data diversity, we also process vanilla images with IC-Light [22] and filter out high-quality synthetic data pairs with an aesthetic score [46]. The prompts are generated through a two-step process: GPT-4 [47] initially brainstorms over 200 fundamental scenarios, which are then tailored by LLaVA [48] according to the main subjects present in the images. In total, the quantities of the three types of data are about 600k, 150k, and 300k, respectively. Please see the supplementary material for a detailed analysis of the training data.

Figure 5: Image-based relighting results. (a) and (b) demonstrate the comparisons with popular available image-based relighting methods, i.e., Harmonizer [34], INR [17], PCT [16], and IC-Light [22].
(c) shows the generalizability of our DreamLight to foregrounds of different categories and various styles of images. Please see the supplementary material for more qualitative results.

# 4 Experiment

# 4.1 Implementation Details

Our model is implemented in PyTorch [49] using $8\times$ A100 GPUs at $512 \times 512$ resolution. The main model and fixer model are trained separately. The main model is trained end-to-end with a batch size of 512. The learning rate is set to 5e-5. We leverage StableDiffusion-v1.5 [1] as the base generative model and CLIP-H [50] as the encoder for the position-guided light adapter. Following [22], we take RMBG-1.4 as the segmentation model for extracting the region of the subject. The spectral foreground fixer is finetuned on the VAE of StableDiffusion-v1.5. The cutoff frequency of the spectral filter is set to 5 by default. The number of light queries is set to 4 for each direction. The evaluation benchmark contains 600 high-quality image pairs rendered by the Arnold Renderer from real objects. For image-based relighting, we adopt the popular metrics of Peak Signal-to-Noise Ratio (PSNR), Structural Similarity (SSIM), Learned Perceptual Image Patch Similarity (LPIPS) [51], and the image similarity score calculated by CLIP [50] (CLIP-IS) to verify the effectiveness of our method. For text-based relighting, we utilize the image-text matching CLIP score, aesthetic score [46], and Image Reward (IR) [52] score to assess the plausibility of the generated results. The IR score is calculated by text-to-image human preference evaluation models trained on large-scale datasets of human preference choices. The aesthetic score is a linear model trained on image quality rating pairs of real images. Please refer to the supplementary material for more details and illustrations.

# 4.2 Main Results

Qualitative Comparison: In Figure 5 and Figure 6, we present relighting results of our method and the comparison with existing methods.
Our DreamLight achieves excellent performance on both image-based and text-based relighting in a single model. More relighting results can be found in the supplementary materials. From (a) and (b) in Figure 5, we can see that our method not only harmonizes the lighting of the foreground and background, but also more effectively models the interaction between light sources and objects within images. In addition to humans and natural backgrounds, we illustrate the generalization ability of our model for foregrounds of different categories and other stylistic images in (c). In Figure 6, we compare our method with IC-Light [22] and existing state-of-the-art prompt-guided inpainting methods, since existing studies have paid limited attention to relighting based on given prompts. Although inpainting-based methods are capable of generating backgrounds that accord with text prompts, they tend to keep the color and lighting of the foreground unchanged, leading to incongruity with the background. IC-Light, in contrast, grapples with excessive color variations in the generated images as well as distortions of the foreground. Our approach can produce relighting results with more natural light. Additionally, the bottom row illustrates the capacity of our method to maintain consistency for the subject. We can observe that all other methods generate results with significant facial distortion, whereas ours ensures excellent subject consistency. Figure 6: Results of text-based relighting (prompt: "Sunset over sea"). PowerPaint [39] and BrushNet [38] are powerful inpainting methods that can perform prompt-guided inpainting. To better illustrate, we demarcate the foreground, outpainting methods, and relighting methods with dashed lines. Table 2: Quantitative results of text-based relighting. PP denotes PowerPaint. Table 1: Quantitative comparison of image-based relighting.
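Of the image-based metrics reported in Table 1, PSNR is the simplest to reproduce; a minimal NumPy version (assuming images scaled to $[0, 1]$, which is an assumption of this sketch) might look like:

```python
import numpy as np

def psnr(ref, test, max_val=1.0):
    """Peak Signal-to-Noise Ratio between a reference and a test image."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: infinite PSNR
    return 10.0 * np.log10(max_val ** 2 / mse)
```

For example, a uniform error of 0.5 on a $[0, 1]$ image gives an MSE of 0.25 and thus a PSNR of $10 \log_{10} 4 \approx 6.02$ dB.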
Quantitative Comparison: Table 1 and Table 2 display the evaluation metrics of different methods for image-based and text-based relighting, respectively. Table 1 indicates that our approach can yield more consistent and harmonious outcomes when provided with a target background image. Results in Table 2 show that our DreamLight exhibits advantages in terms of vision-text compatibility, aesthetic appeal, and rationality. Results of the user study are reported in the supplementary materials.

# 4.3 Case Study

Position-Guided Light Adapter: In Figure 7, we present a visualization comparison of the ablations on the PGLA. As depicted in Figure 7, the model struggles to learn the light interaction between the foreground and background without any prior imposition (w/o adapter). When applying the vanilla IP-Adapter [41], which performs arbitrary interaction between subject and background, information from diverse directions in the background interferes with each other, thereby affecting the final relighting performance. Through the utilization of direction-biased masked attention, the model selectively transmits background lighting information, enabling the foreground to acquire lighting that is harmonious with the background (the last two columns). The design of low-frequency enhancement further discards irrelevant information existing in the background textures, thus contributing to robust training of the model and promoting the generation of more natural and consistent results. Quantitative results in Table 3 also demonstrate the rationality of our designs. Table 3: Results of different light adapter designs. "IP.A." means IP-Adapter [41]. Filter is the spectral filter used to enhance the low-frequency component of the background. Figure 7: Visualization comparisons with different light adapter design strategies. Figure 8: Visualization comparisons with different foreground fixer strategies.
For ease of observation, we accentuate the changes in the facial region on the right side of each relighting result. The green box above showcases the magnified facial region of the relighted results. The red box below presents the magnified facial region of the input foreground image. Best viewed zoomed-in. Spectral Foreground Fixer: Figure 8 shows the qualitative analysis of the proposed SFF. Introducing ControlNet [53] to provide subject information fails to alleviate the issue of foreground distortion, as distortions often occur in small areas and such a design struggles to avoid the problem of encoding information loss. Our SFF achieves robust foreground preservation, as also substantiated by the quantitative results in Table 4. As mentioned above, the SFF primarily works on critical small regions. While the refinement of these regions is important for visual quality, it has limited impact on global metrics. Thus, to better test the refinement effect, in Table 4 we crop small face regions for metric calculation. Table 4: Results of different foreground fixing strategies. Handle Both Conditions: We observe that, thanks to the unified learning of text and image conditions in a single model, our DreamLight can generate results under the guidance of both conditions. As shown in Figure 9, our method can maintain the structure and elements of the given background while making additional adjustments based on text prompts. Note that this ability is emergent and our model does not undergo training for such situations. Figure 9: Visualization results of our DreamLight conditioned on both text prompt and background image.
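The frequency decomposition underlying the SFF can be illustrated with a simple FFT band split. The circular mask and grayscale input are assumptions made for illustration; the paper only states that the spectral filter's cutoff frequency is 5:

```python
import numpy as np

def spectral_split(img, cutoff=5):
    """Split a grayscale image into low- and high-frequency parts with an
    FFT mask. Frequencies within `cutoff` of the DC component form the
    low-frequency band; the circular mask shape is an assumption."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h // 2) ** 2 + (xx - w // 2) ** 2)
    mask = dist <= cutoff                      # keep only low frequencies
    low = np.fft.ifft2(np.fft.ifftshift(f * mask)).real
    high = img - low                           # residual = high-frequency band
    return low, high
```

By construction the two bands sum back to the input, and a constant image is reproduced entirely by the low-frequency band.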
# 5 Conclusion

In this work, we introduce a model named DreamLight for universal image relighting, which can seamlessly composite subjects into a new background while maintaining aesthetic uniformity in terms of lighting and color tone. The background can be specified by natural images (image-based relighting) or generated from unlimited text prompts (text-based relighting). Existing studies primarily focus on image-based relighting, with scant exploration of text-based scenarios. Some works employ intricate disentanglement pipelines relying on environment maps to provide relevant information, which grapple with the expensive data cost required for intrinsic decomposition and light source estimation. Other methods cast this task as an image translation problem and perform pixel-level transformation with an autoencoder architecture. While these methods achieve decent harmonization effects, they struggle to generate realistic and natural light interactions between the foreground and background. To alleviate these challenges, we reorganize the input data into a unified format and leverage the semantic prior provided by the pretrained diffusion model to facilitate the generation of natural results. Moreover, we propose a Position-Guided Light Adapter (PGLA) that condenses light information from different directions in the background into designed light query embeddings and modulates the foreground with direction-biased masked attention. In addition, we present a post-processing module named Spectral Foreground Fixer (SFF) to adaptively recombine the frequency components of the subject and the relighted background, which helps enhance the consistency of the foreground appearance. Extensive comparisons and a user study demonstrate that our DreamLight achieves remarkable relighting performance.
# I. INTRODUCTION

Robust 3D perception is a cornerstone of intelligent systems, enabling a wide range of capabilities from autonomous navigation [1], [2] and manipulation in robotics to augmented reality and scene understanding in consumer devices [3]. Specifically in robotics, accurate and efficient depth estimation is crucial for tasks such as obstacle avoidance, path planning, object recognition, and grasping. Depth estimation, the process of inferring the distance to objects and surfaces in a scene, remains a challenging problem, particularly in dynamic and unstructured environments. Traditionally, depth is acquired through stereo cameras [4], [5], multiple cameras [6], [7], or active depth sensors (e.g., time-of-flight, structured light). However, these methods often face limitations in practical applications, including calibration complexity [8], power consumption, cost, and environmental constraints [9], [1]. Consequently, there is growing interest in monocular depth estimation, a cost-effective and power-efficient alternative. However, monocular depth estimation from a single RGB image is a fundamentally ill-posed problem; a given 2D image can correspond to infinitely many 3D scenes. Even relative depth estimation methods [10], [11], [12], [13] struggle to determine accurate ordinal relationships. To address the first issue, we propose a new architecture named DiFuse-Net (disentangle then fuse) for RGB-DP based depth estimation. To perform disentangled and specialized processing, DiFuse-Net employs a two-branch encoder: the RGB encoder extracts global scene context, while the DP encoder leverages a novel Window Bi-directional Parallax Attention Module (WBiPAM) for effective disparity matching. A dynamic fusion module then combines information from corresponding RGB and DP encoder layers for accurate depth estimation. This design also makes DiFuse-Net robust to changes in DP disparity range across different camera apertures.
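As a purely structural sketch of this disentangle-then-fuse dataflow, the following NumPy snippet uses average pooling and addition as stand-ins for the real encoder blocks and fusion module (both stand-ins are assumptions, not the actual network):

```python
import numpy as np

def avg_pool2(x):
    """Stride-2 average pooling as a stand-in for one encoder block."""
    h, w, c = x.shape
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def difuse_forward(rgb, dp_left, dp_right, n_blocks=2):
    """Skeletal disentangle-then-fuse dataflow: separate RGB and DP branches
    whose features are combined at matching resolutions. The additive fusion
    here is a naive placeholder for the paper's dynamic fusion module."""
    f_rgb = rgb
    f_l, f_r = dp_left[..., None], dp_right[..., None]
    for _ in range(n_blocks):
        f_rgb, f_l, f_r = avg_pool2(f_rgb), avg_pool2(f_l), avg_pool2(f_r)
        # fuse DP cues into the RGB feature map at the same resolution
        f_rgb = f_rgb + 0.5 * (f_l + f_r) / 2.0
    return f_rgb
```

The point of the sketch is the shape contract: both branches downsample in lockstep so that fusion always happens between feature maps of identical spatial resolution.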
To address the second issue, we propose a Cross-modal Transfer Learning (CmTL) strategy. The modality-decoupled design of DiFuse-Net presents a unique opportunity for leveraging existing large-scale RGB-D datasets to enhance depth estimation generalization and performance. The proposed CmTL approach comprises three training stages to exploit this advantage. Additionally, since there is no RGB-DP-D dataset with high-quality ground-truth depth, we present a new method to overcome this challenge using a symmetric stereo setup with two smartphones. Acknowledging the non-rigid nature of smartphone lens systems, we employ a regular calibration and rectification protocol for each capture session. We then train an AI stereo disparity estimation method on synthetic data to generate high-quality ground-truth depth maps, ensuring the method is robust to the minor rectification errors ($<3$ pixels) inevitable in our setup. Our scalable dataset capture setup and method yield 5000 novel training samples and 700 novel test samples, which is one of a kind in the literature. Fig. 2 shows samples from our new Dual-Camera Dual-Pixel (DCDP) dataset. Fig. 1. DP disparities from a sample captured with a Google Pixel 3 smartphone. Note: (1) We refer to the DP images as 'left' and 'right' for consistency with prior DP literature, which often utilizes DSLR cameras, although the physical disparity in most smartphones is vertical due to the sensor design and arrangement. (2) The grayscale representation of DP images reflects the limitation of DP sensing to the green channel of the Bayer sensor in smartphones for cost savings.

# Following are the major contributions of this work:

1) A new architecture for RGB-DP based depth estimation that decouples the processing of RGB and DP images, enabling adequate global scene understanding from the RGB image and specialized processing of the DP images to discern the defocus disparity of each pixel.
2) A new WBiPAM-based siamese encoder for processing the DP images to extract the defocus disparity information effectively at multiple scales. 3) A dynamic fusion module to fuse the information from the RGB and DP modality feature maps effectively. 4) A CmTL mechanism to exploit the depth estimation prior from large-scale RGB-D datasets, addressing the problem of obtaining large-scale RGB-DP-D datasets. 5) A new and scalable method to obtain an RGB-DP-D dataset with high-quality depth ground truth using a custom symmetric stereo camera setup made with two smartphone devices. The ground-truth depth maps in our dataset are significantly denser and of better quality than the existing dataset in the literature [11]. Our new dataset, named DCDP, containing 5000 novel training samples and 700 novel test samples, will support future research on RGB-DP based depth estimation.

# II. RELATED WORK

Garg et al. [11] were the first to propose an RGB-DP based depth estimation method. The authors designed a lightweight network architecture named DPNet, which takes the channel-wise concatenated RGB and DP images as input and is trained end-to-end using an affine-invariant loss function. To acquire the training dataset, they created a custom camera rig using five Google Pixel 3 smartphones, and the ground-truth depth map was computed using multi-view stereo techniques. The ground-truth depth maps in their dataset are sparse and often erroneous, as shown in Fig. 2, which limits the accuracy and quality of the depth estimation. Moreover, the authors do not focus on specialized processing of the DP images to compute the defocus disparity information, which makes their approach less effective. Fig. 2. Qualitative difference between random RGB-D samples from our dataset vs. the dataset in [11]. As visually evident, our new DCDP dataset exhibits superior ground-truth quality, particularly in terms of density, sharpness, and boundary delineation. Zhang et al.
[17] extended the method in [11] by additionally using dual cameras. They kept the baseline of the dual cameras orthogonal to the DP baseline to obtain complementary disparity information from the two sources for more accurate depth estimation. Pan et al. [18] proposed a method for DP image simulation for DSLR images and trained a multi-task model to estimate depth and perform image deblurring simultaneously. In [19], the authors proposed a method for modeling defocus disparity in DSLR DP images, operating on patches and stitching the outputs to recover the disparity map. The authors also provide 100 RGB-DP-Disparity triplets of simple lab scenes captured with a DSLR, whereas a handful of RGB-DP images from a Pixel smartphone were used only for qualitative evaluation. As pointed out in [19], smartphone and DSLR DP images are significantly different; therefore, a model trained on a DSLR dataset cannot be evaluated on a smartphone dataset, and vice versa. In this work, we focus on explicitly and effectively utilizing the DP defocus disparity cues. More specifically, we decouple the processing of the RGB and DP images to exploit the information from each modality effectively.

# III. PROPOSED METHOD

Fig. 3 shows the detailed architecture of the proposed method, named DiFuse-Net. Given an RGB image $I \in \mathcal{R}^{H \times W \times 3}$ and a DP image pair $I_l, I_r \in \mathcal{R}^{H \times W \times 1}$, the objective is to predict a relative depth map $\hat{d} \in \mathcal{R}^{H \times W \times 1}$ of the scene represented by $I$. Here, $H$ and $W$ represent the spatial height and width of the image, respectively. Our investigations and ablation study reveal that a naïve concatenation of the RGB and DP images as input to the network results in sub-optimal performance because the RGB and DP modalities contain distinct information.
While RGB images provide the global context for holistic scene understanding, DP images provide localized disparity cues that assist in resolving the ordinal relationships of the pixels, leading to accurate depth prediction. This understanding motivates us to adopt independent processing of these two modalities. However, stereo matching is known to be less reliable in textureless regions [5], [8], implying that it is better to rely on global contextual information from the RGB image in these regions. We intend to adaptively combine the RGB and DP modality information in the feature space, enabling the model to leverage the strengths of each modality while mitigating their respective limitations. Fig. 3. Architecture of the proposed DiFuse-Net model. Zoom in using Adobe Reader. As shown in Fig. 3, DiFuse-Net contains two distinct encoding modules: one encoder for encoding the RGB image and a siamese encoder for encoding the DP image pair. The siamese encoder contains the WBiPAM module for effective correspondence matching between the DP images. The fusion module adaptively fuses the RGB and DP modality information, which is passed to the successive blocks in the encoder. A UNet-style decoder [20] finally predicts the depth map. In the following subsections, we explain each of these modules of DiFuse-Net in detail.

# A. RGB Encoder

The RGB encoder processes the input image $I \in \mathcal{R}^{H \times W \times 3}$. It employs an EfficientNet-Lite3 backbone [21], initialized with pre-trained ImageNet [22] weights, for feature extraction. The network progressively downsamples the image by a factor of 32 using convolutional blocks. Each block halves the resolution and increases the number of channels while utilizing inverted residual blocks [23] for efficient computation.
We incorporate an additional inverted residual block for further downsampling to 1/64 of the original size, enlarging the receptive field and enhancing the global contextual information.

# B. DP Encoder

The DP encoder is designed to extract defocus disparity cues from DP image pairs. Exploiting the fact that DP images exhibit disparities along the epipolar line, we employ modified parallax attention [24] oriented along this axis to capture these cues effectively. Due to the limited disparity range (typically $-8$ to $+8$ pixels) inherent to smartphone cameras with small apertures, correspondence matching is constrained to a local neighborhood. Formally, the left and right DP images, $I_l$ and $I_r \in \mathcal{R}^{H \times W}$, are processed by a siamese encoder comprising two shallow feature extraction modules (see Fig. 3). Each module incorporates an inverted residual block [23] to mitigate spatial information loss during feature extraction, which is crucial given the limited spatial extent of disparity cues in DP images [11]. Each block downsamples the input by a factor of 2. The resulting DP left and right feature maps from a block are denoted as $F_l$ and $F_r \in \mathcal{R}^{H_f \times W_f \times C}$, respectively, where $C$ represents the number of channels, with $H_f = \frac{H}{2}$, $W_f = \frac{W}{2}$ for the first block and $H_f = \frac{H}{4}$, $W_f = \frac{W}{4}$ for the second. To preserve disparity information, we limit the DP encoder to two blocks, as excessive downsampling can lead to information loss and impact performance (see Section VI-B). Subsequently, the WBiPAM module processes $F_l$ and $F_r$ from each block independently to extract local defocus disparity cues.
This approach effectively leverages the DP encoder to capture the rich disparity information present in DP image pairs.

# C. WBiPAM Module

Fig. 4 shows the detailed design of the WBiPAM module. The proposed WBiPAM module offers a novel mechanism for effective disparity cue extraction from DP image pairs. It operates on DP image features, represented as $F_l$ and $F_r \in \mathcal{R}^{H_f \times W_f \times C}$. Since DP disparity is confined to a fixed range along the epipolar line, we consider a rectangular window of size $k \times 1$ within this line instead of considering the entire line. To implement this, a reshape operation is first performed to divide the feature maps into $k \times k$ non-overlapping windows, restructuring $F_l$ and $F_r$ into feature maps of shape $\mathcal{R}^{(\frac{H_f}{k} \times \frac{W_f}{k}) \times k \times k \times C}$. $F_l$ and $F_r$ are then reformulated as feature maps of dimension $\mathcal{R}^{P \times k \times C}$ (see the Window Partition blocks in Fig. 4), where $P = \frac{H_f}{k} \times \frac{W_f}{k} \times k$ is merged with the batch dimension. The transformed feature map, of shape $\mathcal{R}^{P \times k \times C}$, is essentially a stack of $k \times 1$ sized windows with $C$ channels. Subsequently, a residual convolution module processes $F_l$ and $F_r$, preserving spatial and channel dimensions to retain crucial feature information. The core of the WBiPAM module is its window attention mechanism. With $F_l$ as the reference, learnable projections $W_q$ and $W_k$ generate query and key matrices from $F_l$ and $F_r$, respectively. This facilitates the computation of cross-attention scores, encapsulated in the attention score $\mathcal{A}_{lr}$ through the operation $\mathrm{softmax}(QK^T)$.
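Collapsing the $k \times k$ partition into its equivalent stack of $k \times 1$ windows ($P = H_f W_f / k$ in total), the windowed cross-attention can be sketched in NumPy. Using plain matrices for the projections and omitting the residual convolution and concatenation blocks are simplifications of this sketch, not the paper's design:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def wbipam(f_l, f_r, w_q, w_k, k=4):
    """Windowed bi-directional parallax attention (simplified sketch).
    Length-k windows are taken along the epipolar (width) axis; this yields
    the same P = H_f * W_f / k windows as the paper's k x k partition."""
    hf, wf, c = f_l.shape
    part = lambda f: f.reshape(hf * wf // k, k, c)   # (P, k, C) window stack
    fl, fr = part(f_l), part(f_r)
    q, key = fl @ w_q, fr @ w_k                      # learnable projections
    a_lr = softmax(q @ key.transpose(0, 2, 1))       # (P, k, k) attention scores
    a_rl = a_lr.transpose(0, 2, 1)                   # A_rl = A_lr^T
    fa_l, fa_r = a_lr @ fl, a_rl @ fr                # attention-modulated features
    merge = lambda f: f.reshape(hf, wf, c)           # window merging
    return merge(fa_l), merge(fa_r)
```

Each window attends only within its own $k$ positions along the epipolar axis, which matches the constrained $\pm8$-pixel disparity range of smartphone DP sensors.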
This attention score modulates the feature map $F_l$, ultimately producing $F_l^{\prime}$. $$ Q = W_q \cdot F_l $$ $$ K = W_k \cdot F_r $$ $$ \mathcal{A}_{lr} = \mathrm{softmax}(QK^T) $$ $$ F_l^a = \mathcal{A}_{lr} \cdot F_l $$ $F_l^a$ is concatenated with the original feature map $F_l$ and subsequently processed by a convolution block, producing the final output of the WBiPAM module for the left feature map, denoted as $F_l^{\prime} \in \mathcal{R}^{P \times k \times C}$. Fig. 4. Design of the proposed WBiPAM module. Zoom in using Adobe Reader. Analogously, employing the right feature map $F_r$ as the reference, an identical procedure yields the final output for the right feature map, denoted as $F_r^{\prime} \in \mathcal{R}^{P \times k \times C}$. $$ \mathcal{A}_{rl} = \mathcal{A}_{lr}^T $$ $$ F_r^a = \mathcal{A}_{rl} \cdot F_r $$ Subsequently, $F_l^{\prime}$ and $F_r^{\prime}$ undergo window merging, restructuring them into the original spatial dimensions $\mathcal{R}^{H_f \times W_f \times C}$ (see the Window Merging blocks in Fig. 4). These refined feature maps are then fused with the encoding produced by the RGB encoder. As stated earlier, we limit the DP encoder to two blocks, as excessive downsampling can lead to information loss. Our empirical analysis demonstrates that the WBiPAM module significantly enhances depth estimation performance by computing attention scores based on both left and right images (see Section VI-B).

# D. Fusion Module

To enrich the RGB encoder's representation with local disparity cues extracted by the DP encoder, we propose a fusion mechanism that integrates $F_l^{\prime}$ and $F_r^{\prime}$ with the corresponding RGB encoding, denoted as $F_i$. Part (b) in Fig. 3 demonstrates the fusion mechanism.
Importantly, $F_l^{\prime}$, $F_r^{\prime}$, and $F_i$ share identical dimensions $\mathcal{R}^{H_f \times W_f \times C}$, where $H_f$, $W_f$, and $C$ represent the height, width, and channel count, respectively. The fusion process begins with channel-wise concatenation of $F_l^{\prime}$, $F_r^{\prime}$, and $F_i$, yielding a feature map of dimensions $\mathcal{R}^{H_f \times W_f \times 3C}$. Subsequently, this concatenated representation is modulated by two convolutional layers to predict a feature-wise score $A_f \in \mathcal{R}^{H_f \times W_f \times 3}$. In $A_f$, the first, second, and third channels represent the scaling weights for the $F_l^{\prime}$, $F_r^{\prime}$, and $F_i$ feature maps, respectively. This feature-wise computation allows for adaptive recalibration of the feature maps based on their importance for depth estimation. The computed $A_f$ is then employed to generate weighted feature maps, $F_{ilr}$, which are further processed by a convolutional module to produce the final fused feature map $F_{ilr}^{\prime} \in \mathcal{R}^{H_f \times W_f \times C}$. Due to the shallow nature of the DP encoder (two blocks), the fusion of DP encodings is limited to the first two RGB encoding blocks.

# E. Decoder

Complementary to the encoder, the decoder progressively upsamples the feature resolution while reducing the channel count in each subsequent decoding block. Each block employs upsampling layers followed by 2D convolutions with PReLU activation. The final output layer utilizes a sigmoid activation to produce the depth map.
To facilitate gradient flow and leverage crucial low-level information, skip connections are established between corresponding RGB encoding and decoding blocks, following the UNet [20] design. The feature map resolution is doubled within each decoder block, and the number of channels is halved. The decoder generates a depth map $\hat{d} \in \mathcal{R}^{H \times W \times 1}$, along with intermediate predictions from each decoding block, thus enabling deep supervision during training.

# F. CmTL Mechanism

While RGB-D datasets are abundant [9], [1], [10], [25], [12], [13], RGB-DP-D datasets are scarce. However, DiFuse-Net's modality-decoupled design presents a unique opportunity: leveraging existing large-scale RGB-D datasets to enhance generalization and performance. We propose a novel cross-modal transfer learning (CmTL) approach comprising three stages to exploit this advantage. Stage 1: The DP Encoder and Decoder subnetworks are trained end-to-end using DP-D pairs from the RGB-DP-D dataset. This stage focuses on optimizing the DP Encoder to capture the unique characteristics and challenges of depth estimation from DP images. Stage 2: The RGB Encoder and Decoder subnetworks are trained end-to-end on large-scale RGB-D datasets [9], [1], [10], [25], [12], [13]. Critically, training on these extensive datasets (as opposed to the smaller RGB-DP-D dataset) yields significantly improved accuracy and robustness in the RGB Encoder, capitalizing on the wealth of diverse scenes and depth variations present in these resources. Stage 3: The complete DiFuse-Net is trained end-to-end, initializing the DP and RGB Encoders with weights from stages 1 and 2, respectively. The Fusion module and the Decoder weights are initialized randomly [26]. This knowledge transfer facilitates effective joint learning from both modalities. Empirical results (see Tab. I) demonstrate that CmTL significantly boosts RGB-DP based depth estimation.

# IV. DUAL-CAMERA DUAL-PIXEL DATASET GENERATION

The Google Pixel 2/3 dataset [11], while valuable, suffers from sparse and often erroneous ground-truth depth (Fig. 2). To address this, we propose a methodology for constructing a high-quality RGB-DP-D dataset. Existing depth sensors [9], [1] lack DP capture and pose alignment challenges. Unlike the complex multi-camera setup in [11], we employ a simplified symmetric stereo configuration using two DP-sensor-enabled smartphones, leveraging recent advancements in AI stereo disparity estimation [4], [5]. Our setup prioritizes two key criteria for high-quality data: 1) minimal DP data processing to preserve subtle defocus disparity cues, and 2) dense, accurate ground-truth depth maps aligned with the RGB-DP images.

# A. Symmetric Stereo Camera Setup

Symmetric stereo cameras simplify calibration and rectification compared to wide/telephoto combinations [8]. We employed two Samsung Galaxy S23 Ultra smartphones in a symmetric stereo configuration (Fig. 5), using the front cameras to minimize the baseline (2.5 cm) and occlusions. Rigid camera holders and a metal support ensured stability. Simultaneous capture was achieved using Galaxy S-Pens and a USB-C camera switch. Fig. 5. Our symmetric stereo camera setup for data acquisition. Fig. 6. Top row: checkerboard images captured at the beginning of a capture session. Bottom row: sample stereo captures.

# B. Stereo Rectification

Our camera setup primarily exhibits horizontal disparity. To ensure precise alignment, we perform stereo calibration and rectification before each capture session, utilizing 30-40 checkerboard samples. Each session yields 120-150 images (Fig. 6). Such a routine protocol mitigates potential positional shifts during long capture sessions. Calibration and rectification are performed at half resolution, with the resulting coordinate mapping applied to full-resolution images. Rectified stereo pairs are processed through an AI stereo disparity estimation pipeline.
To address the misalignment between the estimated disparity map (in the rectified plane) and the captured DP images, we project the disparity map back to the original plane, thus preserving the DP cues. This is preferable to rectifying the DP images, which would distort the DP cues. Although reversing the distortion is less precise than inverting the pose parameters, our minimal distortion coefficients allow us to approximate the reverse distortion by inverting their signs. A 40-pixel border crop mitigates any residual inaccuracies (Fig. 7).

# C. Ground-truth Depth Generation

We leverage advances in AI stereo disparity estimation [4], [5] to generate accurate disparity maps from rectified stereo pairs. Our model, trained on a large synthetic dataset [27] augmented with slight vertical distortions, ensures robustness to minor rectification errors. Back-projecting the estimated disparity map and applying a 40-pixel border crop yields aligned RGB-DP-D samples with preserved DP cues. To ensure dataset quality, we employ manual annotation (Fig. 8) to mask visually incorrect disparity estimations. This meticulous process guarantees high-quality ground truth, with masked regions excluded from loss computation during training. Fig. 7. (a) Stereo pairs. (b) Rectified stereo pairs. (c) Estimated disparity map in the rectified plane. (d) Disparity map projected back to the original plane along with a 40-pixel border crop (the same crop is later applied to the original RGB-DP images for alignment). Fig. 8. Example annotations from our manual quality control process. The binary masks indicate regions of significant error in the ground-truth disparity map, identified through visual inspection. The masked regions are excluded from loss computation during training. Fig. 9. Qualitative assessment of the ground-truth depth accuracy in the DCDP dataset. Point clouds are rendered from a novel viewpoint to highlight the geometric consistency in the reconstructed 3D structure. Finally, Fig.
9 provides a qualitative assessment of the ground-truth depth accuracy in the DCDP dataset. By rendering the point clouds from a novel viewpoint, we highlight the geometric consistency and absence of artifacts in the reconstructed 3D structure. # V. EXPERIMENTS Our experimental evaluation comprises two primary components: (i) assessment and benchmarking of the proposed DiFuse-Net method, and (ii) evaluation on our new high-quality DCDP dataset. For a fair comparison with prior work [11], we initially train and evaluate DiFuse-Net on the publicly available Google DP dataset [11]. Additionally, we conduct an ablation study on the same dataset to analyze the contribution of individual components within DiFuse-Net. Finally, we present results showcasing the advantages of utilizing our DCDP dataset for training, along with benchmarking results on the DCDP test set. # A. Datasets Google DP Dataset: The Google DP dataset [11] consists of 12,530 training and 684 test RGB-DP-D samples. Since each scene is captured by 5 cameras in the multi-camera rig, there are effectively 2,506 novel images in this dataset. DCDP: Our DCDP dataset consists of 5,000 novel training and 700 novel test RGB-DP-D samples. # B. Implementation We used the PyTorch [28] framework for implementation. For training, we used the Adam optimizer [29] with momentum of 0.9. We employed a polynomial learning-rate decay scheduler with the power term set to 0.9 and an initial learning rate of $1 \times 10^{-4}$. Similar to the loss function in [13], we combine the scale-invariant mean absolute error with a scale-invariant gradient matching term. The gradient matching term helps to preserve sharp edges and boundaries [13]. Also, similar to [11], the model is trained to predict the inverse depth. $$ \mathcal{L} = \mathrm{MAE}(d, \hat{d}) + \lambda \cdot \mathrm{Grad}(d, \hat{d}) $$ In Eq. 3, $d$ signifies the ground-truth inverse depth, whereas $\hat{d}$ denotes the predicted affine-invariant inverse depth.
$\lambda$ is a hyper-parameter, set to 30 across all experiments. MAE( ) and Grad( ) represent the mean absolute error and the gradient loss functions, respectively. # C. Evaluation Metrics Similar to [11], we use Spearman's Rank Correlation Coefficient (SRCC) to evaluate the ordinal correctness of the estimated depth map against the ground-truth depth, and affine-invariant versions of the mean absolute error (MAE) and root mean squared error (RMSE), denoted AIWE 1 and AIWE 2, respectively. # VI. RESULTS AND DISCUSSIONS # A. Evaluation and Comparison Tab. I shows a detailed quantitative comparison of DiFuse-Net with other baselines, viz., DPNet [11], Baseline, and Stereo Baseline [30]. We report the latest DPNet metrics provided on their official GitHub page, owing to their release of modified train and test datasets; this ensures a fair evaluation of our method against DPNet. Baseline is an enhanced DPNet model with additional convolutional layers, aligning its total number of parameters with that of DiFuse-Net for a fairer comparison. Stereo Baseline replaces WBiPAM with a traditional stereo-matching cost-volume approach [30], [5], [8], providing a comparison with established stereo methods; its total number of parameters is likewise aligned with that of DiFuse-Net. TABLE I: QUANTITATIVE COMPARISON OF DIFUSE-NET ON GOOGLE DP DATASET. As can be seen in Tab. I, the DiFuse-Net model outperforms the other models on all metrics, demonstrating the efficacy of our approach in disentangling RGB and DP images, the WBiPAM module, and the CmTL mechanism. Fig. 10 presents the qualitative comparison between the DPNet, Baseline, Stereo Baseline, DiFuse-Net w/o CmTL, and DiFuse-Net methods. As previously discussed, in images containing texture-less regions, DP images often lack discernible disparity, resulting in poor depth output within these regions.
However, our approach of decoupling RGB and DP images enables the model to extract global information more effectively from the RGB images, thereby substantially mitigating instances of poor depth output. # B. Ablation Study In our ablation study, we systematically investigate the design choices in the WBiPAM module, the Fusion module, and the DP Encoder network depth. Tab. II presents the quantitative results. To ensure a fair comparison, the number of parameters in the remaining network components is adjusted for each ablative variant. First, we ablate the WBiPAM module as follows. No WBiPAM: This variant removes the WBiPAM module entirely, highlighting its contribution to performance. No Window WBiPAM: Here, we remove the window partitioning within WBiPAM, allowing cross-attention to operate across the entire feature map, thus evaluating the importance of local context. Unidirectional WBiPAM: This variant uses only the left DP features $F_l$ as input to WBiPAM, assessing the benefits of bidirectional attention. The results in Tab. II clearly demonstrate that WBiPAM, with its windowed bidirectional attention mechanism, outperforms all other variants. Next, our exploration of fusion strategies reveals that feature-wise fusion of modalities (as explained in Sec. III-D) is superior to more granular pixel-wise (recalibration of each pixel) or channel-wise (recalibration of each channel) fusion. Regarding the DP Encoder, a two-layer architecture proves optimal. We hypothesize that additional downsampling layers lead to the loss of subtle defocus disparity cues, hindering performance. Finally, the ablation study on the CmTL mechanism shows that the DiFuse-Net model additionally pre-trained on large-scale RGB-D datasets using our CmTL mechanism achieves superior performance. Fig. 10. Qualitative evaluation of DiFuse-Net against the baseline methods, viz., DPNet [11], Baseline, Stereo Baseline [30], and DiFuse-Net w/o CmTL, on the Google DP dataset.
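The training objective of Eq. 3 is straightforward to sketch numerically. The following is a minimal NumPy illustration under our own function names; it uses a single-scale gradient term and a plain least-squares scale/shift alignment, whereas the scale-invariant formulation of [13] is more elaborate (multi-scale, trimmed):

```python
import numpy as np

def align(pred, gt):
    """Least-squares scale/shift alignment of the prediction to the
    ground truth (the scale-invariant part of the loss; cf. [13])."""
    A = np.stack([pred.ravel(), np.ones(pred.size)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, gt.ravel(), rcond=None)
    return s * pred + t

def grad_loss(pred, gt):
    """Gradient matching term: L1 difference of horizontal and vertical
    image gradients, encouraging sharp depth edges."""
    diff = pred - gt
    gx = np.abs(np.diff(diff, axis=1))
    gy = np.abs(np.diff(diff, axis=0))
    return gx.mean() + gy.mean()

def loss(pred, gt, lam=30.0):
    """Eq. 3: scale-invariant MAE plus lambda times gradient matching."""
    p = align(pred, gt)
    return np.abs(p - gt).mean() + lam * grad_loss(p, gt)
```

By construction, any affine transformation of the ground truth incurs (near-)zero loss, which is exactly the invariance the inverse-depth prediction relies on.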
TABLE II: QUANTITATIVE RESULTS OF OUR ABLATION STUDY ON GOOGLE DP DATASET. TABLE III: ADDITIONAL COMPARISON OF DIFUSE-NET WITH MONOCULAR DEPTH ESTIMATION METHODS TRAINED ON LARGE AND DIVERSE RGB-D DATASETS; GOOGLE DP TEST DATASET IS USED FOR THIS EVALUATION. # C. Additional Evaluation We conducted a comparative analysis between DiFuse-Net and monocular depth estimation models trained on large-scale RGB-D datasets, viz., MiDaS v3.1 $\mathrm{BEiT_{L-512}}$ [13], which comprises 345M parameters, and ZoeDepth [31]. ZoeDepth uses MiDaS as its backbone with a metric depth prediction head; its best numbers were obtained by the ZoeD-M12-N variant (trained on NYU V2). Our analysis revealed that despite DiFuse-Net having substantially fewer parameters (9.9M), its depth map quality was better than that of MiDaS, owing to the effective use of DP images in an improved model architecture. Table III shows the quantitative score comparison of DiFuse-Net with MiDaS and ZoeDepth, indicating better scores for our model. The qualitative comparisons are presented in Fig. 11, where our model exhibits enhanced details. Fig. 11. Qualitative comparison of DiFuse-Net with MiDaS. # D. Evaluation on the DCDP Dataset We report the benefit of using our DCDP dataset for training an RGB-DP-based depth estimation model. Fig. 12 shows that a DCDP-trained model produces sharper and smoother depth maps due to our dense, high-quality ground truth. Tab. IV additionally provides quantitative benchmarking on our new DCDP dataset. Fig. 12. DCDP-trained (first) vs. DPNet-trained (second) output. Qualitative comparison of DiFuse-Net trained on the DCDP vs. the Google DP dataset, showcasing the advantages of our DCDP dataset. It can be observed that the results in (b) exhibit depth leakage at object boundaries, with gaps erroneously filled.
In contrast, the DCDP-trained model consistently demonstrates sharper boundaries, preserves fine-grained gaps, and exhibits superior detail in thin structures, underscoring the benefits of our high-quality dataset. TABLE IV: BENCHMARKING ON OUR DCDP DATASET.
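The SRCC and affine-invariant error metrics reported throughout Tabs. I–IV can be reproduced with a few lines of NumPy. This is an illustrative sketch under our own naming; it ignores ties in the rank correlation and any per-image weighting that the published AIWE metrics may apply:

```python
import numpy as np

def srcc(a, b):
    """Spearman's rank correlation coefficient (assumes no ties)."""
    ra = np.argsort(np.argsort(a.ravel())).astype(float)
    rb = np.argsort(np.argsort(b.ravel())).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra * rb).sum() / np.sqrt((ra**2).sum() * (rb**2).sum()))

def aiwe(pred, gt, p):
    """Affine-invariant error: fit a scale/shift mapping the prediction
    onto the ground truth, then take the Lp error.
    p=1 corresponds to AIWE 1 (MAE), p=2 to AIWE 2 (RMSE)."""
    A = np.stack([pred.ravel(), np.ones(pred.size)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, gt.ravel(), rcond=None)
    err = np.abs(s * pred + t - gt) ** p
    return float(err.mean() ** (1.0 / p))
```

SRCC scores ordinal agreement only (any monotone remapping of the depth scores 1.0), while AIWE penalizes residuals that survive the best affine fit.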
# Abstract Depth estimation is crucial for intelligent systems, enabling applications from autonomous navigation to augmented reality. While traditional stereo and active depth sensors have limitations in cost, power, and robustness, dual-pixel (DP) technology, ubiquitous in modern cameras, offers a compelling alternative. This paper introduces DiFuse-Net, a novel modality-decoupled network design for disentangled RGB and DP based depth estimation. DiFuse-Net features a window bi-directional parallax attention mechanism (WBiPAM) specifically designed to capture the subtle DP disparity cues unique to smartphone cameras with small apertures. A separate encoder extracts contextual information from the RGB image, and these features are fused to enhance depth prediction. We also propose a Cross-modal Transfer Learning (CmTL) mechanism to leverage large-scale RGB-D datasets from the literature, coping with the difficulty of obtaining a large-scale RGB-DP-D dataset. Our evaluation and comparison of the proposed method demonstrate its superiority over the DP- and stereo-based baseline methods. Additionally, we contribute a new, high-quality, real-world RGB-DP-D training dataset, named the Dual-Camera Dual-Pixel (DCDP) dataset, created using our novel symmetric stereo camera hardware setup, stereo calibration and rectification protocol, and AI stereo disparity estimation method.
# 1 Introduction Local consistency is an important concept in various areas such as Bayesian statistics, relational databases, and quantum foundations. In broad terms, local consistency refers to a family of partial structures (such as marginal distributions or projections) that agree on their overlapping parts. Local consistency is usually a desirable property to enforce, or at least a minimal requirement or observed phenomenon. It is often contrasted with global consistency, which demands the existence of a single underlying structure whose projections yield the given family. In practice, local consistency sometimes serves as a proxy for global consistency, especially when the latter is infeasible to check or maintain explicitly. In Bayesian statistics, local consistency is central to the representation of global probability distributions via factorisation. For example, Bayesian networks encode a global distribution as a family of marginal and conditional distributions that must be consistent with each other. Such factorisations are very useful, as they facilitate computationally feasible probabilistic and approximate reasoning without the need to directly access or store the global distribution. In relational databases, local consistency arises when data is spread across multiple tables with overlapping attribute sets. The motivation here is not computational speed-up—storing everything in a single table would eliminate costly joins in query evaluation—but data integrity, as decomposing data helps to prevent anomalies, such as update anomalies. Notably, consistency in relational databases is typically enforced via integrity constraints that are local in nature. E.g., data about a person may be distributed across multiple tables, which we expect to be globally consistent; yet this consistency is usually maintained via primary keys and foreign keys that act on individual tables or pairs of tables only.
In quantum theory, local consistency is a principle that places an upper bound on the outcomes appearing in physical experiments: any experimental setup must produce a family of probability distributions that satisfies local consistency. However, not all locally consistent families are compatible with quantum predictions. This intricate relationship between local and global consistency naturally gives rise to many interesting questions. In domains where local consistency is used as a proxy for global consistency, it is natural to ask: under what conditions does local consistency entail global consistency? If local consistency is a necessary but not sufficient condition for observable phenomena, one may ask what accounts for this gap. For instance, quantum theory offers remarkably accurate predictions of experimental outcomes, yet it may not provide a satisfactory explanation—in terms of fundamental principles or laws of nature—as to why certain locally consistent distributions are not physically realisable. Several efforts have been made to explain this discrepancy in terms of simple fundamental principles. A notable example is the principle of information causality (Pawlowski et al. 2009), which uses the language of information theory to constrain admissible distributions. In the context of databases and probability distributions, in turn, much effort has gone into characterising when local consistency guarantees global consistency. The so-called local-to-global consistency property refers to the situation where the underlying structure (such as a database schema or a collection of variable sets) guarantees that every locally consistent family (of relations or distributions) can be extended to a globally consistent one. The characterisations of this property are typically formulated in the language of graph theory.
For relational databases it is known that a database schema satisfies the local-to-global consistency property if and only if it forms an acyclic hypergraph (Beeri et al. 1983). Similar characterisations have also been shown in the context of distributions and multisets (Vorob'ev 1962; Atserias and Kolaitis 2021). More recently, these results have been generalised in the setting of $K$-relations, which are relations whose tuples are annotated with elements from some positive and commutative monoid (Atserias and Kolaitis 2024). In this case, the assumption of acyclicity of the schema is not always sufficient; a more involved characterisation is required. Our Contributions. In this article, we turn the tables and treat local consistency not as a proxy for global consistency, but as a foundational concept in its own right. Analogously to (Atserias and Kolaitis 2024), we adopt the useful abstraction of $K$-relations, which allows us to treat families of relations and distributions uniformly. We also redirect our focus from domains such as information theory and graph theory to logical consequence, and investigate how it is impacted by the shift from global to local consistency. As a test case, we study the logical entailment between functional dependencies (FDs). Our approach bears similarity to the logic of local inference of Kishida (2016) and to the quantum team logic of Hyttinen, Paolini, and Väänänen (2015), which are geared for applications in quantum theory, where local consistency does not generally entail the existence of a global model (cf. Bell's theorem). After defining our theoretical framework, we investigate when a given family of locally consistent relations $\mathcal{R}$ can be enriched to a family of locally consistent $K$-relations (with the same support). Here, we give a complete characterisation for the case where $K$ is cancellative and $\mathcal{R}$ is a contextual family over what we refer to as a chordless-cycle context set.
Moreover, we establish a sound and complete axiomatisation for the entailment of unary FDs in the setting of locally consistent $K$-relations. In particular, we observe that the usual transitivity rule of FDs becomes unsound in this context. Interestingly, the transitivity rule disperses into an infinite collection of weaker rules that we name chain and cycle rules. In the standard unirelational context these rules are all derivable from the transitivity rule, and one instance of a chain rule derives the transitivity rule. Finally, we establish that the derivability question for our axiomatisation can be decided in polynomial time. As an instantiation of our result, we obtain insight into the entailment of FDs in settings that lack global consistency, such as quantum foundations. # 2 Preliminaries For a natural number $n$, we write $[n] := \{0, 1, \ldots, n-1\}$. We fix a countably infinite set Var of variables. Given a set $A$, an assignment (of $A$) is a function $s$ that maps a finite set $D \subseteq \mathrm{Var}$ of variables to some values (in $A$). We call $D$ the domain of $s$, written $\mathrm{Dom}(s)$. The set of all assignments $s \colon D \to A$ is denoted $\mathrm{As}(D, A)$. A monoid is an algebraic structure $K = (K, +, 0)$, where $+$ is associative and $0$ is the identity element of $+$. $K$ is positive if $a + b = 0$ entails $a = 0 = b$, for all $a, b \in K$, and (left) cancellative if $a + b = a + c$ entails $b = c$. We associate each monoid $K$ with its natural order $\leq$, defined by $a \leq b \iff \exists c : a + c = b$. The natural order of a monoid is reflexive and transitive, meaning that it is a preorder. In this paper, $K$ will always denote a positive commutative non-trivial monoid (the trivial monoid contains only $0$).
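On finite carriers, the monoid properties above can be checked by brute force. The following Python sketch (helper names are ours) verifies positivity and cancellativity and decides the natural order for small examples:

```python
from itertools import product

def is_positive(elems, add, zero):
    """K is positive if a + b = 0 entails a = 0 = b."""
    return all(a == zero and b == zero
               for a, b in product(elems, repeat=2) if add(a, b) == zero)

def is_cancellative(elems, add):
    """K is (left) cancellative if a + b = a + c entails b = c."""
    return all(b == c for a, b, c in product(elems, repeat=3)
               if add(a, b) == add(a, c))

def leq(a, b, elems, add):
    """Natural order: a <= b iff a + c = b for some c."""
    return any(add(a, c) == b for c in elems)
```

For instance, the Boolean monoid $\mathbb{B}$ is positive but not cancellative ($1 \vee 1 = 1 \vee 0$ yet $1 \neq 0$), whereas a fragment of $\mathbb{N}$ under addition is both.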
Given a finite set of variables $D \subseteq \mathrm{Var}$ and a finite set $A$, a $K$-relation over $D$ is a function $R \colon \mathrm{As}(D, A) \to K$. We write $\mathrm{Vars}(R)$ for the set of variables $D$ and $\mathrm{Dom}(R)$ for the set $\mathrm{As}(D, A)$. The support of $R$ is the relation $\mathrm{Supp}(R) := \{s \in \mathrm{Dom}(R) \mid R(s) \neq 0\}$. For a relation $R$ and an element $c \in K$ of a monoid $K$, we denote by $cR$ the $K$-relation with support $R$ such that $(cR)(s) = c$ for all $s \in R$. For two $K$-relations with the same domain $X$, $R \colon X \to K$ and $S \colon X \to K$, we define the $K$-relation $T = R + S$ by $T \colon X \to K$ and $T(s) = R(s) + S(s)$ for all $s \in X$. For a set of variables $D'$, the $K$-relation $R \harpoonright D'$ is the marginalisation of the $K$-relation $R$ to the variables in $D'$. This means that $R \harpoonright D'$ is a $K$-relation $\mathrm{As}(D', A) \to K$ s.t. $$ (R \harpoonright D')(s) := \sum_{\substack{s' \in \mathrm{Dom}(R) \\ s = s' \harpoonright D'}} R(s'), $$ where $s \harpoonright D'$ is the restriction of the assignment $s$ to $D'$ and $\sum$ is the aggregate sum of the monoid. In addition to the general case, we consider the Boolean monoid $\mathbb{B} = (\{0, 1\}, \vee, 0)$ and the monoids of non-negative reals $\mathbb{R}_{\ge 0} = ([0, \infty), +, 0)$ and natural numbers $\mathbb{N} = (\mathbb{N}, +, 0)$ with their usual addition. # 3 $K$-relations and Contextual Families In quantum information theory, a context is a set of attributes that can be measured together.
More formally, a context $C$ is a set of variables and a context set $\mathcal{C}$ is a downward closed set of contexts (i.e., $C \in \mathcal{C}$ and $C' \subseteq C$ implies $C' \in \mathcal{C}$). We often represent a context set $\mathcal{C}$ in terms of its maximal elements; for instance, we may write $\mathcal{C} = \{\{x, y\}, \{y, z\}\}$ to denote $\mathcal{C} = \{\{x, y\}, \{y, z\}, \{x\}, \{y\}, \{z\}, \emptyset\}$. To remove clutter, we often write a context $\{x_1, \ldots, x_n\}$ as $x_1 \ldots x_n$. Let $R$ and $S$ be $K$-relations over sets of variables $D$ and $D'$, respectively. We say that $R$ and $S$ are consistent if $R \harpoonright (D \cap D') = S \harpoonright (D \cap D')$. A family of $K$-relations $\mathcal{R}$ is locally consistent if all pairs of $K$-relations from $\mathcal{R}$ are consistent, and globally consistent if there exists a $K$-relation $R$ such that $\mathrm{Vars}(R) = \bigcup_{S \in \mathcal{R}} \mathrm{Vars}(S)$ and $R \harpoonright \mathrm{Vars}(S) = S$ for all $S \in \mathcal{R}$. Figure 1: Teachers, courses, and students. Three tables: (Student, Teacher) with rows (Alice, Charlie) and (Bob, David); (Teacher, Course) with rows (Charlie, Math) and (David, CS); (Course, Student) with rows (Math, Alice), (CS, Alice), and (CS, Bob). Definition 1 (Contextual $K$-families). Let $\mathcal{C}$ be a context set. A contextual $K$-family over $\mathcal{C}$ is a locally consistent set of $K$-relations $\mathcal{R}$ containing exactly one $K$-relation $R$ with domain $C$, for each $C \in \mathcal{C}$. The $K$-relation of $\mathcal{R}$ with domain $C$ is referred to as $\mathcal{R}_C$.
The set $\mathcal{C}$ is the context set of $\mathcal{R}$, denoted $\mathrm{Conts}(\mathcal{R})$. The support $\mathrm{Supp}(\mathcal{R})$ of $\mathcal{R}$ is defined as $\{\mathrm{Supp}(\mathcal{R}_C) \mid C \in \mathcal{C}\}$. A contextual $\mathbb{B}$-family is simply called a contextual family, and identified with its support. Proposition 2. If $\mathcal{R}$ is a contextual $K$-family, then $\mathrm{Supp}(\mathcal{R})$ is a contextual family. If $\mathcal{R}$ is a contextual family and $\alpha \in K$, we set $\alpha\mathcal{R}$ as $\{\alpha\mathcal{R}_C \mid C \in \mathcal{C}\}$. In contrast to the above, if $\mathcal{R}$ is a contextual family, $\alpha\mathcal{R}$ is not necessarily a contextual $K$-family (it may violate local consistency; cf. Ex. 5). Let $\mathcal{R}$ and $\mathcal{S}$ be contextual $K$-families over a shared context set $\mathcal{C}$. We set $\mathcal{R} + \mathcal{S}$ as $\{\mathcal{R}_C + \mathcal{S}_C \mid C \in \mathcal{C}\}$. Clearly, local consistency is preserved under this operation. Proposition 3. If $\mathcal{R}$ and $\mathcal{S}$ are contextual $K$-families over a shared context set $\mathcal{C}$, then $\mathcal{R} + \mathcal{S}$ is a contextual $K$-family. A contextual family $\mathcal{R}$ is sometimes identified with the set of its assignments $\bigcup \mathcal{R} = \bigcup_{C \in \mathcal{C}} \mathcal{R}_C$. Similarly, a contextual $K$-family $\mathcal{S}$ can be viewed as a single function $\# \colon \bigcup \mathrm{Supp}(\mathcal{S}) \to K_{\neq 0}$, where $K_{\neq 0} = \{a \in K \mid a \neq 0\}$.
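For $\mathbb{B}$-relations (plain relations), marginalisation and local consistency are directly executable. The following Python sketch (helper names ours) encodes the three tables of Fig. 1 and confirms that they form a locally consistent contextual family:

```python
def restrict(s, D):
    """Restrict an assignment (a dict) to the variables in D."""
    return frozenset((x, v) for x, v in s.items() if x in D)

def marginal(R, D):
    """Marginalisation of a B-relation (a set of assignments) to D:
    the set of restrictions of its tuples."""
    return {restrict(s, D) for s in R}

def locally_consistent(tables):
    """Check pairwise consistency on overlapping variable sets."""
    rels = list(tables.items())
    for i in range(len(rels)):
        for j in range(i + 1, len(rels)):
            (Di, Ri), (Dj, Rj) = rels[i], rels[j]
            overlap = set(Di) & set(Dj)
            if marginal(Ri, overlap) != marginal(Rj, overlap):
                return False
    return True

# The three tables of Fig. 1, keyed by their contexts.
fig1 = {
    ("Student", "Teacher"): [{"Student": "Alice", "Teacher": "Charlie"},
                             {"Student": "Bob", "Teacher": "David"}],
    ("Teacher", "Course"):  [{"Teacher": "Charlie", "Course": "Math"},
                             {"Teacher": "David", "Course": "CS"}],
    ("Course", "Student"):  [{"Course": "Math", "Student": "Alice"},
                             {"Course": "CS", "Student": "Alice"},
                             {"Course": "CS", "Student": "Bob"}],
}
```

Dropping the row (CS, Bob), for example, breaks local consistency: the Student marginals of the top-left and bottom tables would then disagree ({Alice, Bob} vs. {Alice}).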
A contextual family is called $K$-realisable if it is the support of some contextual $K$-family. Example 4. The contextual family in Fig. 1 consists of three tables. Note that local consistency holds. In particular, each teacher both supervises students and teaches courses, each student has a supervisor and enrols in some course, and each course has students and a teacher. Nevertheless, global consistency does not hold. To see this, suppose toward a contradiction that there exists a single relation that projects to the three tables in the figure. Let $s$ be an assignment of this relation mapping (Course, Student) to (CS, Alice). Then $s$ maps (Teacher, Course) to (David, CS), as can be seen from the top-right table. Thus, $s$ maps (Student, Teacher) to (Alice, David). This contradicts the fact that (Alice, David) does not appear in the top-left table. # 3.1 $K$-realisability of Contextual Families Let us now turn our attention to the $K$-realisability of contextual families. It is natural to ask whether or not every contextual family is the support of a contextual $K$-family. This turns out not to be the case for $K = \mathbb{R}_{\geq 0}$. A counterexample—in our terminology, a contextual family over the context set $\{ab, ab', a'b, a'b'\}$—is given in (Abramsky 2013, Proposition 9.1). Example 4 provides another counterexample, as we argue next. Example 5. Consider the contextual family in Fig. 1. For a contradiction, suppose this contextual family is the support of some contextual $\mathbb{R}_{\geq 0}$-family, represented as a function $\# \colon S \to \mathbb{R}_{>0}$, where $S$ is the set of assignments in Fig. 1. By local consistency, $\#$ extends uniquely to assignments on individual variables.
Now $$ \begin{aligned} \#(\mathrm{Student} \mapsto \mathrm{Bob}) &= \#(\mathrm{Teacher} \mapsto \mathrm{David}) \\ &= \#(\mathrm{Course} \mapsto \mathrm{CS}) \\ &= \#(\mathrm{Course},\, \mathrm{Student} \mapsto \mathrm{CS},\, \mathrm{Alice}) \\ &\quad + \#(\mathrm{Course},\, \mathrm{Student} \mapsto \mathrm{CS},\, \mathrm{Bob}) \\ &> \#(\mathrm{Student} \mapsto \mathrm{Bob}), \end{aligned} $$ where the inequality follows since each value of $\#$ is strictly positive. Thus the contextual family of Fig. 1 cannot be obtained as the support of any contextual $\mathbb{R}_{\geq 0}$-family. Next, we completely characterise which contextual families are supports of some contextual $K$-families for cancellative monoids $K$ and context sets that form a chordless cycle (defined below). We show that a contextual family $\mathcal{R}$ is the support of some contextual $K$-family if and only if its associated overlap projection graph has an edge cycle cover; both terms are defined below. For what follows, we treat addition and subtraction of natural numbers $i \in [n]$ modulo $n$. We call a context set $\mathcal{C} = \{C_0, \ldots, C_{n-1}\}$, where $n \geq 3$, a chordless-cycle context set if for all $i, j \in [n]$, the contexts $C_i$ and $C_j$ intersect if and only if $i = j \pm 1$. For the remainder of this section, if the context set $\mathcal{C} = \{C_0, \ldots, C_{n-1}\}$ is implicitly understood, we write $\mathcal{S}_i$ as a shorthand for $\mathcal{S}_{C_i}$, given a contextual $K$-family $\mathcal{S}$ over $\mathcal{C}$. Let $\mathcal{R}$ be a contextual family over a chordless-cycle context set $\mathcal{C} = \{C_0, \ldots, C_{n-1}\}$.
The overlap projection graph of $\mathcal{R}$, denoted $\mathrm{OPGraph}(\mathcal{R})$, is the digraph s.t.: $\bullet$ The vertices consist of the restrictions $s \harpoonright (C_i \cap C_{i+1})$, for each $s \in \mathcal{R}_i$, $i \in [n]$, and • There is a directed edge from $s \harpoonright (C_i \cap C_{i-1})$ to $s \harpoonright (C_i \cap C_{i+1})$, for each $s \in \mathcal{R}_i$, $i \in [n]$. An edge from a vertex $u$ to a vertex $v$ that is generated by an assignment $s$ is denoted by $u \xrightarrow{s} v$. A digraph $G = (V, E)$ is called cyclically $n$-partite if there exists a partition $\{V_0, \ldots, V_{n-1}\}$ of $V$ such that for every edge $(u, v) \in E$ we have $u \in V_i$ and $v \in V_{i+1}$ for some $i \in [n]$. The following lemma is a simple observation. Lemma 6. Let $\mathcal{R}$ be a contextual family over a chordless-cycle context set $\mathcal{C} = \{C_0, \ldots, C_{n-1}\}$. Then $\mathrm{OPGraph}(\mathcal{R})$ is cyclically $n$-partite. A digraph $G = (V, E)$ has an edge cycle cover if every edge $(u, v) \in E$ belongs to some cycle. We will characterise $K$-realisable contextual families with respect to chordless cycles using this notion. Figure 2: Example of an Overlap Projection Graph. The next example showcases the concepts introduced so far. Example 7. Consider again the contextual family in Figure 1. The set $\mathcal{C} = \{C_1, C_2, C_3\}$, where $C_1 = \{\mathrm{Student}, \mathrm{Teacher}\}$, $C_2 = \{\mathrm{Teacher}, \mathrm{Course}\}$, and $C_3 = \{\mathrm{Course}, \mathrm{Student}\}$, is a chordless-cycle context set. The corresponding overlap projection graph, depicted in Figure 2, has 6 vertices (one for each value) and 7 edges (one for each assignment).
In the figure, we represent a vertex, that is, a restriction $s \harpoonright \{x\}$, simply as the unique value $s(x)$ it corresponds to. We note that the graph is cyclically 3-partite, as we can partition the vertices into students, teachers, and courses with edges between them. Moreover, the vertices Alice, Charlie, and Math form a cycle, and similarly Bob, David, and CS form a cycle. Each edge belongs to some cycle, except for the edge from CS to Alice. The graph thus has no edge cycle cover. To characterise $K$-realisable contextual families over chordless cycles, we need the following lemma, which states that every simple cycle gives rise to a uniformly weighted $K$-relation. Recall that a cycle $(v_1, \ldots, v_\ell, v_1)$ in a graph is called simple if it does not visit any vertex twice. Let us then call a contextual family $\mathcal{R}$ over a chordless-cycle context set simply cyclic if $\mathrm{OPGraph}(\mathcal{R})$ contains a simple cycle $$ v_1 \xrightarrow{s_1} v_2 \xrightarrow{s_2} \ldots \xrightarrow{s_{\ell-2}} v_{\ell-1} \xrightarrow{s_{\ell-1}} v_\ell \xrightarrow{s_\ell} v_1, \qquad (1) $$ where $\bigcup \mathcal{R} = \{s_1, \ldots, s_\ell\}$. The next lemma states that any simply cyclic family can be realised as a $K$-family using uniform multiplicities. Lemma 8. Let $\mathcal{R}$ be a simply cyclic contextual family over a chordless-cycle context set $\mathcal{C} = \{C_0, \ldots, C_{n-1}\}$. Let $K$ be a positive commutative monoid and $0 \neq w \in K$. Then $w\mathcal{R}$ is a contextual $K$-family, and $\mathrm{Supp}(w\mathcal{R}) = \mathcal{R}$. Proof. We need to prove that $w\mathcal{R}$ is locally consistent. Suppose the simple cycle has length $\ell$, as in (1).
Since $\mathrm{OPGraph}(\mathcal{R})$ is cyclically $n$-partite (Lemma 6), $\ell$ must be a multiple of $n$; that is, $n \cdot m = \ell$ for some positive integer $m$. Fix $i \in [n]$. One observes that $$ |\mathcal{R}_i \harpoonright (C_{i-1} \cap C_i)| = |\mathcal{R}_i| = |\mathcal{R}_i \harpoonright (C_i \cap C_{i+1})| = m. $$ Moreover, each element of $\mathcal{R}_i \harpoonright (C_i \cap C_{i+1})$ appears in $\mathcal{R}_{i+1} \harpoonright (C_i \cap C_{i+1})$, and conversely each element of $\mathcal{R}_{i+1} \harpoonright (C_i \cap C_{i+1})$ appears in $\mathcal{R}_i \harpoonright (C_i \cap C_{i+1})$. We conclude that $(w\mathcal{R})_i \harpoonright (C_i \cap C_{i+1}) = (w\mathcal{R})_{i+1} \harpoonright (C_i \cap C_{i+1})$. If $i$ and $j$ are not adjacent, then $C_i \cap C_j = \varnothing$. For this case it suffices to note that $(w\mathcal{R})_i \harpoonright \emptyset$ consists of the empty assignment associated with the sum $w + \cdots + w$ of length $m$. Thus $w\mathcal{R}$ is a contextual $K$-family. It follows by positivity of $K$ that $\mathrm{Supp}(w\mathcal{R}) = \mathcal{R}$. □ Theorem 9. Let $K$ be a positive commutative cancellative monoid and $\mathcal{R}$ a contextual family over a chordless-cycle context set $\mathcal{C} = \{C_0, \ldots, C_{n-1}\}$. Then, $\mathcal{R}$ is the support of some contextual $K$-family if and only if its overlap projection graph has an edge cycle cover. Proof. ($\Longrightarrow$) Let $\mathcal{R}$ be the support of some contextual $K$-family $\mathcal{S}$. Then $\mathcal{S}$ can be represented as $\# \colon \bigcup \mathcal{R} \to K_{\neq 0}$, a mapping which we extend to sets $S$ by $\#(S) := \sum_{s \in S} \#(s)$.
Toward a contradiction, assume there is no edge cycle cover for OPGraph$(\mathcal{R})$. Then, some $s \in \bigcup \mathcal{R}$ gives rise to an edge that is not in any cycle. Without loss of generality, we assume that $s$ is associated with the context $C_0$, that is, it belongs to $\mathcal{R}_0$. We define sets of assignments $L^j$, $j \geq 0$, inductively as follows: • $L^0 := \{ s' \in \mathcal{R}_1 \mid s \mid ( C_0 \cap C_1 ) = s' \mid ( C_0 \cap C_1 ) \}$, • $L^{j+1}$ extends $L^j$ with the assignments $s' \in \mathcal{R}_{j+1}$ (indices taken modulo $n$) such that there is $s'' \in L^j \cap \mathcal{R}_j$ with $$ s'' \mid ( C_j \cap C_{j+1} ) = s' \mid ( C_j \cap C_{j+1} ) . $$ Now, let $M := \bigcup_{j=0}^{\infty} L^j$. Since $\mathcal{R}$ is finite, there exists a positive integer $k$ such that $M = L^k$. Furthermore, we write $M_i := \mathcal{R}_i \cap M$, for $i \in [n]$. That is, $M_i$ consists of the assignments of $M$ over the context $C_i$. We claim that the following inequalities hold: $\#( M_0 \cup \{ s \} ) \leq \#( M_1 ) \leq \ldots \leq \#( M_{n-1} ) \leq \#( M_0 ) ,$ (2) where $\leq$ refers to the natural (pre)order of $K$. Note that if this claim holds, we reach a contradiction. Indeed, $s \notin M_0$ by hypothesis (otherwise the edge of $s$ would lie on a cycle), hence the transitivity of $\leq$ implies $\#( M_0 ) + \#( \{ s \} ) = \#( M_0 \cup \{ s \} ) \leq \#( M_0 )$. Thus, there is $c \in K$ such that $\#( M_0 ) + \#( \{ s \} ) + c = \#( M_0 )$. By cancellativity, $\#( \{ s \} ) + c = 0$, and hence by positivity, $\#( s ) = 0$.
This contradicts the fact that $s \in \bigcup \mathcal{R}$, where $\#$ is a function from $\bigcup \mathcal{R}$ to $K_{\neq 0}$. Thus it remains to prove (2). First, define $$ \begin{array}{rl} & \mathcal{R}_{i,+} = \{ s' \in \mathcal{R}_i \mid s' \mid ( C_i \cap C_{i+1} ) \in M_i \mid ( C_i \cap C_{i+1} ) \} , \\ & \mathcal{R}_{i+1,-} = \{ s' \in \mathcal{R}_{i+1} \mid s' \mid ( C_i \cap C_{i+1} ) \in M_i \mid ( C_i \cap C_{i+1} ) \} . \end{array} $$ Then we observe that, for $i \in \{ 1, \ldots, n-1 \}$, 1. $M_i \subseteq \mathcal{R}_{i,+}$, 2. $\#( \mathcal{R}_{i,+} ) = \#( \mathcal{R}_{i+1,-} )$, and 3. $\mathcal{R}_{i+1,-} = M_{i+1}$. Item 1 is immediate, item 2 follows by local consistency, and item 3 follows by the construction of $M$. Since $\leq$ is transitive and reflexive, we obtain $\#( M_i ) \leq \#( M_{i+1} )$. Also, for the first inequality in (2), we obtain $M_0 \cup \{ s \} \subseteq \mathcal{R}'_{0,+}$, $\#( \mathcal{R}'_{0,+} ) = \#( \mathcal{R}'_{1,-} )$, $\mathcal{R}'_{1,-} = M_1$, where $\mathcal{R}'_{0,+}, \mathcal{R}'_{1,-}$ are defined otherwise as $\mathcal{R}_{0,+}, \mathcal{R}_{1,-}$, except that $M_0$ is replaced with $M_0 \cup \{ s \}$. This concludes the proof of the “$\Longrightarrow$” direction. ($\Longleftarrow$) Suppose OPGraph$(\mathcal{R})$ has an edge cycle cover. Then, each assignment $s \in \bigcup \mathcal{R}$ belongs to a cycle in OPGraph$(\mathcal{R})$.
In particular, there must be a simply cyclic contextual family $\mathcal{T}_s$ such that $s \in \bigcup \mathcal{T}_s \subseteq \bigcup \mathcal{R}$. Then, Lemma 8 entails that $w\mathcal{T}_s$ is a contextual $K$-family, for any $w \neq 0$ (which exists since $K$ is non-trivial). Since contextual $K$-families over $\mathcal{C}$ are closed under addition (Prop. 3), the sum $\sum_{s \in \bigcup \mathcal{R}} w\mathcal{T}_s$ is a contextual $K$-family. Its support is $\mathcal{R}$, since $K$ is positive. Hence the “$\Longleftarrow$” direction follows. □ Consider our running example in light of this theorem: Example 10. Example 7 and Thm. 9 imply that the contextual family in Figure 1 is not $K$-realisable, for any cancellative monoid $K$. Since $\mathbb{R}_{\geq 0}$ and $\mathbb{N}$ are cancellative, the family in question does not arise as the support of any locally consistent family of distributions or multisets. However, after adding (Course, Student) $\mapsto$ (Math, Bob), the extended overlap projection graph has an edge cycle cover. In this case, Theorem 9 implies that the obtained contextual family is $K$-realisable. In the case of distributions and multisets, we can introduce uniform weights to each table (in such a way that the total weights of the tables match one another).

# 3.2 Contextual $\mathbb{R}_{\geq 0}$-realisability

Let us next briefly consider the special case of $\mathbb{R}_{\geq 0}$-realisability. As a consequence of the previous section, we observe that a contextual $\mathbb{R}_{\geq 0}$-family decomposes into its cycles. Moreover, we establish that a contextual family is the support of some contextual $\mathbb{R}_{\geq 0}$-family if and only if it is the support of some contextual $\mathbb{N}$-family. Corollary 11.
Each contextual $\mathbb{R}_{\geq 0}$-family over a chordless-cycle context set is a non-negative combination of its cycles. Proof. Let $\mathcal{R}$ be a contextual $\mathbb{R}_{\geq 0}$-family, with a multiplicity mapping $\# \colon \bigcup \operatorname{Supp}(\mathcal{R}) \to \mathbb{R}_{>0}$. Assume $\mathcal{R}$ is non-empty; otherwise the claim holds trivially. Theorem 9 entails that there is a non-empty set of assignments $S \subseteq \bigcup \operatorname{Supp}(\mathcal{R})$ forming a simple cycle in OPGraph$(\mathcal{R})$. Let $w = \min \{ \#(s) \mid s \in S \}$. Define the updated family $\mathcal{R}' := \mathcal{R} - wS$, where subtraction is defined in the obvious way. Then, $\mathcal{R}'$ is a contextual $\mathbb{R}_{\geq 0}$-family. We repeat this process recursively, setting $\mathcal{R} := \mathcal{R} - wS$ at each step, until the family becomes empty. Since $S$ is removed entirely at each step (i.e., it no longer appears in the support), no simple cycle $S$ is used more than once, meaning that the process will terminate. This leads to a decomposition of the form $\mathcal{R} = w_1 S_1 + \cdots + w_\ell S_\ell$. □ This concludes our discussion of context sets that form chordless cycles. A natural follow-up question is whether a similar graph-theoretic characterisation exists for $\mathbb{R}_{\geq 0}$-realisability over arbitrary context sets. The next example suggests that the answer is negative. We now write $x_1 \ldots x_n \mapsto a_1 \ldots a_n$ for the assignment that maps $x_1, \ldots, x_n$ respectively to $a_1, \ldots, a_n$. Example 12. Consider the contextual family represented in Figure 3.
We note that restricted to $\{ ab, bc, ca \}$ the corresponding overlap projection graph has an edge cycle cover, and thus the restriction is $\mathbb{R}_{\geq 0}$-realisable by Theorem 9. Similar reasoning applies to the restriction on $\{ ab', b'c, ca \}$. However, over the full context set, $\mathbb{R}_{\geq 0}$-realisability does not hold anymore. Assume toward a contradiction that this is not the case. Let $S$ be the set of assignments represented in Figure 3. Then, Proposition 13 yields a mapping $\pi \colon S \to \mathbb{N}$ which forms a contextual $\mathbb{N}$-relation and is such that $\pi( bc \mapsto 01 ) = m \geq 1$. By local consistency, $\pi( ca \mapsto 10 ) = n \geq m$, and furthermore $\pi( ca \mapsto 10 ) = \pi( ab' \mapsto 00 ) = \pi( b'c \mapsto 00 ) = \pi( ca \mapsto 01 ) = \pi( ab \mapsto 11 ) = \pi( bc \mapsto 11 )$. Thus $$ \pi( bc \mapsto 01 ) + \pi( bc \mapsto 11 ) = n + m \neq n = \pi( ca \mapsto 10 ) , $$ violating local consistency. Nevertheless, we can show that $\mathbb{R}_{\geq 0}$-realisability coincides with $\mathbb{N}$-realisability. Proposition 13. A contextual family $\mathcal{R}$ is the support of some contextual $\mathbb{R}_{\geq 0}$-family if and only if it is the support of some contextual $\mathbb{N}$-family. Proof. It suffices to prove the “only-if” direction. For each contextual family $\mathcal{R}$, there exists a system of linear inequalities $\mathbf{A}\mathbf{x} \geq \mathbf{b}$ (with rational coefficients) that has a (real) solution $\mathbf{x}$ if and only if $\mathcal{R}$ is the support of some contextual $\mathbb{R}_{\geq 0}$-family. Namely, the variables in $\mathbf{x}$ represent the “weights” of the assignments in $\mathcal{R}$, and the inequalities capture the local consistency criterion and the non-negativity of these “weights”.
Furthermore, it is well-known that if a system of the described form has a real solution, then it has a rational solution (Schrijver 1999). Hence $\mathcal{R}$ must be the support of some contextual $\mathbb{Q}_{\geq 0}$-family, which can be transformed into a contextual $\mathbb{N}$-family by multiplying the weight of each assignment by a sufficiently large constant. □

# 4 Entailment of Dependence

Next, we focus on the entailment problem of functional dependence on contextual $K$-families. We first summarise the well-known Armstrong’s axioms for functional dependence in the classical setting (Armstrong 1974). We then develop the theory in the contextual setting. In particular, we give a sound and complete axiomatisation for unary dependencies and establish that derivability can be decided in polynomial time.

# 4.1 Classical setting

A functional dependency (FD) is a statement of the form $x \to y$, where $x, y \subseteq D$ are finite sets of variables. An FD $x \to y$ is satisfied in a relation $X$ (written $X \models x \to y$) if for all $s, s' \in X$: $s(x) = s'(x)$ entails $s(y) = s'(y)$. For a set $\Sigma$ of FDs, we write $\Sigma \models x \to y$ if for all relations $X$: if $X \models \phi$ for all $\phi \in \Sigma$, then $X \models x \to y$. Armstrong (1974) established that the entailment $\Sigma \models x \to y$ is completely axiomatised by the following set of three axioms, termed Armstrong’s axioms: • Reflexivity. $x \to y$, for $y \subseteq x$. • Augmentation. If $x \to y$, then for any $z$, $xz \to yz$. • Transitivity. If $x \to y$ and $y \to z$, then $x \to z$.

# 4.2 FDs in the Contextual Setting

For a $K$-relation $R$ and an FD $x \to y$, it is natural to set that $R \models x \to y$ if $\operatorname{Supp}(R) \models x \to y$. For a functional dependency $\phi$, we write $\operatorname{Vars}(\phi)$ for the set of variables occurring in $\phi$.
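The classical satisfaction condition is straightforward to mechanise. The following sketch (plain Python; the representation of relations as lists of dicts and the helper name are ours, not from the paper) checks $X \models x \to y$ by verifying that any two assignments agreeing on $x$ also agree on $y$:

```python
def satisfies_fd(relation, x, y):
    """X |= x -> y: any two assignments agreeing on x must agree on y."""
    seen = {}
    for s in relation:
        key = tuple(s[v] for v in sorted(x))
        val = tuple(s[v] for v in sorted(y))
        # setdefault records the first y-value seen for this x-value;
        # a later mismatch witnesses a violation of the FD.
        if seen.setdefault(key, val) != val:
            return False
    return True

# A relation over a, b, c that satisfies a -> b but violates b -> c.
X = [{"a": 0, "b": 0, "c": 0},
     {"a": 1, "b": 0, "c": 1}]
assert satisfies_fd(X, {"a"}, {"b"})
assert not satisfies_fd(X, {"b"}, {"c"})
```

Instances of Armstrong's axioms, such as the reflexivity instance $ab \to a$, can be spot-checked on concrete relations with the same helper.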
Given a contextual $K$-family $\mathcal{R}$ over a context set $\mathcal{C}$ and an FD $\phi$ such that $\operatorname{Vars}(\phi) \in \mathcal{C}$, we say that $\mathcal{R}$ satisfies $\phi$, written $\mathcal{R} \models \phi$, if $\mathcal{R}_{\operatorname{Vars}(\phi)} \models \phi$. Note that local consistency guarantees that $\mathcal{R} \models \phi$ iff $\mathcal{R}_C \models \phi$, for any (all, resp.) context $C$ such that $\operatorname{Vars}(\phi) \subseteq C$. A finite set of FDs $\Sigma$ entails an individual FD $\phi$ over $K$ (written $\Sigma \models_K \phi$) if for every contextual $K$-family $\mathcal{R}$ over $\mathcal{C}$, such that $\operatorname{Vars}(\theta) \in \mathcal{C}$ for all $\theta \in \Sigma \cup \{ \phi \}$, $\mathcal{R} \models \Sigma$ implies $\mathcal{R} \models \phi$. If $K = \mathbb{B}$, we sometimes write $\Sigma \models \phi$ and say that $\Sigma$ entails $\phi$ (without explicitly mentioning $\mathbb{B}$). It is not difficult to see that reflexivity and augmentation remain sound for entailment in the contextual setting. Transitivity, in turn, is not sound. For this, consider the contextual family depicted in Fig. 1. It satisfies Student $\to$ Teacher and Teacher $\to$ Course but not Student $\to$ Course. Furthermore, a contextual $\mathbb{R}_{\geq 0}$-family that is a counterexample for transitivity is obtained by first extending the bottom table of Fig. 1 with (Course, Student) $\mapsto$ (Math, Bob), and then enriching each table with the uniform probability distribution. It is not difficult to see that the following weaker form of the transitivity rule, termed contextual transitivity, is sound in the contextual setting. Contextual transitivity. If $x \to y$, $y \to z$ and $xyz \to xyz$, then $x \to z$.
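The failure of transitivity can be checked concretely. The sketch below (plain Python; the concrete tuples are our reconstruction of the Figure 1 family from the running example's description, not taken verbatim from the paper) verifies that the three tables are locally consistent at the level of supports, satisfy Student $\to$ Teacher and Teacher $\to$ Course, yet violate Student $\to$ Course:

```python
def project(table, vars_sub):
    """Restrict each assignment (a dict) to a subset of its variables."""
    return {tuple(sorted((v, s[v]) for v in vars_sub)) for s in table}

def fd_holds(table, x, y):
    """Check the unary FD x -> y on a single table of assignments."""
    seen = {}
    for s in table:
        if seen.setdefault(s[x], s[y]) != s[y]:
            return False
    return True

# Reconstruction of the Figure 1 family; the tuples are an assumption
# pieced together from the running example (two cycles plus one edge).
ST = [{"Student": "Alice", "Teacher": "Charlie"},
      {"Student": "Bob",   "Teacher": "David"}]
TC = [{"Teacher": "Charlie", "Course": "Math"},
      {"Teacher": "David",   "Course": "CS"}]
CoS = [{"Course": "Math", "Student": "Alice"},
       {"Course": "CS",   "Student": "Bob"},
       {"Course": "CS",   "Student": "Alice"}]  # the edge on no cycle

# Local consistency on the pairwise overlaps (at the level of supports).
assert project(ST, ["Teacher"]) == project(TC, ["Teacher"])
assert project(TC, ["Course"]) == project(CoS, ["Course"])
assert project(CoS, ["Student"]) == project(ST, ["Student"])

# Transitivity fails: the premises hold, the conclusion does not.
assert fd_holds(ST, "Student", "Teacher")
assert fd_holds(TC, "Teacher", "Course")
assert not fd_holds(CoS, "Student", "Course")
```

The same harness, applied after adding (Course, Student) $\mapsto$ (Math, Bob), reproduces the extended family used in Example 10.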
Whether there exist sound and complete axioms, or whether (or when) $\mathbb{B}$-entailment, $\mathbb{R}_{\geq 0}$-entailment and $K$-entailment of FDs generally coincide, are left as open questions. We can however note the following simple property, which follows since every support of a contextual $K$-family is a contextual family, and since $\mathcal{R} \models \phi \Leftrightarrow \operatorname{Supp}(\mathcal{R}) \models \phi$ holds for all contextual $K$-families $\mathcal{R}$ and FDs $\phi$. Proposition 14. $\Sigma \models \phi$ implies $\Sigma \models_K \phi$, for any set of FDs $\Sigma \cup \{ \phi \}$ and positive commutative monoid $K$.

# 4.3 Unary FDs with Binary Contexts

Leaving aside general FDs, let us restrict attention to unary FDs. An FD $x \to y$ is called unary if both $x$ and $y$ are individual variables. Additionally, FDs of the form $x \to x$ are called context dependencies or CDs. A CD $x \to x$, where $x$ consists of two variables, is said to be binary. A CD $x \to x$ is thus a kind of a dummy FD which only states that $x$ is a part of the context set. For unary FDs with binary CDs, the converse of Proposition 14 holds. The result follows as a side product of our main technical result: a simple sound and complete axiomatisation of entailment for unary FDs with binary CDs. We only need to append reflexivity with one extra rule, stating that every cycle of unary functional dependencies can be inverted. This property, termed the cycle rule, is presented below. We write $x_1 \to x_2 \to \cdots \to x_{n-1} \to x_n$ as a shorthand for $x_1 \to x_2, x_2 \to x_3, \ldots, x_{n-1} \to x_n$. Cycle rule. For $k \geq 1$, if $$ x_1 \to x_2 \to \cdots \to x_k \to x_1 , $$ then $x_1 \to x_k$. Its soundness proof relies on the assumption of local consistency, as we shall shortly see.
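On the derivability side, a single application of the cycle rule is mechanical to test: $x \to y$ follows when $\Sigma$ contains a chain of FDs from $x$ to $y$ that is closed into a cycle by the premise $y \to x$. A small sketch (plain Python; the function names are ours), anticipating the reachability-based algorithms of Section 4.5:

```python
from itertools import product

def transitive_closure(edges, nodes):
    """Floyd-Warshall style reachability over a set of directed edges."""
    reach = set(edges)
    # product iterates its first component slowest, so k is the outer loop.
    for k, i, j in product(nodes, repeat=3):
        if (i, k) in reach and (k, j) in reach:
            reach.add((i, j))
    return reach

def cycle_rule_derives(sigma, x, y):
    """One cycle-rule application: a sigma-chain x -> ... -> y,
    closed into a cycle by the premise y -> x in sigma."""
    nodes = {v for edge in sigma for v in edge} | {x, y}
    reach = transitive_closure(sigma, nodes)
    return (y, x) in sigma and (x, y) in reach

# x -> y -> z -> x allows inverting the cycle: derive x -> z.
sigma = {("x", "y"), ("y", "z"), ("z", "x")}
assert cycle_rule_derives(sigma, "x", "z")
# Without an edge closing the cycle, nothing is derived.
assert not cycle_rule_derives({("x", "y"), ("y", "z")}, "x", "z")
```

By symmetry of the cycle, the same family of premises also yields $z \to y$ and $y \to x$, which the function confirms.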
We note that the notion of a formal proof has a subtle feature; the intermediate derivations should not introduce new contexts. Otherwise, for instance, it would be possible to derive $x \to z$ from $x \to y$ and $y \to z$ using reflexivity (to obtain $xyz \to xyz$) and contextual transitivity (to obtain $x \to z$). In contrast, the question of logical entailment of $x \to z$ by $x \to y$ and $y \to z$ has been formulated over contextual families having the contexts $xy, yz, xz$ but not necessarily the context $xyz$. Definition 15 (Derivation). Given a set of inference rules $S$ and a set of FDs $\Sigma \cup \{ \phi \}$, we say that $\phi$ is derivable from $\Sigma$ by $S$, written $\Sigma \vdash_S \phi$, if there is a finite sequence of FDs $\psi_0, \ldots, \psi_{n-1}$ such that 1. $\psi_{n-1} = \phi$, 2. for $i \in [n]$, $\psi_i$ is either from $\Sigma$ or obtained from $\{ \psi_0, \ldots, \psi_{i-1} \}$ by using one of the rules of $S$, 3. for $i \in [n]$, $\operatorname{Vars}(\psi_i) \subseteq \operatorname{Vars}(\theta)$, for some $\theta \in \Sigma \cup \{ \phi \}$. Specifically, Item 3 is the new criterion introduced for the contextual setting. As usual, we call $S$ complete if $\Sigma \models \phi$ entails $\Sigma \vdash_S \phi$, and sound if $\Sigma \vdash_S \phi$ entails $\Sigma \models \phi$. By $\mathcal{CR}$ we refer to the set consisting of reflexivity and the cycle rule for each $k \geq 1$. Lemma 16. Let $K$ be a positive commutative monoid. The cycle rule is sound for contextual $K$-families, i.e., $x_1 \to x_2 \to \cdots \to x_k \to x_1 \models_K x_1 \to x_k$, for all $k \geq 1$. Proof. We prove the case $K = \mathbb{B}$; the general case then follows from Proposition 14. Assuming the rule is not sound, we construct a contradiction with the assumption that the contextual family is finite. The construction is depicted in Figure 4. Let $\mathcal{R}$ be a contextual family that satisfies $x_1 \to x_2 \to \cdots \to x_k \to x_1$ but does not satisfy $x_1 \to x_k$. Now, for a relation over two variables $x_i, x_j$, let us write $( a_{i,\ell}, a_{j,\ell'} )$ to denote the assignment $( x_i, x_j ) \mapsto ( a_{i,\ell}, a_{j,\ell'} )$. By hypothesis, $\mathcal{R}_{x_1,x_k}$ contains two assignments $( a_{1,1}, a_{k,1} )$ and $( a_{1,1}, a_{k,2} )$, where $a_{k,1} \neq a_{k,2}$. By local consistency and the FD $x_{k-1} \to x_k$, we observe that $\mathcal{R}_{x_{k-1},x_k}$ then contains two assignments $( a_{k-1,1}, a_{k,1} )$ and $( a_{k-1,2}, a_{k,2} )$, where $a_{k-1,1} \neq a_{k-1,2}$. By induction we obtain that $\mathcal{R}_{x_1,x_2}$ also contains two assignments of the form $( b_{1,1}, a_{2,1} )$ and $( b_{1,2}, a_{2,2} )$, where $b_{1,1} \neq b_{1,2}$ and $a_{2,1} \neq a_{2,2}$. Then, choosing $a_{1,2} \in \{ b_{1,1}, b_{1,2} \} \setminus \{ a_{1,1} \}$, local consistency and the FD $x_k \to x_1$ entail that $\mathcal{R}_{x_1,x_k}$ contains an assignment of the form $( a_{1,2}, a_{k,3} )$, where $a_{k,3} \notin \{ a_{k,1}, a_{k,2} \}$.
Now similarly to the above, by induction $\mathcal{R}_{x_1,x_2}$ contains an assignment of the form $( b_{1,3}, a_{2,3} )$, where $b_{1,3} \notin \{ b_{1,1}, b_{1,2} \}$. In particular, the elements $b_{1,1}, b_{1,2}, b_{1,3}$ are distinct. Then, we choose $a_{1,3} \in \{ b_{1,1}, b_{1,2}, b_{1,3} \} \setminus \{ a_{1,1}, a_{1,2} \}$, and continue as before. Thus, we see that $\mathcal{R}$ must contain relations whose supports are infinite, which contradicts the definition. Therefore we conclude that the rule is sound. □ Notice that the previous proof constructs a counterexample showing that the cycle rule is not sound if relations with infinite support are allowed. Restricting to unary FDs and binary CDs, we next show that the cycle rule together with reflexivity forms an infinite sound and complete proof system. Theorem 17. For every positive commutative monoid $K$, $\Sigma \models_K \phi$ iff $\Sigma \vdash_{\mathcal{CR}} \phi$, assuming $\Sigma \cup \{ \phi \}$ is a set of unary FDs and binary CDs. Specifically, the cycle rule and reflexivity are sound and complete for unary FDs and binary CDs. Proof. We prove the case $K = \mathbb{B}$; the proof of the general case is identical. The only change required for the general case is that all relation constructions have to be replaced with uniformly annotated $K$-relations, that is, every assignment is annotated with the same monoid value in $K$. Also, note that $\mathcal{R} \models x \to y$ iff $\operatorname{Supp}(\mathcal{R}) \models x \to y$ holds by definition. First note that, if $\phi$ is a CD, then both $\Sigma \models \phi$ and $\Sigma \vdash_{\mathcal{CR}} \phi$ trivially hold. Thus we assume that $\phi$ is a unary FD. Let us consider soundness first.
Suppose $( \psi_1, \ldots, \psi_n )$ is a proof of $\psi_n$ from $\Sigma$, and let $\mathcal{R}$ be a contextual family over $\mathcal{C}$ such that $\mathcal{R} \models \Sigma$, and $\operatorname{Vars}(\theta) \in \mathcal{C}$ for all $\theta \in \Sigma \cup \{ \psi_n \}$. We need to show that $\mathcal{R} \models \psi_n$. As the induction hypothesis, suppose $\mathcal{R} \models \psi_i$ for all $i < j$, where $j \leq n$. Suppose also that $\psi_j$ is obtained from $\psi_{\ell_1}, \ldots, \psi_{\ell_m}$, where $\ell_1 < \cdots < \ell_m < j$, by using the cycle rule. In particular, the induction hypothesis entails that $\mathcal{R}$ satisfies all of $\psi_{\ell_1}, \ldots, \psi_{\ell_m}$ and their contexts are in $\mathcal{C}$ (due to Item 3 of Definition 15). Hence $\operatorname{Vars}(\psi_j) \in \mathcal{C}$, due to the form of the cycle rule. From Lemma 16 it now follows that $\mathcal{R} \models \psi_j$, for $\mathcal{R}$ contains all the necessary contexts. The case for reflexivity is immediate by Item 3 of Definition 15, concluding the induction proof. For completeness, we prove the contraposition: $\Sigma \not\models x \to y$ if $\Sigma \nvdash x \to y$. Assume $\Sigma \nvdash x \to y$. By reflexivity, $x$ and $y$ must then be different variables. Let $V$ be the set of variables that appear in $\Sigma \cup \{ x, y \}$. Consider the directed graph $G$ with node set $V$ and edges consisting of the functional dependencies $u \to v$ appearing in $\Sigma$. There are two cases: Case 1. There is no path from $x$ to $y$ in $G$. Let $X$ be the set of variables to which there is a path from $x$, including $x$ itself. We create one relation $R$ satisfying $\Sigma$ and falsifying $x \to y$.
This relation has two assignments: $s$ that maps each variable from $V$ to $0$, and $s'$ that maps each variable from $X$ to $0$ and the remaining variables from $V \setminus X$ to $1$. Since $x \in X$ and $y \in V \setminus X$, we obtain $R \not\models x \to y$. For every $u \to v \in \Sigma$, we have $R \models u \to v$; otherwise $u \in X$ and $v \in V \setminus X$, contradicting the definition of $X$. By taking the projections of $R$ on all subsets of $V$, we obtain a contextual family $\mathcal{R}$ witnessing $\Sigma \not\models x \to y$. Case 2. There is a path from $x$ to $y$ in $G$. We construct a counterexample contextual family $\mathcal{R}$ as follows. For each $u \to v$ or $uv \to uv$ appearing in $\Sigma$ such that $\{ u, v \} \neq \{ x, y \}$, construct a relation that has two assignments mapping $( u, v )$ respectively to $( 0, 0 )$ and $( 1, 1 )$. For $xy$, construct a relation that has four assignments mapping $( x, y )$ respectively to $( 0, 0 ), ( 0, 1 ), ( 1, 0 ), ( 1, 1 )$. The local consistency criterion is satisfied, since each variable takes the values $0$ and $1$ in every relation in which it appears. Let $\mathcal{R}$ denote the contextual family that arises from the aforementioned relations. Clearly $\mathcal{R} \not\models x \to y$, but $\mathcal{R} \models u \to v$ for each $u \to v \in \Sigma$ such that $\{ u, v \} \neq \{ x, y \}$. It remains to consider the case of $u \to v \in \Sigma$, where $\{ u, v \} = \{ x, y \}$. Clearly, it cannot be that $u \to v = x \to y$, since this would contradict the assumption that $\Sigma \nvdash x \to y$. Moreover, if $u \to v = y \to x$, then the path from $x$ to $y$ and the cycle rule entail $\Sigma \vdash x \to y$, contradicting our assumption. Hence we conclude that all FDs in $\Sigma$ are satisfied. The resulting contextual family $\mathcal{R}$ thus witnesses $\Sigma \not\models x \to y$, concluding the second case and the proof.
In the general case of $K \neq \mathbb{B}$, using positivity and non-triviality of $K$ we can select two nonzero monoid values $a, b \in K$ such that $a + a = b$. Then assignments in the above defined ($K$-)relations of cardinality 2 shall be annotated with $b$, while assignments in the ($K$-)relation of cardinality 4 shall be annotated with $a$. It is easy to see that with this amendment the proof goes through also in the general case. □ As the same axioms characterise entailment of unary FDs with respect to both $\mathbb{B}$ and $K$, we obtain the converse of Proposition 14 as a direct corollary. Corollary 18. Let $K$ be any positive commutative monoid. Now, $\Sigma \models \phi$ if and only if $\Sigma \models_K \phi$, for any set of unary FDs and binary CDs $\Sigma \cup \{ \phi \}$.

# 4.4 Unary FDs with Ternary Contexts

For unary FDs with ternary CDs, the interaction is significantly more intricate. Let us now abbreviate CDs by writing $x$ instead of $x \to x$. We have already observed that transitivity can fail: from $x_1 \to x_2$ and $x_2 \to x_3$, it does not necessarily follow that $x_1 \to x_3$. However, if we additionally assume $c \to x_3$ and the CDs $cx_1x_2$, $cx_2x_3$, and $cx_1x_3$, then $x_1 \to x_3$ does follow. The next rule captures this non-trivial interaction between unary FDs and ternary CDs (the above case is its instance with $n = 3$ and $c_i = c$ for all $i \in [n]$). Note that even the soundness of the rule is non-trivial.

Contextual chain rule. If

1. $x_1 \to x_2 \to \cdots \to x_{n-1} \to x_n$,
2. $c_1 \to x_n, c_2 \to x_n, \ldots, c_{n-1} \to x_n$,
3. $x_1 c_1 x_n$,
4. $x_1 c_1 x_2, c_1 x_2 c_2, x_2 c_2 x_3, \ldots, x_{n-2} c_{n-2} x_{n-1}, c_{n-2} x_{n-1} c_{n-1}, x_{n-1} c_{n-1} x_n$, and
5. $c_1 c_2 x_n, c_2 c_3 x_n, \ldots, c_{n-2} c_{n-1} x_n$,

then $x_1 \to x_n$. Proposition 19.
Let $K$ be a positive commutative monoid. The contextual chain rule is sound for $\models_K$. Proof. We prove the case $K = \mathbb{B}$; the general case then follows from Prop. 14. Let $\mathcal{R}$ be a contextual family such that the assumptions of the contextual chain rule hold. Suppose toward a contradiction that $\mathcal{R}$ does not satisfy $x_1 \to x_n$. Then $\mathcal{R}$ contains two assignments $s$ and $s'$ in the context $x_1 c_1 x_n$ such that $s(x_1) = s'(x_1)$ and $s(x_n) \neq s'(x_n)$. Consequently, $c_1 \to x_n$ entails that $s(c_1) \neq s'(c_1)$. These two assignments are illustrated in the top-left table of Fig. 5, where without loss of generality we use the values 0 and 1. By local consistency, $\mathcal{R}$ then contains two assignments $t$ and $t'$ in the context $x_1 c_1 x_2$ such that $t(x_1) = t'(x_1)$ and $t(c_1) \neq t'(c_1)$. Furthermore, $x_1 \to x_2$ entails $t(x_2) = t'(x_2)$. These assignments are illustrated in the top-middle table of Fig. 5, again using the values 0 and 1.
By local consistency and induction, the remaining contexts listed in item 4 of the contextual chain rule (ccr) contain assignment pairs of the form depicted in Fig. 5, with the values $b _ { i }$ and $b _ { i } ^ { \prime }$ being unknown. In particular, item 1 of ccr guarantees that the two assignments on the remaining relations agree on the variables $x _ { i }$ , $i \in [ n ]$ . Moving to the contexts listed in item 5 of ccr, the leftmost table in Fig. 6 illustrates two assignments for the context $c _ { 1 } c _ { 2 } x _ { n }$ ; the values of $c _ { 1 }$ and $c _ { 2 }$ are explained by local consistency with the top-right table of Fig. 5, and the values of $x _ { n }$ are explained by $c _ { 1 } \to x _ { n }$ together with local consistency with the top-left table in Fig. 5. The assignments for the context $c _ { 2 } c _ { 3 } x _ { n }$ in Fig. 6 arise by local consistency with the assignments for the context $c _ { 2 } x _ { 2 } c _ { 3 }$ in Fig. 5, and from $c _ { 2 } \to x _ { n }$ together with local consistency with the leftmost table in Fig. 6. Using this reasoning, by induction we eventually obtain the rightmost table in Fig. 6. However, observe now that the pair of variables $( c _ { n - 1 } , x _ { n } )$ is evaluated as $( 0 , 0 )$ and $( 0 , 1 )$ in Figures 5 and 6, respectively. By local consistency, this leads to a contradiction with the FD $c _ { n - 1 } \to x _ { n }$ . We thus conclude by contradiction that $\mathcal { R }$ satisfies $x _ { 1 } \to x _ { n }$ . □ The proof of the previous proposition suggests that even soundness of potential inference rules can be nontrivial to check for general FDs. We leave it for future work to analyse in more detail entailments between general FDs over contextual families. We conclude this section with the observation that the contextual transitivity rule is an instance of the contextual chain rule. 
This implies that reflexivity, augmentation, and the contextual chain rule constitute a sound axiomatisation in the contextual setting, which is complete for derivations within a single context. We write $\mathcal{NRA}$ for this set of axioms. Proposition 20. The contextual transitivity rule is an instance of the contextual chain rule. Proof. It is easy to check that the contextual transitivity rule is obtained from the contextual chain rule by setting $n = 3$, $x_1 = x$, $x_2 = y$, $x_3 = z$, and $c_1 = y = c_2$. □ Corollary 21. Let $K$ be a positive commutative monoid, $\Sigma \cup \{ \phi \}$ a set of FDs, and $x$ a set of variables that contains all variables occurring in $\Sigma \cup \{ \phi \}$. Then $\Sigma \cup \{ x \to x \} \models_K \phi$ if and only if $\Sigma \cup \{ x \to x \} \vdash_{\mathcal{NRA}} \phi$.

# 4.5 Complexity of Derivation

Next, we turn to the complexity of the entailment or derivability problem. In the classical setting, there is a deterministic linear-time algorithm to check whether a given set of FDs entails a given FD (Beeri and Bernstein 1979). In this section, we establish a polynomial-time algorithm for deciding whether a given unary FD can be derived from a given set of CDs and unary FDs by the use of reflexivity, the cycle rule, and the contextual chain rule. It then follows from Theorem 17 that the entailment problem for unary FDs and binary CDs is in polynomial time also in the contextual setting. Lemma 22. Let $\Sigma$ be a set of CDs and unary FDs and let $x, y$ be variables. Then the problem whether $x \to y$ can be derived from $\Sigma$ by a single application of the contextual chain rule (page 8) can be decided in polynomial time with respect to $|\Sigma|$. Proof. The problem can essentially be reduced to reachability in an extended dependency graph of $\Sigma$.
In order to apply the contextual chain rule for $x$ and $y$, there needs to be a sequence of variables $x _ { 2 } , \ldots , x _ { n - 1 }$ and a sequence of variables $c _ { 1 } , \ldots , c _ { n - 1 }$ such that the conditions of the rule are met. This means in particular that there is a “path” of FDs from $x$ to $y$ such that each “edge” $( x _ { i } , x _ { i + 1 } )$ in that path is witnessed by the respective $c _ { i } , c _ { i + 1 }$ in the sense of item 4 of the contextual chain rule. We construct the graph $( V , E )$, where $V$ contains all pairs of variables in FDs in $\Sigma$ and $E$ contains an edge between $a$ and $b$ if there are some $c _ { a } , c _ { b }$ satisfying items 2 and 5 of the contextual chain rule that witness that edge. We then additionally remove edges that do not satisfy item 5 of that rule, and add a start node $s$ and a target node $t$, with edges $( s , ( x _ { 1 } , b ) )$ for all $b$ and $( ( x _ { n } , a ) , t )$ for all $a$. Then there exists an $s$–$t$ path in that graph if and only if a single application of the contextual chain rule yields $x _ { 1 } \to x _ { n }$. This algorithm is developed formally in Algorithm 1. □

Algorithm 1: Algorithm for the advanced chain rule

    Input: unary FDs FD, CDs C, variables x, y
     1  V ← {(a, c) | a, c ∈ Vars}                                         // O(n²)
     2  C_y ← {C ∈ C | y ∈ C}                                              // O(n)
     3  W_y ← {c ∈ Vars | ∃C ∈ C_y : c ∈ C, c → y ∈ FD}                    // O(n²)
     4  E_y ← {((a, c1), (b, c2)) | a → b ∈ FD, {a, c1, b}, {c1, b, c2} ∈ C, c1, c2 ∈ W_y}
              ∪ {((a, c), (y, v)) | a → y ∈ FD, {a, c, y} ∈ C, c ∈ W_y}    // O(n⁶)
     5  forall c1 ∈ W_y do                                                 // O(n)
     6      if {x, c1, y} ∉ C_y then                                       // O(n)
     7          forall (b, c2) ∈ Vars × W_y do                             // O(n²)
     8              E_y ← E_y \ {((x, c1), (b, c2))}                       // O(n⁴)
     9          end
    10      end
    11  end
    12  E_y ← E_y ∪ {(s, (x, c)) | c ∈ W_y} ∪ {((y, c), t) | c ∈ W_y}      // O(n⁴)
    13  if there exists an s–t path in (V, E_y) then                       // O(n⁴)
    14      Accept                                                         // O(1)
    15  else
    16      Reject                                                         // O(1)
    17  end

Lemma 23. Let $\Sigma$ be a set of CDs and unary functional dependencies and let $x , y$ be variables. Then the problem whether $x \to y$ can be derived from $\Sigma$ by a single application of the cycle rule (see page 6) can be decided in polynomial time with respect to $| \Sigma |$.

Proof. Interpret the unary FDs in $\Sigma$ as a graph, where the variables are the nodes and the dependencies are the edges. Compute the transitive closure of that graph, then check for all incoming edges $( a , y )$ in that transitive closure whether $y \to a \in \Sigma$. If $y \to a \in \Sigma$, then accept; else reject. □

Theorem 24. Let $\Sigma$ be a set of CDs and unary functional dependencies. The set of all unary FDs that can be derived from $\Sigma$ by using the reflexivity rule, the contextual chain rule, and the cycle rule can be computed in polynomial time with respect to $\Sigma$.

Proof. In order to compute the unary derivation closure DC of $\Sigma$, we first compute all subsets of size 3 of CDs in $\Sigma$ and add the corresponding CDs to $\Sigma$, so that the contextual chain rule can be applied.
They are derivable by using the reflexivity rule, and there are only polynomially many such subsets. We then proceed iteratively: for all pairs of variables $x , y$ such that $x \to y \not \in \mathrm { D C }$, check whether DC entails $x \to y$ by either of the two other rules. If so, add $x \to y$ to DC. Repeat this process until DC remains unchanged for all remaining variable pairs. There are only polynomially many pairs of variables and each rule takes polynomially many steps to verify, so each pass takes polynomial time. Since there are only polynomially many variables, this derivation closure can be at most polynomially large, implying that it can only change in polynomially many passes of this process. Therefore, the entire process takes polynomial time. □

Corollary 25. Given a set $\Sigma \cup \{ \phi \}$ of CDs and unary FDs, it can be decided in polynomial time whether $\phi$ can be derived from $\Sigma$ by using the reflexivity rule, the cycle rule, and the contextual chain rule.

Corollary 26. Let $K$ be a positive commutative monoid. Given a set $\Sigma \cup \{ \phi \}$ of binary CDs and unary FDs, $\Sigma \vdash _ { K } \phi$ can be decided in polynomial time.
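To make the cycle-rule check of Lemma 23 concrete, the following Python sketch follows the procedure described in its proof: interpret the unary FDs as a directed graph, compute its transitive closure (here with Warshall's algorithm), and accept if some $a$ reaches $y$ while $y \to a$ is itself an FD. This is an illustrative reimplementation under those assumptions, not code from the paper.

```python
def transitive_closure(edges, nodes):
    """Warshall's algorithm on a set of directed edges over the given nodes."""
    closure = set(edges)
    for k in nodes:
        for i in nodes:
            if (i, k) in closure:
                for j in nodes:
                    if (k, j) in closure:
                        closure.add((i, j))
    return closure

def single_cycle_rule_step(fds, y):
    """Condition from the proof of Lemma 23: some incoming edge (a, y) exists
    in the transitive closure of the FD graph, and y -> a is itself an FD."""
    nodes = sorted({v for edge in fds for v in edge})
    closure = transitive_closure(fds, nodes)
    return any((a, y) in closure and (y, a) in fds for a in nodes)

# x -> y together with y -> x forms a cycle through y
fds = {("x", "y"), ("y", "x")}
print(single_cycle_rule_step(fds, "y"))   # True
print(single_cycle_rule_step({("x", "y")}, "y"))  # False: no FD back out of y
```

The transitive closure dominates the cost at $O(n^3)$, which is consistent with the polynomial bound claimed in the lemma.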
Local consistency arises in diverse areas, including Bayesian statistics, relational databases, and quantum foundations. Likewise, the notion of functional dependence arises in all of these areas. We adopt a general approach to study logical inference in a setting that enables both global inconsistency and local consistency. Our approach builds upon pairwise consistent families of K-relations, i.e., relations with tuples annotated with elements of some positive commutative monoid. The framework covers, e.g., families of probability distributions arising from quantum experiments and their possibilistic counterparts. As a first step, we investigate the entailment problem for functional dependencies (FDs) in this setting. Notably, the transitivity rule for FDs is no longer sound, but can be replaced by two novel axiom schemes. We provide a complete axiomatisation and a PTIME algorithm for the entailment problem of unary FDs. In addition, we explore when contextual families over the Booleans have realisations as contextual families over various monoids.
[ "quant-ph", "cs.DB" ]
# 1. Introduction

Climaborough is a research project, co-funded by the European Union and CINEA, aimed at bridging the gap between the design and implementation of urban innovations, tackling climate change and the consequent need for rapid adaptation and mitigation. Specifically, it aims to overcome the bottlenecks in transitioning from prototyping to testing and to market deployment of innovations. It is a four-year project, started on January 1, 2023, coordinated by ANCI Toscana with the participation of 27 additional partners, of which 14 are European cities engaged in their ecological and digital transition. Seven work packages (WP) were defined to tackle different tasks: (WP1) Urban Planning and Climate Neutrality Evaluation, (WP2) Climaborough City Platform, (WP3) CLIMHUBS Setup, Co-Creation and Collaboration, (WP4) Innovative Procurement, (WP5) Climate Sandbox Demonstration in Real Environments, (WP6) Dissemination, Communication and Exploitation, and (WP7) Management. The primary outcome of the project will be the development of a structured process incorporating a set of tactical tools designed in collaboration with domain experts, including an innovative procurement process aimed at accelerating cities' capacity to implement climate transition strategies within urban planning. More specifically, as part of the process, cities are engaged in defining their specific needs, which are addressed through an innovative procurement process managed by ANCI Toscana. This process enables cities to identify and adopt disruptive solutions across various sectors, including energy, mobility, waste management, and the circular economy. These solutions are subsequently implemented and tested using a sandbox methodology, and the resulting data is integrated into the platform.
The impact of these initiatives is then assessed through a Climate Neutrality Framework, which facilitates the estimation of their broader scalability and effectiveness and helps cities decide whether or not to adopt them at scale. To evaluate the progress and effectiveness of the solutions, and of the project in general, it is critical to put in place an infrastructure ensuring that the generated data (at the specific solution level, at the city level, at the project level...) will be monitored and evaluated. This includes defining (1) KPIs related to the solution-specific goals, (2) metrics to estimate the solution's contribution to overarching climate KPIs, and (3) KPIs describing the cities' overall progress towards climate-related goals (such as reaching zero carbon emissions) using aggregated data from solutions and beyond. To help cities assess the effectiveness of implemented solutions and better understand their potential impact on achieving climate neutrality on a larger scale, the Climaborough City Platform will be developed by WP2, led by the Daten-Kompetenzzentrum für Städte und Regionen (DKSR) in collaboration with the Luxembourg Institute of Science and Technology (LIST) and the Institut Mines-Télécom (IMT), and will incorporate a data ingestion pipeline, dashboards for visualizing results, and a digital twin component. The creation of the dashboards is especially important and challenging, as it needs to cater to a variety of users and enable the adaptation of the core dashboard to the specific data and solutions under evaluation in a given city. To complicate matters further, this adaptation sometimes needs to be done by non-technical people in the public administration, as they rely on the visualization to support their decision-making.
In this sense, this paper focuses on describing how the project has followed a low-code and no-code strategy [1, 2] to create these flexible dashboards in an optimal way, highlighting the benefits of this type of technology in complex data manipulation and visualization scenarios such as that of this project. In short, the low-code part is used to speed up the development of the core dashboard components, while a no-code mechanism, embedded in the generated dashboards, allows users to add, remove, and configure the dashboard widgets. This strategy is implemented on top of the low-code platform BESSER [3]. Therefore, we typically refer to these dashboards as BESSER-dashboards. The next sections are structured as follows. Section 2 describes the Climaborough City Platform and its application in Climaborough. Then, Section 3 provides more details on the low-code and no-code contributions, respectively. Afterwards, Section 4 presents the current state of the platform and discusses some of the choices we made based on encountered challenges. Finally, Section 5 concludes this paper and presents the next steps.

# 2. The Climaborough City Platform

The BESSER-dashboards are part of the Climaborough City Platform. This platform is a data-driven system for monitoring and evaluating the effectiveness of urban climate solutions, while tracking progress toward the cities' climate transition goals. It serves as a centralized data aggregator, integrating streams from heterogeneous sources. By consolidating these diverse inputs, the platform provides a comprehensive view of key performance indicators through its dashboards. Platform requirements were defined through co-creation workshops and interviews with cities and experts, in which participants ranked predefined features by priority to ensure that the functional and non-functional needs were met. In the following, we outline the architecture of the Climaborough City Platform, as illustrated in Figure 1.
The Data Ingestion Segment is responsible for gathering and managing data from diverse sources. It integrates historical records, real-time data provided by solution providers, and additional datasets from external climate initiatives such as Copernicus. This layer relies on traditional data collection and processing techniques, ensuring robustness and reliability. Once data is ingested, it is processed in the Analytics Segment, where it is aggregated and analyzed based on Key Performance Indicators (KPIs). These KPIs, developed in collaboration with cities and domain experts, allow stakeholders to measure the effectiveness of their climate strategies. Additionally, this segment integrates a Digital Twin (DT), proposed by the IMT, which enables predictive modeling and scenario analysis. The DT leverages real-time and historical data to simulate the potential impacts of various climate strategies, helping cities anticipate challenges and optimize their policies accordingly. The final layer translates the processed data into actionable insights through visualizations on a dashboard that follows a no-code approach. Its primary goal is to allow cities to interactively explore and visualize climate-related data and KPIs without requiring technical expertise. The no-code interface enables users to create and modify dashboards with a simple drag-and-drop mechanism, significantly lowering the barrier to data-driven decision-making.

Figure 1: Climaborough City Platform overview

This part is where the platform's low-code and no-code innovations take effect. In the following sections, we will explore how this approach enables rapid dashboard deployment, enhances usability and adoption, and empowers cities in their climate transition efforts.

# 3. BESSER-dashboards: Low-code approach to configure no-code dashboards

Our main objective with the BESSER-dashboards is to speed up the development of dashboards and their integration in data-driven projects while guaranteeing that the resulting dashboards can be created and adapted by non-technical stakeholders. As illustrated in Figure 2, our approach is built around BESSER, a robust low-code platform that guides developers through two primary stages: modeling and code generation. The creation of a final dashboard then involves two stages. In the first, technical people select the data and the core features of the dashboard; in the second, non-technical people adapt the dashboard to their specific needs. Let us look at both phases in more detail. During the modeling stage, designers and developers define a data model that forms the foundation of their application. While data modeling still needs to be led by technical people, its abstract nature (thanks to the use of graphical modeling languages) allows for collaboration with non-technical stakeholders, enabling their involvement at an early stage. This model is then fed directly into our automated code generators, which produce a complete backend and frontend environment for a web application consisting of the no-code dashboards and additional data management features. This process eliminates repetitive manual coding and ensures that the generated code is consistent, scalable, and tightly aligned with the specified data structure. In situations where a project already has an existing backend, BESSER can generate only the necessary additional components, ensuring seamless integration without requiring a complete system overhaul. This web application comes with a dashboard front-end preconfigured, via the use of low-code techniques, to be aware of the data model and to include all the necessary connections to read the data from the backend.
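To give a flavor of the model-to-code step, the following deliberately minimal Python sketch derives a database schema and REST endpoint list from a data model. The data-model format, entity names, and emitted artifacts are invented for the example; they do not reflect BESSER's actual metamodel or generator output.

```python
# Hypothetical data model: entity name -> {attribute: SQL type}.
# A real low-code platform would build this from a graphical model instead.
data_model = {
    "Sensor": {"id": "INTEGER PRIMARY KEY", "city": "TEXT", "co2_ppm": "REAL"},
}

def generate_schema(model):
    """Emit one CREATE TABLE statement per entity in the model."""
    statements = []
    for entity, attrs in model.items():
        cols = ", ".join(f"{name} {sqltype}" for name, sqltype in attrs.items())
        statements.append(f"CREATE TABLE {entity} ({cols});")
    return statements

def generate_routes(model):
    """Emit the REST endpoint paths a generated backend would expose."""
    return [f"{verb} /api/{entity.lower()}s"
            for entity in model for verb in ("GET", "POST")]

print(generate_schema(data_model)[0])
# CREATE TABLE Sensor (id INTEGER PRIMARY KEY, city TEXT, co2_ppm REAL);
print(generate_routes(data_model))
# ['GET /api/sensors', 'POST /api/sensors']
```

The point of the sketch is the consistency argument from the text: because both the schema and the API are derived from the same model, they cannot drift apart.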
After this initial configuration, non-technical users can modify all dashboard widgets. BESSER-dashboards embed a no-code philosophy, enabling diverse users such as city planners or sustainability officers to drag and drop widgets for tailored views. Users can select a datasource from a list and associate it with a visualization through simple drag-and-drop actions, eliminating the need for manual coding or technical expertise. The visualization is automatically configured based on the datasource schema for optimal clarity. Additionally, a conversational agent further simplifies interactions, allowing users of all technical backgrounds to adjust their dashboard or query data using natural language.

Figure 2: Overview of BESSER-dashboards integration in Climaborough

In addition to the basic dashboard interaction, an AI-powered multilingual conversational agent is also available on the dashboard. This agent plays two different roles:

1. Help in the no-code strategy by offering a conversational (textual and audio) dashboard interaction that lets users create, modify, and read dashboard content by directly chatting with the agent.
2. Answer data questions. Complementing the visualizations, cities can also ask the agent questions. Thanks to its curated internal knowledge and the use of Retrieval-Augmented Generation (RAG), which integrates Large Language Models (LLMs), the agent is able to provide useful answers. The knowledge body is being created in collaboration with domain experts from other work packages, while also including research and results from other climate initiatives such as NetZeroCities, and Climaborough project deliverables and data. This facilitates both the knowledge transfer from experts and the replication of implemented solutions, given that many of the challenges cities face are not unique. This process is depicted in Figure 3.

# 4. Current state of the Climaborough City Platform

An early version of a data processing module is under development, exploring different technologies to streamline data ingestion and transformation. This module is currently being tested with various approaches, including direct integration from sources like Google Drive to API endpoints. To improve data accessibility and organization, the development of a data catalog is being considered.

Figure 3: Overview of the pipeline to create the RAG-powered agent

Figure 4: Screenshot of the dashboard editor, showcasing the drag-and-drop interface alongside the agent for creating dashboards

At the same time, the BESSER-dashboards leverage a structured backend and an interactive frontend that streamline dashboard creation. The backend follows a model-driven approach, enabling the automatic generation of a REST API and a database schema directly from the defined metamodel, ensuring consistency between data structures and API endpoints. On the frontend, the no-code interface already provides significant flexibility and customization possibilities, as users can drag and drop visual components, position and resize them dynamically, and adapt the design and descriptions of the visualizations (e.g., changing the color of a visualization or its title). The low-code process is already in place, and in collaboration with the cities, we have started modeling the data models corresponding to their needs. Figure 4 contains a screenshot of the dashboard creator, where one can see the possible visualizations that the dashboard supports. The conversational agent-based dashboard creation feature further streamlines the process by allowing users to generate and modify dashboards through natural language commands. Currently in an advanced prototyping phase, efforts are focused on enhancing contextual understanding and enabling more complex customization options. Additionally, the RAG features are in place, providing structured climate-related insights.
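Conceptually, the drag-and-drop interactions described above can be viewed as edits to a declarative widget configuration: dropping a datasource creates a widget entry, and resizing or retitling only mutates that entry. The following sketch is hypothetical; the actual BESSER-dashboards configuration format is not described in this paper, and the field names and datasource names are invented.

```python
from dataclasses import dataclass

@dataclass
class Widget:
    """One dashboard widget: a datasource bound to a visualization on a grid."""
    datasource: str   # name of a backend datasource (invented examples below)
    chart_type: str   # e.g. "line", "bar", "map"
    x: int            # grid column
    y: int            # grid row
    width: int = 4
    height: int = 3
    title: str = ""

# Dropping a datasource on the canvas appends a widget; no user code involved.
dashboard = [Widget("co2_emissions", "line", x=0, y=0, title="CO2 over time")]
dashboard.append(Widget("waste_collected", "bar", x=4, y=0))

# Retitling a widget from the editor (or via the agent) is a config mutation.
dashboard[1].title = "Waste collected per week"

print([w.chart_type for w in dashboard])  # ['line', 'bar']
```

Keeping the whole dashboard as data like this is what makes both the no-code editor and the conversational agent possible: each is just another front-end that rewrites the same configuration.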
Ongoing improvements aim to refine response accuracy and expand document coverage with expert contributions.

# 4.1. Discussion

In this section, we cover some reflections from the use of information system techniques, and in particular low-code/no-code techniques, in the context of the project.

Flexibility of model-based and low-code approaches. One of the initial choices we considered was using an existing dashboard tool for the Climaborough City Platform (e.g., Grafana). However, after careful evaluation, we decided to develop our own solution to maintain full control over its features, customization, and adaptability. Moreover, thanks to following a model-based approach, we could easily change the tech stack targeted by the code generators in case cities had preferences or restrictions regarding the type of infrastructure they wanted to use. Given the evolving requirements of a research project involving so many partners, this was a critical concern that low-code helps us address.

Balancing low-code and no-code approaches. Another crucial consideration was whether to implement a fully low-code or fully no-code solution instead of the mixed one we adopted in the project. A fully no-code approach would have introduced many limitations, as users would have been restricted to a predefined number of templates to build the complete dashboard. A fully low-code approach would have offered full flexibility but required technical expertise during the customization phase. Our mixed approach aims to combine the best of both worlds.

Need for AI-enhanced components. While our dashboard is built on core information systems and model-based techniques, it was inevitable to also add AI techniques to the mix. Users expect them (e.g., in the form of a conversational agent). Therefore, it is clear that, more and more, we need to combine classical engineering techniques with AI ones to build smart software systems [4].
Fortunately, BESSER was already created with this goal in mind, and it was therefore easy to integrate conversational agent development into our low-code process.

Early validation. Preliminary testing by our project partners has demonstrated significant interest in the dashboard's core features. Cities have responded positively to the intuitive nature of the no-code interface, highlighting its ease of use and accessibility. The positive feedback from partner cities highlights how blending low-code for setup with no-code for user interaction makes climate data visualization more accessible and engaging. Cities have embraced the ability to effortlessly shape their own dashboards, giving them direct control over their data without needing technical expertise. This hands-on approach fosters greater involvement and ownership in their climate monitoring efforts.
The EU-funded Climaborough project supports European cities to achieve carbon neutrality by 2030. Eleven cities in nine countries will deploy in real conditions products and services fostering climate transition in their local environment. The Climaborough City Platform is being developed to monitor the cities' overall progress towards their climate goals by aggregating historic and real-time data and displaying the results in user-friendly dashboards that will be used by non-technical experts to evaluate the effectiveness of local experimental initiatives, identify those that yield significant impact, and assess the potential consequences of scaling them up to a broader level. In this paper, we explain how we have put in place a low-code/no-code strategy in Climaborough in response to the project's aim to quickly deploy climate dashboards. A low-code strategy is used to accelerate the development of the dashboards. The dashboards embed a no-code philosophy that enables all types of citizen profiles to configure and adapt the dashboard to their specific needs.
[ "cs.SE", "cs.AI", "cs.CY" ]
# I. INTRODUCTION

Formally specifying software remains one of the most challenging tasks in software engineering. The challenges are apparent: specifications must be well-defined, unambiguous, complete, consistent, and aligned with stakeholder needs. Since the activity is mostly human-centric, more formal specifications pose greater technical challenges, whereas more human-readable specifications are heavily affected by natural-language issues. The emergence of large language models (LLMs) has created an opportunity to automate and enhance RE processes, particularly in drafting Software Requirements Specifications (SRS), validating requirements, prioritizing tasks, and even translating textual requirements into executable software artifacts. Recent review studies suggest that LLMs could transform the software engineering domain across various phases of software development, particularly in code generation, summarization, translation, and quality evaluation [1]–[3]. While the capabilities of LLMs are still being formally investigated, a growing number of reviews suggest that human-supervised, LLM-assisted approaches are more effective for optimising software development efforts [1], [2]. The application of LLMs in RE is also gaining significant traction, with studies exploring their potential for automating key RE tasks such as requirements drafting, validation, prioritization, completeness checking, and transformation into models or code [4]–[10]. Some preliminary evidence suggests LLMs can translate natural language requirements into a target template or semi-formal language, but rigorous evaluations are needed [11]. LLMs demonstrate promise in improving efficiency and reducing manual effort, to the extent of claims that most software development will be reduced to requirements engineering and system testing [9], [12]. However, others see challenges related to accuracy, relevancy, completeness, and domain adaptation [4], [13].
While LLMs can potentially enhance requirements engineering processes [14], their performance in real-world scenarios needs empirical assessment. Factors such as application domains, prompt engineering techniques, and dealing with scale are crucial in determining their usefulness. Moreover, a SWOT analysis of LLM applications in RE highlights the need for better automation and integration into development workflows [14]. A recent review of challenges in applying LLMs to RE tasks further emphasizes the necessity of domain-specific assessments and standard evaluation frameworks [13]. Functional requirements of a software system describe the functionalities a user needs to carry out their intended tasks in support of a specific business process. The description must be clear and consistent, and must include all relevant information needed to drive code development and to validate the software. Functional specifications may take a more or less formal shape, depending on the nature of the software to be developed and the choice of the developers. However, a structured approach is essential to argue for their completeness, consistency, and unambiguous nature. For instance, a comprehensive functional specification can be achieved for domain-specific enterprise software through three key specification artifacts: use cases, workflows, and business rules. Use cases describe interactions between users (actors) and the system in a step-by-step fashion, ensuring all interaction scenarios are explicitly captured [15]–[18]. Business rules define the events, conditions, and action logic depicting system behavior, ensuring system compliance with domain requirements and procedures [19]–[21]. Workflows capture the interaction sequences involving multiple actors for achieving certain desired objectives [22], [23]. These three artifacts complement each other and strengthen functional requirements specification [17].
Given the increasing adoption of LLMs in requirements engineering, evaluating their ability to generate and structure these artifacts can provide insights into their effectiveness in automating and enhancing the software requirements specification process. Specifically, we investigate the quality of functional specifications generated by popular LLMs, such as GPT, Gemini, Claude, and DeepSeek, for developing domain-specific enterprise software, to evaluate and highlight their capabilities and limitations in specifying functional requirements in common structural formulations. This paper presents a case study using large language models (LLMs) to generate functional specifications for a web application, the Mess Management System. The study assessed the quality of LLM-generated use cases, business rules, and collaborative workflows regarding their syntactic and semantic correctness, consistency, non-ambiguity, and completeness compared to the reference specifications, given the zero-shot prompted problem statement. Our results suggest that all four LLMs can create syntactically and semantically correct, non-ambiguous specification artifacts, but they may be inconsistent and differ significantly in completeness. The use cases generated by Claude and Gemini included all the reference use cases. On the other hand, GPT generated the fewest reference use cases, but those it did generate were the most complete among all LLM-generated use cases. Claude's use cases included some redundancy, whereas those from Gemini were precise but with lower recall. Except for Gemini, all other LLMs generated all the reference workflows. Claude-generated workflows were the most complete, followed by those of GPT, Gemini, and DeepSeek. All four LLMs struggled to generate relevant business rules, with DeepSeek generating the most rules but with lower completeness, followed by Gemini, GPT, and Claude.
Overall, Claude generated more complete specification artifacts, whereas Gemini was more precise regarding the generated specifications. We also encountered some instances, mainly from Claude and DeepSeek, of additional, relevant details that can enhance the rigor of the functional specification. This suggests that such LLMs can provide valuable assistance in generating quality specifications. The rest of the paper is organized as follows: we present related work in the next section, establishing the study's motivation. We present the research methodology in Section 3, the case study results in Section 4, and insights into how well each model generated template specifications in Section 5, including the validity of our findings. Finally, we present our conclusions in Section 6 and possible future directions.

# II. RELATED WORK

After demonstrating the successful use of LLMs in assisting coding-level tasks, researchers have been encouraged to use them in other, more involved software engineering phases like RE. Some recent studies have reported early experience with such tasks. For instance, LLMs have been used to generate Software Requirements Specification (SRS) documents [6], prioritize requirements in agile frameworks [8], transform textual requirements into executable code [12], and detect incompleteness in requirement specifications [7]. Ronanki et al. compared requirements generated by five RE experts and ChatGPT over seven different requirements quality attributes and concluded that the ChatGPT-generated requirements were reasonably correct and human-understandable [9]. Lubos et al. employed Llama-2 to assess the quality of software requirements per the ISO 29148 standard. Their study involved software engineers using the LLM's reviews to identify and improve the specifications' issues [5]. Krishna et al.
used GPT and CodeLlama to generate Software Requirements Specification (SRS) documents and compared them to those produced by entry-level engineers [6]. The authors argued that the LLM-generated SRS documents were comparable, and in some cases even better. More specifically, GPT helped identify inconsistencies in requirement specifications. However, the authors also found that the assistance of the other LLM, CodeLlama, was not helpful [6]. While LLMs can potentially help improve RE tasks, their performance in real-world scenarios remains to be established. How domain specificity, prompt engineering techniques, and model scalability influence the successful adoption of LLMs in RE workflows is also unclear. Moreover, a SWOT analysis of LLM applications in RE highlights the need for better automation and integration into software development workflows [14]. A recent review of challenges in applying LLMs to RE tasks further emphasizes the necessity of standard evaluation frameworks [13] for better generalization. We conduct a comparative evaluation of multiple LLMs in generating functional requirements for a web application within a case study framework. Specifying functional requirements using structural artifacts allows us to assess LLMs' ability to generate syntactically and semantically correct, consistent, unambiguous, and complete specification artifacts in a specific domain. By assessing LLMs in a real-world web application scenario, this research can help understand how well these models align with software engineering best practices and where improvements are needed. The results could inform best practices for selecting and fine-tuning LLMs for RE tasks, ultimately leading to more reliable and efficient requirements generation processes.

# III. RESEARCH METHODOLOGY

This section introduces the various elements of our case-study-based research methodology. A single case study design focuses on the Mess Management System.
This software was designed for our institute to automate the business operations of two student messes on campus. Following Agile development methodologies, the system was built using Django for the backend and React.js for the frontend. A detailed functional specification document was already available; it served as the reference specification for evaluating the quality of the functional specification artifacts generated by the four LLMs using suitable prompts.

# A. The Functional Specification

The functional specification consists of three key specification artifacts:

Use Cases [15]–[17] – define the interactions between users (actors) and the system, specifying what functionalities the system provides from a user perspective. Each use case is an asynchronous functionality independently exercised by the actors involved and is documented using the use case template given in Figure 1. Use cases can comprehensively describe the functionality of an interactive software system [18].

Fig. 1. The Use case specification template used in the study

Business Rules [19]–[21] – define the constraints, conditions, and logic governing the system's behavior in different scenarios and use cases. The scope of a business rule may span a use case, an actor, a workflow, or the system. We use the specification format for a business rule given in Figure 2.

Fig. 2. The Business rule specification template used in the study

Fig. 3. The Workflow specification template used in the study

Workflows [22], [23] – represent the sequence of activities or steps to achieve specific functionalities involving multiple actors and their use cases. Workflows are significant for capturing collaborative work that spans multiple use cases across the actors involved. We use the specification format for a workflow given in Figure 3.
Workflow ID: <Unique identifier, e.g., WF-001>
Workflow Name: <Descriptive name of the workflow>
Description: <Brief purpose and scope of the workflow>
Actors Involved: <List of users, roles, or systems participating>
Trigger: <Event or condition that starts the workflow>
Preconditions: <Conditions that must be met before the workflow starts>
Post-conditions: <Conditions that must be true after the workflow completes nominally>

Activity Flow:

| Step | Type | Name | Performed by (Actor) | Description | Next Step |
|---|---|---|---|---|---|
| A1 | Activity | ... | <Actor> | ... | A2 |
| A2 | Activity | ... | <Actor> | ... | D1 |
| D1 | Decision | ... | ... | ... | E1 / A3 |
| E1 | Exit | ... | ... | ... | ... |
| A3 | Activity | ... | ... | ... | ... |

Exceptions:

| Exception ID | Description | Handling Steps |
|---|---|---|
| EXC-1 | ... | ... |

# B. The Case Study Problem - Mess Management System

The Mess Management System supports functionality corresponding to three actors (users): students, the mess caretaker, and the mess warden, each with specific roles and responsibilities. The system manages mess operations through these three actors, providing functionalities such as user registration/deregistration, announcements, food services, billing, menu reviews and feedback, rebate handling, and general reporting. The use case diagram, depicting the actors and their respective use cases, is shown in Figure 4. Examples of the reference specification artifacts are included in Figure 5 for quick reference. The problem statement, the requirement specification documents consisting of the reference use cases, business rules, and workflows, the prompts used to generate the specification artifacts with the four LLMs, the generated specification artifacts, the computed data, and the detailed analysis documents are included in [24].

# C. LLMs used in the study

This study systematically compares four leading contemporary LLMs—GPT, Gemini, Claude, and DeepSeek—in generating requirements for a web application. These models were chosen based on their architectural diversity, training methodologies, and relevance to natural language understanding and generation.
GPT version 4o (OpenAI [25]) – perhaps the most popular contemporary LLM, able to understand users' queries contextually and provide relevant responses. It has been successfully applied to RE tasks such as requirements generation and validation [5], [6], making it a strong candidate for evaluation. (knowledge cutoff date - Oct 2023)

Gemini version 2.0 Flash (Google DeepMind [26]) – a lightweight, high-speed model optimized for low-latency generation. Evaluating Gemini on requirement specification artifact generation provides insights into how well it understands and handles complex, structured requirements compared to other LLMs. (knowledge cutoff date - Jun 2024)

Fig. 4. The Use case diagram of the Mess Management System

Claude version 3.7 Sonnet (Anthropic [27]) – a model developed with a strong focus on safety, factual correctness, interpretability, and user alignment. Claude emphasizes reducing hallucinations, which makes it an important model to assess in the RE context, where the accuracy and completeness of requirements are crucial. (knowledge cutoff date - Oct 2024)

DeepSeek version V3 [28] – a relatively new entrant in the LLM space, known for its transparent development approach. Its inclusion in this study should highlight how research-driven alternative models perform against leading industrial models like GPT, Claude, and Gemini in generating domain-specific requirements. (knowledge cutoff date - Jul 2024)

# D. Research Questions

This study investigates the following research questions:

1) RQ1: How effective are LLMs in identifying and specifying the use cases to be supported by the system while delivering the intended functionalities to its various actors (users)?

2) RQ2: How effective are LLMs in identifying and specifying business rules, i.e., the additional constraints to be met by the system while delivering the intended functionalities to its various actors?
3) RQ3: How effective are LLMs in identifying and specifying workflows involving collaborative work among multiple system actors?

# E. Criteria for Evaluation

The effectiveness of the LLM-generated specification artifacts was measured by evaluating the quality of the generated specifications. We used the following evaluation criteria to assess the quality of the generated use cases, business rules, and workflow specifications.

1) Syntactic & Semantic Correctness: Measures grammatical accuracy, logical structure, proper use of terminology, and whether statements make sense in context.

2) Consistency: Measures internal coherence, ensuring no contradictions exist across different parts of the artifact. Penalizes unnecessary redundancy if it leads to confusion or contradiction.

USE CASE ID: 006
NAME: Apply for Mess Rebate
DESCRIPTION: Student submits a request for a mess rebate due to absence or inability to use the mess service
PARTICIPATING ACTORS: Student
PRECONDITION: Student is also registered for the mess service
MAIN FLOW: Student submits the rebate request [A1] [A2]
ALTERNATE FLOWS:
A1: If the form data is incomplete, Student completes the required fields
A2: If student already has a pending rebate request for the same period
SUBFLOWS: Nil
POST-CONDITIONS (for all exits):
MAIN FLOW: 1. A notification of the request is sent to Mess
A1: Student returns to complete the form
(a)

RULE ID: 004
RULE NAME: Mess Rebate Eligibility
DESCRIPTION: duration of leave are valid and within policy limits.
RULE FORMAT: THEN the Mess Caretaker approves the request
APPLICABLE SCOPE:
EXCEPTION: Caretaker may override the policy limits.
(b)

WORKFLOW ID: 002
WORKFLOW NAME: Mess Rebate Application Process
DESCRIPTION: Process for handling student requests for mess rebates
ACTORS INVOLVED: Student, Mess Caretaker
TRIGGER: Student submits a rebate request
PRE-CONDITIONS: Student is registered for mess service
POST-CONDITIONS: Rebate is approved/rejected, and bill is adjusted if
EXCEPTIONS:

| Exception ID | Description | Handling Steps |
|---|---|---|
| EXC-1 | Rebate duration exceeds maximum allowed period | 1. System automatically limits rebate to maximum allowed period. 2. System notifies student of the adjustment |

(c)

3) Non-Ambiguity: Assesses clarity, ensuring that statements are precise and leave no room for multiple interpretations.

4) Completeness: Evaluates whether the generated specification artifact includes all essential components required for a fully functional and well-defined requirement. Here, the completeness of the generated specification artifact is measured against the available reference specification by computing Precision, Recall, and $F1$ scores, which are defined as follows:

$$
\begin{array}{c}
Precision = \dfrac{TP}{TP+FP} \\[6pt]
Recall = \dfrac{TP}{TP+FN} \\[6pt]
F1\ Measure = 2 \times \dfrac{Precision \times Recall}{Precision + Recall}
\end{array}
$$

Where True Positives ($TP$) is the number of specification elements included in both the generated and the reference specification; False Positives ($FP$) is the number of additional (unnecessary or incorrect) specification elements included in the generated specification that were not present in the reference specification; and False Negatives ($FN$) is the number of necessary specification elements missing from the generated specification that were present in the reference specification.

Figure 6 shows how each evaluation criterion was assessed and measured on specific aspects of each functional specification artifact.
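To make the computation concrete, the completeness scores follow mechanically from the three element counts; a minimal sketch (the counts below are hypothetical, not taken from the study's data):

```python
def completeness_scores(tp, fp, fn):
    """Precision, Recall, and F1 from specification-element counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical artifact: 8 elements match the reference, 2 are extra, 2 are missing
p, r, f1 = completeness_scores(tp=8, fp=2, fn=2)
print(round(p, 2), round(r, 2), round(f1, 2))  # 0.8 0.8 0.8
```

An artifact with many redundant or incorrect extras thus loses Precision, while one that omits reference elements loses Recall, which is exactly the trade-off the results sections discuss.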
The first three of the four evaluation criteria were assessed on a ten-point scale, as given in Table I. The fourth criterion, the completeness of the generated specification, was assessed by comparing the generated specification with the reference specification, computing the $TP$, $FP$, and $FN$ scores in the defined manner, and then computing the Precision, Recall, and $F1$ scores.

| Criteria | Parameters | Use Cases | Business Rules | Workflows |
|---|---|---|---|---|
| Syntactic & Semantic Correctness | Grammatical correctness | Is the language grammatically correct, without mistakes or awkward sentence structures? (all three artifacts) | | |
| | Logical correctness | Do the interaction sequences follow a natural and logical order? | Do the rules logically align with system operations? | Are activities arranged logically and sequentially? |
| | Terminology accuracy | Are domain-specific terms for the specification artifact used correctly? (all three artifacts) | | |
| Consistency | Internal consistency | Do use case elements contradict themselves? | Do rule elements conflict with one another? | Does the workflow follow a rational sequence without contradictions? |
| | External consistency | Do use cases contradict each other? | Do rules contradict one another? | Do workflows contradict or conflict with each other? |
| | Cross-specification consistency | Do use cases align with workflows and business rules? | Do business rules align with existing workflows and policies? | Does the workflow align with established business rules? |
| Non-Ambiguity | Clarity & specificity | Do use cases have clear and sufficient information to be actionable? | Are rules explicitly stated without vague conditions? | Are actions clearly defined and free of ambiguity? |
| | Avoidance of implicit assumptions | Are assumptions clearly stated rather than implied? | Do rules clearly define conditions rather than assume knowledge? | Do actions/decision points involve any implicit assumptions? |
| Completeness = Similarity to Reference (Precision, Recall, and Accuracy) | Alignment with reference artifact (TP) | Compare against the reference specification for alignment | Do rules align with the reference business rules? | Does the workflow match reference processes? |
| | Missing information (FN) | Identify missing steps, conditions, actors, or flows | Is the business rule less restrictive than the reference business rule? | Does the workflow miss any detail which should be present? |
| | Incorrect and useless information (FP) | Does the specification artifact have any redundant or incorrect details? (all three artifacts) | | |

Fig. 6. The evaluation criteria and assessed parameters of the generated specification artifacts

TABLE I
SCORE ASSIGNED FOR SYNTACTIC AND SEMANTIC CORRECTNESS, CONSISTENCY, AND NON-AMBIGUITY CRITERIA

# F. Research Procedure

The study assesses the abilities of the four LLMs—GPT, Claude, Gemini, and DeepSeek—to generate and specify functional specifications using three key specification artifacts: use cases, workflows, and business rules, against the problem statement of the Mess Management System. The problem statement describes the functionality to be exercised by the three actors to satisfy the system objectives and purposes. The evaluation process was designed to minimize biases and ensure a fair comparison of the models.

After some initial interactions with the four LLMs, we settled on a zero-shot prompt approach to query each LLM and generate all three functional specification artifacts against the problem statement. The four LLMs were independently prompted to create functional specifications for the problem statement. The models were instructed to structure their output explicitly in the form of the relevant template descriptions shown in Figures 1, 2, and 3, respectively. In pair-evaluation mode, we carried out the scoring process in two phases.
In the first phase, two authors used the three generated specification artifacts from the four source LLMs and computed the scores on each of the four evaluation criteria. All the parameters given in Figure 6 against each criterion for each reference specification artifact (a use case, a business rule, or a workflow) were assessed on a scale of zero to ten and then averaged to compute the final score against that criterion. In the second phase, two other authors reviewed and finalized scores. The two groups further discussed significant disagreements between scores and were allowed to update their respective scores. We assessed the inter-rater agreement by computing the Intraclass Correlation Coefficient (ICC) [29] between the scores obtained in the two phases, which was 0.82, indicating a significant level of agreement between the scores. To eliminate evaluator bias, a blind review process was implemented. The generated outputs were anonymized, meaning that the two evaluators were not informed about which LLM had produced which set of artifacts. The responses were randomized and assigned unique identifiers unrelated to the source model. This approach ensured that evaluators focused solely on the quality of the requirements generated rather than any preconceived notions about a specific LLM’s capabilities. # IV. RESULTS AND ANALYSIS We undertook the case study as per the procedure given in Section III-F. We collected the data corresponding to the four evaluation criteria against the functional specification artifacts generated by each of the four LLMs. Table II shows the number of reference specification artifacts (use cases, business rules, and workflows) generated by each of the four LLMs against the total number of specifications generated. We computed the scores against each criterion for the reference artifacts generated by each of the four LLMs. 
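The paper does not state which ICC form was used; as a rough illustration of the idea, a one-way random-effects ICC(1) over an artifacts-by-phases score matrix can be computed as follows (the score matrix below is made up for illustration, not the study's data):

```python
import numpy as np

def icc1(scores):
    """One-way random-effects ICC(1) for an (n_targets, k_raters) score matrix."""
    r = np.asarray(scores, dtype=float)
    n, k = r.shape
    grand_mean = r.mean()
    row_means = r.mean(axis=1)
    # Between-target and within-target mean squares
    msb = k * ((row_means - grand_mean) ** 2).sum() / (n - 1)
    msw = ((r - row_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Illustrative phase-1 vs. phase-2 scores for five artifacts
scores = [[8, 9], [7, 7], [9, 9], [6, 7], [8, 8]]
print(round(icc1(scores), 2))  # 0.83
```

Values near 1 indicate strong agreement between the two scoring phases; the study's reported 0.82 falls in that range.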
All the relevant documents for the case study, including the zero-shot prompts, reference specifications, generated specifications, and collected data, are available in [24] for reproducibility and transparency. In the following sections, we discuss the performance of each of the four LLMs based on the scores computed for the four evaluation criteria against the generated specification artifacts.

We used F1 scores for the completeness evaluation. We observed that false positives were contributed by two major types of discrepancies: irrelevant (or incorrect) details and redundant details included in the generated specification. We found that the LLMs' outputs were distinctly dominated by these two types of discrepancies, with the former being the more serious to deal with. To investigate their individual contributions to the completeness scores, we computed Precision (Redundancy) and Precision (Incorrectness) scores separately for the two types of discrepancies present in the generated artifacts.

# A. Use case Identification and Specifications

Based on the data collected for all twenty-five use cases, the four evaluation criteria and the number of use cases generated by the four LLMs are given in Figure 7. Gemini and Claude identified all the relevant use cases, whereas GPT and DeepSeek identified only a subset. All four LLMs' use case specifications had remarkably high syntactic and semantic correctness, consistency, and non-ambiguity values. Except for DeepSeek, the other three LLMs generated significantly complete specifications compared to the reference use case specification. Figure 8 shows the results of a detailed analysis of the generated specifications on the completeness aspect. Though GPT failed to generate some essential use cases, its generated specifications had relatively high Recall and Precision, without missing important details or being redundant. While DeepSeek identified more use cases than GPT, it missed some more significant ones.
Although less redundant, DeepSeek scored lowest on the various completeness scores because its generated specifications missed some relevant information. Claude generated all the reference use cases with high Recall, but redundant information compromised its Precision. On the other hand, Gemini also generated all the reference use cases, but its Recall was lower due to missing information. Claude and Gemini had similar Precision scores on incorrect information in the generated specifications. For argument's sake, Claude could have outperformed all the other LLMs on the resulting F1 scores had we discounted the redundant information in the generated use cases.

Fig. 7. Comparison based on Use case specification generated by the four LLMs

Fig. 8. Completeness of the Use case specifications generated by the four LLMs

In some instances, Claude also provided extra but relevant information that could enhance the rigor of the specification. Examples include suggesting an additional alternate flow for form validation, reasoning notes about accepting or rejecting requests, and timings for monthly bill calculation in preconditions. We did not account for this information when computing redundancy scores.

# B. Business Rules Identification and Specification

The number of reference business rules generated by the four LLMs and the scores obtained against the four evaluation criteria are given in Figure 9. Once again, we found the generated business rules to be high on syntactic and semantic correctness and non-ambiguity. GPT and Gemini produced highly consistent rules, whereas Claude generated rules that included highly redundant information, scoring the lowest on consistency. All four LLMs struggled to create relevant business rules, a striking difference from the other two LLM-generated functional specification artifacts. DeepSeek generated ten business rules, including five reference business rules, scoring the highest among all the LLMs.
Claude generated twenty-five, of which we found only two rules relevant to the reference specification. The rest of the rules were either irrelevant or redundant with information already included in other generated specifications. We made similar observations about all the other LLMs on the generated business rules, which suggests their limitations in creating domain-specific business rules for a given scope.

TABLE II
NUMBER OF SPECIFICATION ARTIFACTS GENERATED BY THE FOUR LLMS

Fig. 9. Comparison based on Business Rule specifications generated by the four LLMs

Fig. 10. Completeness of the Business Rule specifications generated by the four LLMs

Figure 10 shows the results of a detailed completeness analysis of the generated business rule specifications. Although Claude generated only two reference business rules, it scored highest on the completeness score and achieved the highest Precision among the four LLMs. On the other hand, DeepSeek generated the maximum number of reference rules with the highest Recall value; its Precision was lower than Claude's due to some extra irrelevant information in the generated specification.

Fig. 11. Comparison based on Workflow specifications generated by the four LLMs

Fig. 12. Completeness of the Workflow specifications generated by the four LLMs

# C. Workflows Identification and Specification

The number of workflows generated by the four LLMs and the scores obtained against the four evaluation criteria are given in Figure 11. It confirms the syntactic and semantic correctness and unambiguous nature of the LLM-generated workflow specifications. These specifications were also mostly consistent. Except for Gemini, all the other LLMs generated all four reference workflows. Figure 12 shows the results of a detailed analysis of the generated workflow specifications on the completeness aspect. Claude outperformed all the other LLMs with the highest Recall. Again, its low Precision was attributed to redundant information.
Gemini missed one workflow, but its Precision was the highest among the workflows generated by all the LLMs. GPT also produced highly precise workflows, but its Recall was the lowest among the four LLMs due to missing relevant information. Again, Claude provided extra but useful information that could enhance the rigor of the specification, for example, reminding the caretaker of pending requests when no action has been taken. As before, we did not account for such information when computing redundancy scores.

# V. DISCUSSIONS

The Mess Management System, although smaller in scope than typical enterprise resource planning (ERP) solutions, arguably qualifies as a domain-specific enterprise software application. It supports collaboration among multiple users in a role-based administrative setup, delivering intended functionality such as meal planning, request handling, and billing within an institutional context, through defined organizational business processes and rules.

Here, we acknowledge that LLM outputs are sensitive to the phrasing of the prompt. To mitigate this factor, we refined our prompts through initial testing and ensured the final prompts were task-specific, straightforward, minimal, and unambiguous. The same prompts were given to all the competing LLMs. Still, we admit that a more detailed analysis of the diversity of prompt phrasing or styling could provide valuable insights on this account.

We found that all the models generated mostly syntactically and semantically accurate responses with unambiguous specifications, indicating their ability to deal with the most challenging aspects of natural language processing. All the models could correctly handle cross-specification references, indicating that they could connect the generated artifacts logically. However, the generated specifications also had logical inaccuracies and/or redundant information and differed significantly in completeness.
Based on our observations in the study, we summarize the nature of the specifications generated by each of the four models as follows.

GPT – generated specifications based on the information explicitly included in the problem statement and was somewhat conservative in providing additional details. It was therefore more precise and less redundant in generating specifications. However, this conservative approach caused it to miss typical scenarios and prevented it from providing more relevant details.

DeepSeek – seemed to follow a more generalizing approach, relating the problem's details and context to similar situations. Accordingly, it suggested more possible options while generating artifacts such as business rules. However, some of the additional information turned out to be irrelevant or incorrect in the specific context, making it less precise when generating artifacts such as use cases and workflows.

Gemini – generated compact and precise information in use cases and workflows but failed to recognize the applicable business rules. It detected the use cases precisely. Its compact style may cause it to miss information, evident from its poorer Recall. It otherwise seemed to follow a conservative approach similar to GPT, providing additional details only when explicitly prompted.

Claude – provided more detailed and relevant use case and workflow specifications by enumerating a larger number of alternate business scenarios. It generated all the reference use cases and workflows, elaborating each step in detail, which resulted in the highest Recall scores. However, this approach also introduced redundancy, which led to lower Precision. At times, it provided additional yet relevant candidate details that could have been included to further enhance the specification.
While evaluating the completeness of the LLM-generated specification artifacts against the reference specification, we found that these LLMs generated some extra but relevant information that was not in the reference specification and did not affect the logical flows, such as form validation, or offered helpful insights, such as precautions an actor should take while performing an activity. Accordingly, we ignored such information while computing the scores. We also encountered additional helpful information that was not in the reference specification. For instance, the specification for the make announcement use case generated by Claude included sending an announcement to a target group rather than to the default group. Another example involves DeepSeek suggesting that special food requests be sent at least three days in advance, as per Business Rule ID 013. All such instances, along with the models' high Precision and Recall, suggest that LLMs can provide significant assistance in requirements generation for similar applications. We followed a blind review process to eliminate evaluator bias.

Although this case study yielded valuable and significant insights into the capabilities of these LLMs in generating requirements in a structured specification format, we also acknowledge the limited generalizability of the observations. The observations are based on a small but real software system with a small number of use cases, business rules, and workflows, and the workflows are of low-to-moderate complexity. In these circumstances, the observations are evidence of the kind of assistance that can be obtained from different LLMs in requirements specification. At the same time, our findings confirm LLMs' capabilities for generating high-quality requirement specifications, as observed in other studies [5], [6], [9].
We summarize the key findings of the study as follows:

1) All the models could generate syntactically and semantically correct, non-ambiguous functional specification artifacts. However, they differed significantly in consistency (including redundancy) and completeness.

2) Overall, Claude generated more complete specification artifacts, whereas Gemini was more precise in the specifications it generated.

3) All the models struggled to generate relevant business rules.

4) Some models provided additional, yet relevant, candidate details that could be included to further enhance the specification.

5) LLMs can significantly assist in generating and specifying software requirements. Knowing the nature of the LLM to be used for this task can be useful.
# VI. CONCLUSIONS

As in many other disciplines, large language models (LLMs) have significantly impacted software engineering by helping developers generate the required artifacts across various phases of software development. This paper presented a case study comparing the performance of the popular LLMs GPT, Claude, Gemini, and DeepSeek in generating functional specifications, comprising use cases, business rules, and collaborative workflows, for a web application, the Mess Management System. The study evaluated the quality of the LLM-generated use cases, business rules, and collaborative workflows in terms of their syntactic and semantic correctness, consistency, non-ambiguity, and completeness compared to the reference specifications, against the zero-shot prompted problem statement. Our results suggest that all four LLMs can specify syntactically and semantically correct, mostly non-ambiguous artifacts. Still, they may be inconsistent at times and may differ significantly in the completeness of the generated specification. Claude and Gemini generated all the reference use cases, with Claude achieving the most complete but somewhat redundant use case specifications. Similar results were obtained for the workflows. However, all four LLMs struggled to generate relevant business rules, with DeepSeek generating the most reference rules but with less completeness. Overall, Claude generated more complete specification artifacts, while Gemini was more precise in the specifications it generated.
# 1 Introduction

With global economic integration and increasing air transport demand, the aviation industry faces dual pressures from accelerated technological advancement and rising safety standards. Aviation theory training serves as the core foundation for aviation safety operations, providing essential theoretical support to pilots, maintenance personnel, and other professionals handling complex flight scenarios. Traditional aviation theory training is based on standardized materials, simulator-based exercises, and instructor-led knowledge transfer. However, these methods face significant limitations, including limited pedagogical resources, delayed knowledge updates, and a lack of specialized question-answering systems. These shortcomings become particularly evident when addressing contemporary challenges such as dynamic decision-making in emerging scenarios, rapidly evolving knowledge requirements, and diverse personalized learning needs.

The breakthrough in the quality of content generated by large language models (LLMs) brings opportunities for innovation in aviation theory training. Using LLMs, the aviation training sector can address personnel shortages while simultaneously reducing operational costs and improving pilot training efficiency. LLMs are predominantly pre-trained on large-scale general-purpose datasets. Although such broad training equips models with robust general language understanding, they underperform on tasks requiring domain-specific expertise. For example, in scientific text processing [1], models must be capable of comprehending complex scientific terminology, concepts, and methodologies to generate accurate responses. Similarly, in e-commerce search systems [2], mastery of domain-specific terminology is crucial for generating relevant results and recommendations.
This requirement for specialized domain competency extends equally to healthcare applications [3]: large language models need to accurately understand medical terminology, diagnoses, treatment plans, and drug interactions. More notably, applications such as biomedical question answering and medical report summarization rely significantly on knowledge drawn from external sources of professional medical literature [4].

Aviation theory training involves extensive specialized terminology and low-frequency technical vocabulary, which are underrepresented in generic training data, leading to challenges in model comprehension and application. Furthermore, LLMs trained on open-source internet data inevitably inherit biases and inaccuracies. Although effective for general tasks, such data biases may propagate incorrect or unsafe outputs in high-stakes domains like pilot training. Moreover, since most pre-training data are several years old, they cannot meet the timeliness requirements of the aviation training field. Thus, adapting general-domain pre-trained models to aviation-specific scenarios while ensuring output accuracy and timeliness remains a critical challenge.

This paper proposes RALA-DPO for constructing an aviation theory training LLM by integrating pre-trained models, direct preference optimization (DPO) fine-tuning, and retrieval-augmented generation (RAG). First, we construct a DPO dataset in the aviation training domain for training large language models. Second, DPO fine-tuning is introduced to improve the accuracy of LLM responses. Furthermore, RAG is adopted to combine the fine-tuned generative model with a retrieval model, effectively retrieving relevant information from external knowledge bases and providing accurate, high-quality answers through the generative model.

# 2 Related work

The field of aviation theory training imposes stringent requirements for safety, efficiency, and standardized operations.
Traditional aviation training relies on standardized teaching materials, simulator training, and the imparting of human experience. Derin [5] designed and implemented a simulation-based aviation English course to improve learning gains. However, when addressing decision-making in emergency scenarios, dynamic knowledge updates, and personalized learning needs, there are often problems such as limited teaching resources and outdated knowledge. In recent years, numerous studies have attempted to apply new technologies to aviation. Sihai Li [6] designed an aircraft maintenance training platform based on virtual reality technology to enhance the efficiency and quality of training for maintenance personnel.

Recent advances in Large Language Models (LLMs) have established the technological foundation for their application in specialized domains. Zhang [7] fine-tuned models using two financial datasets, allowing them to focus primarily on financial classification tasks. Significant progress has also been made in education and interactive teaching through LLMs. For example, CyberQ [8] integrates AISecKG [9] to combine static knowledge embedding with dynamic knowledge injection. Using structured knowledge representation and real-time updates, it generates intelligent question-and-answer outputs aligned with cybersecurity best practices. The SocraticLM model [10], fine-tuned on the SocraticTeach dataset, enhances student guidance in critical thinking and problem-solving. Additionally, the KnowGPT framework [11] converts diverse prompt templates into natural language prompts interpretable and usable by language models. This improves LLM performance in domain-specific tasks while significantly reducing API costs. Furthermore, Qwen [12] achieves comprehensive advances in long-text processing and multi-task capabilities through core innovations like its Dual Chunk Attention (DCA) mechanism and Mixture-of-Experts (MoE) architecture.
These abilities—including in-context learning and multimodal fusion—now offer methodological support for complex knowledge reasoning in specialized fields such as aerospace.

While pre-trained LLMs exhibit general language understanding capabilities, their outputs may not satisfy domain-specific requirements. Through fine-tuning, models can assimilate domain knowledge and adjust generation styles, thereby significantly enhancing task accuracy and consistency. Ouyang [13] transformed general-purpose LLMs into instruction-following conversational agents via multi-round supervised fine-tuning (SFT) in InstructGPT. Their key innovation was constructing high-quality datasets of instruction-response pairs to align model outputs with human expectations. However, SFT's reliance on annotated data struggles to resolve complex preference conflicts. Yao [14] proposed chain-of-thought prompting, which introduces intermediate reasoning steps to help models decompose complex tasks into manageable components. This approach enables LLMs to utilize their internal knowledge more effectively through explicit step-by-step reasoning, improving performance on tasks requiring logical inference, multi-step computation, or decision-making. OpenAI's Reinforcement Learning from Human Feedback (RLHF) framework [15] achieved value alignment through reward modeling and proximal policy optimization but faced challenges of training instability and high computational costs. Addressing this, Rafailov [16] introduced DPO, which reformulates preference learning into single-stage loss-function optimization via implicit reward modeling, mathematically proving equivalence to RLHF while reducing training costs by $75\%$. Meanwhile, large language models may produce erroneous outputs or hallucinations due to training data biases, knowledge obsolescence, or insufficient domain-specific knowledge.
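The single-stage reformulation introduced by DPO can be made concrete with a small sketch of the per-pair loss computed from sequence-level log-probabilities (a minimal illustration; the function and variable names are ours, not from [16], and the default $\beta$ is an arbitrary choice):

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Negative log-likelihood form of the DPO loss for one preference pair.

    logp_*     : log pi(y|x) under the policy being trained
    ref_logp_* : log pi_ref(y|x) under the frozen reference model
    beta       : temperature controlling deviation from the reference policy
    """
    # Implicit rewards beta * log(pi / pi_ref); the Z(x) term cancels
    # when the two rewards are subtracted.
    reward_w = beta * (logp_w - ref_logp_w)
    reward_l = beta * (logp_l - ref_logp_l)
    margin = reward_w - reward_l
    # -log sigma(margin), written stably as log(1 + exp(-margin)).
    return math.log1p(math.exp(-margin))

# If the policy already prefers y_w more than the reference does,
# the margin is positive and the loss falls below log 2.
loss = dpo_loss(logp_w=-5.0, logp_l=-9.0, ref_logp_w=-6.0, ref_logp_l=-6.0)
```

Averaging this quantity over a batch of preference pairs yields the training objective; no separate reward model or RL loop is required, which is the source of the cost savings cited above.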
RAG technology effectively mitigates these hallucination and knowledge-latency issues through synergistic optimization between external knowledge bases and generative models. Recent advancements in RAG technology have progressed along two main trajectories: retrieval quality enhancement and knowledge-generation alignment. Representative works include HyDE [17], which improves query reformulation through hypothetical answer generation, and Atlas [18], which employs contrastive learning to align the semantic spaces of retrievers and generators. For vertical domain applications, Self-RAG [19] implements dynamic retrieval timing decisions using reflection tokens, while RA-DIT [20] enhances domain adaptability via a two-stage adaptation process involving domain knowledge infusion followed by instruction tuning. Ren [21] proposed a RAG-aided Causal Identification (RACI) model that integrates the large language model approach. However, using DPO fine-tuning or RAG alone cannot simultaneously meet the aviation theory training field's requirements for accuracy and timeliness in generated content. Therefore, this paper proposes RALA-DPO, a construction framework for a large aviation theory training knowledge model. This framework improves answer accuracy through DPO fine-tuning and prioritizes retrieving the latest authoritative documents before generating answers via RAG. This ensures that model outputs are always based on current valid knowledge, thereby guaranteeing timeliness.

# 3 Background

The Qwen model series has made remarkable progress in architectural design, training strategies, and performance optimization, providing a strong technical foundation for fine-tuning in professional domain question-answering tasks. The Qwen2.5 [12] series addresses the positional encoding bottleneck of traditional Transformer models in long-text processing through Dual Chunk Attention (DCA) technology.
DCA divides long sequences into chunks and remaps relative positional relationships through a dual-chunk attention mechanism, enabling the model to efficiently handle hundreds of thousands of tokens of context. For example, the Qwen2.5-7B-Instruct-1M model only needs to be trained at a length of 32K to extrapolate to 1M-length tasks and achieves nearly $100\%$ accuracy in complex tasks such as key retrieval. In addition, the introduction of sparse attention optimization significantly improves inference speed, with speeds increasing by 3.2-6.7 times when supporting long-text inputs, making it suitable for long-document analysis needs in professional fields.

The pre-training phase of Qwen2.5 [22] adopts a hierarchical optimization strategy. First, it utilizes 18 trillion high-quality tokens of pre-training data (157% more than the previous generation), covering professional fields such as mathematics and programming, and employs synthetic data generation technology to enhance data diversity. In long-text processing, the model adopts progressive context-expansion training: starting with a sequence length of 4K, it gradually expands to 1M tokens, and the DCA technique reduces time complexity, solving the memory bottleneck of traditional Transformer architectures for long sequences. In the post-training phase, multi-stage reinforcement learning (DPO and GRPO algorithms) is combined to optimize capabilities such as instruction following and logical reasoning, enhancing output stability. Through strict data filtering and mixed-ratio optimization, the model shows significant improvements in tasks such as commonsense reasoning and logical deduction. This provides a rich knowledge base for fine-tuning in professional domains. For this reason, this paper selects the Qwen model as the base model for fine-tuning to meet the requirements of the aviation theory training field for generated content.
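The chunked remapping idea behind DCA can be illustrated with a toy relative-position computation (a conceptual sketch only; the names and the simple clamping rule are our illustrative assumptions, not the actual Qwen2.5 implementation, which distinguishes intra-chunk, inter-chunk, and successive-chunk attention):

```python
def chunked_relative_position(q_idx: int, k_idx: int, chunk_size: int) -> int:
    """Toy illustration of chunk-based relative-position remapping.

    Within a chunk, the ordinary relative distance q_idx - k_idx is kept.
    Across chunks, the distance is clamped so it never exceeds the range
    the model saw during training on short (chunk-sized) sequences.
    """
    same_chunk = (q_idx // chunk_size) == (k_idx // chunk_size)
    if same_chunk:
        return q_idx - k_idx
    # Inter-chunk: cap the distance at chunk_size - 1 so positional
    # encodings stay inside the trained range.
    return min(q_idx - k_idx, chunk_size - 1)

# Intra-chunk distances are exact; long-range distances are capped.
print(chunked_relative_position(5, 3, chunk_size=8))    # same chunk
print(chunked_relative_position(100, 3, chunk_size=8))  # clamped
```

The point of the sketch is only that remapped positions never leave the trained range, which is what allows extrapolation from 32K training lengths to much longer contexts.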
# 4 Methodology

To address the issues of insufficient accuracy and timeliness of LLMs in the field of aviation theory training, this paper proposes RALA-DPO. We employ DPO model fine-tuning technology to refine Qwen, and integrate RAG to develop an aviation training-oriented LLM. DPO constrains the model to prefer professional responses in terms of generation preferences, ensuring the accuracy of model output, while RAG ensures the timeliness of information from the knowledge source. The collaboration of the two significantly enhances the professionalism and timeliness of the output. The RALA-DPO framework is illustrated in Figure 1.

# 4.1 Model Refinement Through Direct Preference Optimization

This paper selects the open-source pre-trained large language model Qwen and adapts it to the specialized domain of aviation industry professional training through efficient fine-tuning techniques. Specifically, we employ Direct Preference Optimization for model refinement. DPO directly optimizes language model policies using human preference data to align model generation behaviors, while simultaneously mitigating hallucination issues in generated content. This approach enables more efficient and targeted processing of preference data, ensuring enhanced alignment with human evaluators' priorities in generated outputs. By eliminating the need for explicit reward modeling and iterative RL-based policy updates, DPO achieves superior training stability and computational efficiency compared to conventional PPO-based frameworks, while maintaining precise control over the domain-specific response quality and factual accuracy required for aviation theory training scenarios.

Fig. 1. When a user submits a query, the system first performs semantic embedding on the query to generate a corresponding semantic embedding vector. Subsequently, cosine similarity is used to calculate the vector's similarity to vectors in the vector database. This identifies the context most relevant to the query.
These contexts are then substituted into a predefined prompt template. Finally, the augmented prompt, along with the original query, is input into a generative model trained using DPO to produce the reply.

For each input prompt $x$, we establish a pairwise comparison scenario containing two distinct outputs: a preferred response $y_w$ and a dispreferred response $y_l$. According to the Bradley-Terry model, the preference probability $P$ can be formulated as:

$$ P(y_w \succ y_l \mid x) = \frac{\exp(R(x, y_w))}{\exp(R(x, y_w)) + \exp(R(x, y_l))}, $$

where the reward function $R(x, y)$ can be analytically expressed through its policy model $\pi$ and reference model $\pi_{\mathrm{ref}}$, thereby enabling direct optimization of the policy model on preference data under proper parametrization. Specifically, the reward function can be formulated as:

$$ R(x, y) = \beta \log \frac{\pi(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)} + \beta \log Z(x). $$

In this formulation, $\beta$ serves as the temperature parameter controlling the degree of policy deviation from the reference model, while $Z(x)$ acts as a normalization constant to ensure training stability. Building on this theoretical insight, the policy model can be directly optimized using human feedback data. By substituting the reward expression into the Bradley-Terry preference probability equation, we derive the simplified preference likelihood formulation:

$$ P(y_w \succ y_l \mid x) = \frac{1}{1 + \exp\left( \beta \log \frac{\pi(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)} - \beta \log \frac{\pi(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} \right)}. $$

Given the constructed preference dataset $\mathcal{D} = \{(x_i, y_w^{(i)}, y_l^{(i)})\}_{i=1}^{N}$, we optimize the policy parameters by maximizing the log-likelihood of observed preferences:

$$ \mathcal{L}(\pi) = \sum_{i=1}^{N} \log P(y_w^{(i)} \succ y_l^{(i)} \mid x_i). $$

Substituting the simplified preference probability expression into the objective function, we derive the following DPO loss function:

$$ \mathcal{L}(\pi) = \sum_{i=1}^{N} \log \sigma \left( \beta \log \frac{\pi(y_w^{(i)} \mid x_i)}{\pi_{\mathrm{ref}}(y_w^{(i)} \mid x_i)} - \beta \log \frac{\pi(y_l^{(i)} \mid x_i)}{\pi_{\mathrm{ref}}(y_l^{(i)} \mid x_i)} \right), $$

where $\sigma$ denotes the logistic sigmoid function. This maximization objective is equivalently reformulated as minimizing the negative log-likelihood:

$$ \mathcal{L}_{\mathrm{DPO}} = -\mathbb{E}_{(x_i, y_w, y_l) \sim \mathcal{D}} \left[ \log \sigma \left( \beta \log \frac{\pi(y_w \mid x_i)}{\pi_{\mathrm{ref}}(y_w \mid x_i)} - \beta \log \frac{\pi(y_l \mid x_i)}{\pi_{\mathrm{ref}}(y_l \mid x_i)} \right) \right]. $$

The gradient of the loss function simultaneously adjusts the probability distributions of both preferred responses $y_w$ and dispreferred responses $y_l$ through the following mechanism:

$$ \nabla_\theta \mathcal{L} \propto -\beta \left( \frac{\pi(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} \nabla_\theta \log \pi(y_w \mid x) - \frac{\pi(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)} \nabla_\theta \log \pi(y_l \mid x) \right).
$$

Through this dual-ascent optimization of the DPO loss, the model directly internalizes human preference patterns while maintaining the fundamental language competencies preserved in $\pi_{\mathrm{ref}}$. The constrained update mechanism ensures that preference alignment occurs within a trust region of the reference policy, effectively balancing three critical objectives: maximizing human preference satisfaction, preserving linguistic coherence and domain knowledge, and minimizing hallucination through reference-policy anchoring. Through optimization of this loss function, the model directly learns to generate preferred responses through the dual mechanisms of preference maximization and dispreference minimization.

# 4.2 Optimization Strategies for RAG in Aviation Theory Training Domain

Given the critical demands for timeliness in aviation theory training knowledge, this paper employs RAG technology that integrates generative models with retrieval mechanisms. The approach effectively retrieves relevant information from scalable external knowledge bases and delivers precise, high-quality responses through generative models, thereby addressing the limitations of conventional generative models in information reliability and knowledge coverage. This methodology enhances the reliability of large language model outputs and response accuracy while providing verifiable reference sources, achieving a substantial reduction in hallucination issues inherent to large language models.

First, we establish an aviation theory training knowledge base by ingesting extensive unstructured domain-specific data, including the Flight Instructor Theoretical Manual and the Basic Rules of Flight of the People's Republic of China. After these data are processed into standard text data, they are segmented into multiple knowledge fragments.
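The segmentation step can be sketched as a simple sliding-window chunker (a minimal sketch; the chunk size, overlap, and function name are illustrative assumptions, since the paper does not specify its chunking parameters):

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50):
    """Split cleaned document text into overlapping knowledge fragments.

    Overlap keeps sentences that straddle a boundary retrievable from
    either neighboring fragment.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

# Each resulting fragment d_i would then be embedded and stored in the
# vector index described next.
fragments = chunk_text("x" * 500, chunk_size=200, overlap=50)
```

A production system would typically split on sentence or section boundaries rather than raw character offsets, but the overlap principle is the same.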
A semantic embedding vector representing the semantics of each fragment is generated through a text semantic embedding model to form a collection $\mathcal{D} = \{d_1, d_2, \dots, d_n\}$ for retrieval. The text semantic embedding model is a type of text processing model in the field of natural language processing for the specific task of text semantic extraction and compression. It compresses words, phrases, sentences, or even documents into a high-dimensional embedding vector through Transformer encoding layers. The resulting embedding vector is a compressed representation of the complete text semantics, which is easy to store and retrieve efficiently. When storing text content, this embedding vector is stored in the database together with relevant data of the original text as an index for subsequent retrieval.

During the retrieval phase, we first perform semantic embedding on the aviation training-related question raised by the user to generate a semantic embedding vector $q$ for the input question, and compute its relevance to documents in the knowledge base via cosine similarity:

$$ s(q, d) = \frac{\mathrm{Emb}_R(q) \cdot \mathrm{Emb}_R(d)}{\| \mathrm{Emb}_R(q) \| \cdot \| \mathrm{Emb}_R(d) \|}, $$

where $\mathrm{Emb}_R(\cdot)$ is the vector representation generated by the retrieval model. The top $k$ documents most relevant to query $q$, $D_k = \{d_1, d_2, \ldots, d_k\}$, are retrieved from the document collection $\mathcal{D}$. These knowledge text fragments are then placed into the prompt template together with the question and fed into the generation model to obtain the output $a$:

$$ a = \mathrm{LM}(\mathrm{concat}(q, D_k)), $$

where $\mathrm{LM}(\cdot)$ is the generation model.
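The cosine-similarity retrieval step above can be sketched as follows (toy vectors and function names are illustrative; a real system would use a learned embedding model $\mathrm{Emb}_R$ and a vector database rather than plain lists):

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def retrieve_top_k(query_vec, doc_vecs, k=2):
    """Return indices of the k documents most similar to the query."""
    ranked = sorted(range(len(doc_vecs)),
                    key=lambda i: cosine(query_vec, doc_vecs[i]),
                    reverse=True)
    return ranked[:k]

# Toy 3-d embeddings standing in for Emb_R(.) outputs.
docs = [[1.0, 0.0, 0.0], [0.9, 0.1, 0.0], [0.0, 0.0, 1.0]]
query = [1.0, 0.05, 0.0]
top = retrieve_top_k(query, docs, k=2)  # indices of the two closest fragments
```

The retrieved fragments `D_k` are then concatenated with the query into the prompt template and passed to the generator, exactly as in the equation above.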
Under the RAG framework, the language model leverages its advanced generative capabilities to synthesize contextually grounded responses by jointly analyzing both the user's query and the retrieved contextual information. By implementing Retrieval-Augmented Generation technology, the model can generate accurate and timely content, which is particularly crucial for aviation professionals' theoretical training, where regulatory documents frequently undergo updates. The dynamic updatability of the knowledge base enables the model to utilize the latest aviation theory training information without requiring retraining, while simultaneously mitigating hallucination issues inherent in large language models.

# 5 Experiments

In this section, we outline our experimental workflow, which includes data processing, validation of the DPO fine-tuning method, and validation of the impact of RAG technology on the models.

# 5.1 Construction of the Dataset

Given the stringent requirements for professional knowledge accuracy and timeliness in aviation theory training, this study systematically aggregates multi-domain aviation training data spanning foundational aviation theory, meteorological systems, aerodynamics, visual flight rules, and civil aviation regulations. The curated data derives from three authoritative sources: 1) certified training materials and official textbooks, 2) peer-reviewed academic literature and technical publications, and 3) regulatory documentation from the International Civil Aviation Organization (ICAO) and national aviation authorities, ensuring both provenance verification and content precision. The data processing phase implements rigorous cleaning, structural organization, and categorical classification protocols to guarantee information integrity.
Aviation knowledge texts undergo expert-guided manual annotation, constructing a premium dataset that encapsulates all critical knowledge components, resulting in 9,740 curated data pairs designated as preferred model responses. Concurrently, we generate auxiliary coarse-grained responses for identical queries using untrained large language models, establishing a dual-response framework. This comparative architecture enables systematic output optimization through response alignment analysis, ensuring strict compliance with aviation standard operating procedures.

# 5.2 Experimental Setup

This study uses the open-source pre-trained model Qwen2.5-14B as its base model. Comparative experiments employing Supervised Fine-Tuning (SFT) and Direct Preference Optimization (DPO) techniques were conducted to evaluate their respective effectiveness. Both methods used a learning rate of 0.0003 and a batch size of 16, and were trained on the ATDS dataset. For model evaluation, this paper utilizes the AI evaluator Themis-turbo from Alibaba Cloud's Bailian platform to assess the two models, with the evaluation dataset derived from random sampling of ATDS. The configuration of this AI evaluator is set as follows: models are scored across six dimensions—accuracy, relevance, completeness, source reliability, clarity of answer structure, and timeliness—using a 5-tier rating system (the more the response aligns with the scoring criteria, the higher the score). To validate Retrieval-Augmented Generation (RAG) effectiveness, we implemented the expert evaluation method from OpsEval [23], using human experts to score output fluency, accuracy, and timeliness on a 0-5 scale. Four model variants were tested: SFT-fine-tuned Qwen2.5-14B, DPO-fine-tuned Qwen2.5-14B, SFT+RAG-enhanced Qwen2.5-14B, and DPO+RAG-enhanced Qwen2.5-14B.
Aviation training domain experts scored 100 sample question-answer pairs from each model configuration, with performance quality positively correlating to higher numerical ratings across all metrics.

# 5.3 Experiment Results Analysis

The comparative results of models trained with SFT and DPO, evaluated by the AI evaluator Themis-turbo, are shown in Table 1, while the scores assigned by aviation theory training experts to the four models are presented in Table 2.

Table 1. Results of Model Comparison

Table 1 compares the performance of Qwen2.5-14B fine-tuned with DPO versus SFT on the ATDS evaluation set. The results demonstrate that the DPO-tuned Qwen2.5-14B outperforms its SFT-tuned counterpart in aviation theory training tasks. This indicates that the DPO fine-tuning technique achieves superior performance across all six evaluation dimensions—accuracy, relevance, completeness, source reliability, clarity of answer structure, and timeliness—compared to SFT fine-tuning. Evaluated by Alibaba Cloud's Bailian platform AI evaluator Themis-turbo, the DPO-tuned model exhibits a 14% higher win rate than the SFT-tuned model.

Table 2. Expert evaluation results

In Table 2, SFT indicates SFT-fine-tuned Qwen2.5-14B, DPO indicates DPO-fine-tuned Qwen2.5-14B, SFT+RAG indicates SFT+RAG-enhanced Qwen2.5-14B, and DPO+RAG indicates DPO+RAG-enhanced Qwen2.5-14B. As can be seen from the table, the fluency, accuracy, and timeliness of the models fine-tuned with DPO are clearly superior to those fine-tuned with SFT, indicating that the answers generated by the DPO-tuned models align more closely with expert preferences than those from the SFT-tuned models.
Meanwhile, the models using RAG technology significantly outperform those without it, indicating that RAG enhances the large model's ability to understand and answer professional domain question-and-answer data, generating more fluent and accurate answers with stronger timeliness.
Aviation training is a core link in ensuring flight safety, improving industry efficiency, and promoting sustainable development. It involves not only flight simulation but also the learning of a great deal of professional aviation theory knowledge. In the existing training system, knowledge is mainly imparted by instructors. However, the number of instructors is limited, and professional answers obtained from the Internet are not accurate enough, resulting in low training efficiency. To address this, we introduce LLMs; however, a basic pre-trained model cannot provide accurate answers in professional fields, so we fine-tune it. Traditional Supervised Fine-Tuning (SFT) risks generating superficially plausible but factually incorrect responses due to insufficient data coverage. To address this, we employ Direct Preference Optimization (DPO). This paper proposes Retrieval-Augmented LLM Alignment via Direct Preference Optimization (RALA-DPO). We select the open-source pre-trained LLM Qwen and adapt it to aviation theory training through DPO-based domain alignment. Simultaneously, to mitigate hallucinations caused by training data biases, knowledge obsolescence, or domain knowledge gaps, we implement Retrieval-Augmented Generation (RAG) technology that combines generative and retrieval models. RALA-DPO effectively retrieves relevant information from external knowledge bases and delivers precise, high-quality responses through the generative model. Experimental results demonstrate that RALA-DPO can improve accuracy in responses to professional aviation knowledge. With integrated RAG mechanisms, the system can further improve answer accuracy while simultaneously achieving zero-cost knowledge updates.
# 1 Introduction

Multimodal Large Language Models (MLLMs) [22, 30, 1, 8] excel at cross-modal tasks such as image captioning [19] and visual question answering [3, 39]. Owing to their high computational cost, they are typically offered via cloud services (e.g., GPT-4o [22], Gemini [30]). This setup, while convenient, exposes shared resources to abuse. Malicious users can craft adversarial inputs that trigger excessive computation or unusually long outputs. Such inference-time amplification attacks consume disproportionate resources, degrade service quality, and may even lead to denial-of-service (DoS) [15, 40, 18, 13] (see Figure 1).

Existing energy-latency attacks on MLLMs [14] typically attempt to suppress the End-of-Sequence (EOS) token by applying uniform pressure across all output tokens, irrespective of token type or position. However, this strategy proves only marginally effective in increasing resource consumption. We attribute the limited efficacy of these existing approaches to two primary factors: 1) Our experimental analysis reveals that different Part-of-Speech (POS) tokens exhibit distinct propensities to trigger the EOS token. For instance, Figure 3 demonstrates that punctuation is notably more likely to be followed by EOS than tokens like adjectives or progressive verbs. The uniform suppression strategy used in prior work [14], however, disregards these crucial token-specific variations. Consequently, it applies pressure inefficiently to positions unlikely to terminate the sequence. This oversight leads to suboptimal optimization and, ultimately, reduced attack effectiveness. 2) Current methods often overlook the impact of sentence-level structural patterns on generated token counts. For instance, inducing repetitive patterns is a common tactic that significantly inflates resource usage, yet it is not explicitly leveraged by existing attack frameworks.
To address the aforementioned limitations and efficiently induce prolonged and repetitive outputs from MLLMs, we propose the LingoLoop Attack. First, building upon our analysis that different POS tokens exhibit distinct propensities to trigger the EOS token, we develop the POS-Aware Delay Mechanism. This mechanism constructs a POS-aware prior probability model by statistically analyzing the correlation between part-of-speech tags and EOS token prediction probabilities across large-scale data. Leveraging these estimated prior probabilities, the mechanism dynamically postpones EOS token generation by adjusting attention weights guided by POS information. Second, we propose a Generative Path Pruning Mechanism to systematically induce repetitive generation and maximize output length. Our design is motivated by empirical analysis of hidden state dynamics, which reveals that repetitive outputs consistently correlate with low-variance regions in the model's latent space. The mechanism operates by actively constraining the $L_2$ norm of hidden states at each decoding step, deliberately compressing the model's trajectory into a restricted subspace. This strategic limitation of the hidden state manifold progressively reduces output diversity, forcing the model into a stable loop. Through this controlled degradation of generation diversity, we effectively establish and maintain a persistent looping state that amplifies output length. By integrating these two mechanisms, the LingoLoop Attack effectively delays sequence termination while simultaneously guiding the model into repetitive generation patterns. Our main contributions can be summarized as follows:

• We analyze MLLMs' internal behaviors, showing: 1) the significant influence of a preceding token's Part-of-Speech tag on the probability of the next token being an EOS token, and 2) a strong correlation between hidden state statistical properties and the emergence of output looping.
This analysis reveals critical limitations in prior verbose attack strategies.

• We propose the LingoLoop Attack, a synergistic two-component methodology designed to exploit these findings, featuring: 1) a POS-Aware Delay Mechanism for context-aware termination delay, and 2) a Generative Path Pruning Mechanism to actively induce repetitive, high-token-count looping patterns.

• Extensive experiments show our method achieves extreme verbosity, generating up to $30\times$ more tokens and consuming $30\times$ more energy than clean inputs, as shown in Table 3, significantly surpassing previous attacks in exposing MLLMs' resource exhaustion risks.

# 2 Related Work

Multimodal Large Language Models. Multimodal Large Language Models (MLLMs) are a powerful extension of traditional Large Language Models (LLMs), integrating visual perception capabilities [37, 34, 23]. These models typically comprise a vision encoder to interpret images, a core LLM for reasoning and language tasks, and an alignment module connecting the two modalities. The design of this connection and the overall architecture influences model behavior and efficiency. For example, architectures like InstructBLIP [11] employ sophisticated mechanisms, such as an instruction-guided Querying Transformer, to dynamically focus visual feature extraction based on textual context. More recent developments, represented by the Qwen2.5-VL series [1] (including the 3B and 7B variants central to our study), build upon dedicated LLM foundations like Qwen2.5 [38]. They incorporate optimized vision transformers, featuring techniques like window attention and efficient MLP-based merging, aiming for strong performance in fine-grained visual understanding and document analysis across model scales. Another advanced architecture, InternVL3-8B [8, 7], employs Native Multimodal Pre-Training with V2PE [17] for long contexts and MPO [35] for reasoning optimization.
Evaluating these approaches is crucial for understanding their operational characteristics, particularly energy consumption under adversarial conditions. Energy-latency Attack. Energy-latency attacks (also known as sponge attacks) [33] aim to maximize inference time or energy consumption via malicious inputs, thereby threatening system availability [33, 21, 26, 20, 32, 29, 24, 6]. These attacks typically exploit the efficiency optimization mechanisms inherent in models or hardware, potentially leading to Denial-of-Service (DoS) conditions, inflated operational costs, or rapid battery depletion on edge devices [33, 10, 36]. For instance, early work targeted fundamental optimization principles by constructing inputs designed to minimize activation sparsity in CNNs [33, 25] or maximize the number of internal operations within Transformer models [33]. These ideas were later extended to traditional image captioning systems (e.g., CNN-RNN models), with attacks like NICGSlowDown [5] manipulating image inputs to force longer decoding sequences. In the domain of text generation, NMTSloth [4] targeted neural machine translation models, crafting prompts that prolonged generation and increased computation. As LLMs became dominant, prompt-level attacks such as P-DOS [16] were proposed to exploit their autoregressive decoding behavior. With the advent of MLLMs, research has begun to explore energy-latency attacks targeting these novel architectures. Gao et al. [14] recently proposed the Verbose Images method. This technique introduces imperceptible perturbations to the input image, inducing the MLLMs to generate lengthy textual descriptions, which in turn significantly increases the model’s inference costs. However, it overlooks how part-of-speech information influences the likelihood of generating an EOS token, limiting its ability to fully exploit linguistic cues for prolonged generation. 
# 3 Methodology
# 3.1 Preliminaries
Our primary objective is to design an adversarial attack targeting MLLMs. The attacker aims to craft an adversarial image $\mathbf{x}'$ from an original image $\mathbf{x}$ and a given input prompt $c_{\mathrm{in}}$. This adversarial image $\mathbf{x}'$ should induce the MLLMs to generate a highly verbose or even repetitive output sequence $\mathbf{y} = \{y_1, y_2, \dots, y_{N_{\mathrm{out}}}\}$. The generation of each token $y_j$ is associated with an output probability distribution $f_j(\mathbf{x}')$, an EOS probability $f_j^{\mathrm{EOS}}(\mathbf{x}')$, and a set of hidden state vectors across $L$ model layers, $h_j(\mathbf{x}') = \{h_j^{(l)}(\mathbf{x}')\}_{l=1}^{L}$. The attacker operates under a white-box scenario, possessing full knowledge of the target MLLM's architecture, parameters, and gradients. This enables the use of gradient-based methods to optimize the adversarial perturbation. The adversarial image $\mathbf{x}'$ is constrained by an $l_p$-norm bound:
$$ \| \mathbf{x}' - \mathbf{x} \|_p \leq \epsilon, \tag{1} $$
where $\epsilon$ is the perturbation budget. Given the strong correlation between MLLMs' computational costs (e.g., energy consumption and latency) and the number of output tokens, the attacker's ultimate goal is to maximize the length of the generated token sequence, $N_{\mathrm{out}}(\mathbf{x}')$. This can effectively degrade or even paralyze MLLM services. Formally, the attacker's objective is:
$$ \operatorname*{max}_{\mathbf{x}'} N_{\mathrm{out}}(\mathbf{x}'), \tag{2} $$
subject to the constraint in Equation (1).
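Instantiated with the $\ell_\infty$ norm used later in our experiments, the constraint in Equation (1) reduces to clipping each pixel of $\mathbf{x}'$ to an $\epsilon$-band around the clean image. A minimal sketch with toy pixel lists (not the paper's implementation):

```python
# Sketch (toy values, not the paper's code): projection onto the
# l_inf ball of radius eps around the clean image x, i.e. the
# feasibility step that enforces Equation (1) after each PGD update.

def project_linf(x_adv, x, eps):
    """Clip each component of x_adv to within eps of x."""
    return [min(max(a, c - eps), c + eps) for a, c in zip(x_adv, x)]

x = [120.0, 64.0, 200.0]      # clean pixel values (toy example)
x_adv = [135.0, 55.0, 201.0]  # perturbed values before projection
print(project_linf(x_adv, x, eps=8))  # -> [128.0, 56.0, 201.0]
```

In practice the same clipping is applied element-wise to the full image tensor after every gradient step.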
To maximize the number of output tokens produced by MLLMs from adversarial images $\mathbf { x } ^ { \prime }$ , we introduce the LingoLoop Attack. This methodology counteracts natural termination and manipulates state evolution to promote sustained, high-volume token generation. It synergistically combines two primary components: 1) POS-Aware Delay Mechanism, as detailed in Sec.3.2, and 2) Generative Path Pruning Mechanism (Sec. 3.3), which induces looping by constraining hidden state magnitudes to guide the model towards repetitive, high-volume outputs. These components are integrated into a weighted objective function, and the overall optimization approach is detailed in Sec.3.4. Figure 2 presents the framework of our LingoLoop Attack. # 3.2 POS-Aware Delay Mechanism A key challenge in prolonging MLLMs generation is their natural termination behavior, where the model predicts an EOS token based on linguistic cues in the preceding context. While prior work [14] attempted to delay termination by uniformly suppressing EOS probabilities, our analysis (see Figure 3) reveals that EOS predictions are strongly correlated with the POS tag of the preceding token. This motivates our POS-Aware Delay Mechanism, which dynamically suppresses EOS token probabilities based on linguistic priors derived from POS statistics. Figure 2: Overview of the LingoLoop Attack framework. This two-stage attack first employs a POS-Aware Delay Mechanism that leverages linguistic priors from Part-of-Speech tags to suppress premature sequence termination. Subsequently, the Generative Path Pruning Mechanism constrains hidden state representations to induce sustained, high-volume looping outputs. When processing an adversarial image $\mathbf { x } ^ { \prime }$ and prompt $c _ { \mathrm { i n } }$ , the MLLMs auto-regressively generates an output token sequence $\mathbf { y } = \{ y _ { 1 } , \dots , y _ { N _ { \mathrm { o u t } } } \}$ . 
For each $i$ -th token $y _ { i }$ in this generated sequence (where $i$ ranges from 1 to $N _ { \mathrm { o u t } } )$ , the model provides the corresponding logits vector ${ \bf z } _ { i } ( { \bf x } ^ { \prime } )$ . The EOS probability for this step, $f _ { i } ^ { \mathrm { E O S } } ( \mathbf { x } ^ { \prime } )$ , is then derived from these logits: $$ \begin{array} { r } { ( \mathbf { y } , \{ \mathbf { z } _ { j } ( \mathbf { x } ^ { \prime } ) \} _ { j = 1 } ^ { N _ { \mathrm { o u t } } } ) = \mathbf { M } \mathbf { L } \mathbf { L } \mathbf { M } ( \mathbf { x } ^ { \prime } , c _ { \mathrm { i n } } ) ; \quad f _ { i } ^ { \mathrm { E O S } } ( \mathbf { x } ^ { \prime } ) = ( \mathrm { s o f t m a x } ( \mathbf { z } _ { i } ( \mathbf { x } ^ { \prime } ) ) ) _ { \mathrm { E O S } } . } \end{array} $$ Subsequently, for each $i$ -th newly generated token $y _ { i }$ in the output sequence y (where $i$ ranges from 1 to $N _ { \mathrm { o u t } } )$ , we determine the POS tag of its predecessor token, $y _ { i - 1 }$ . For $i = 1$ , the predecessor $y _ { 0 }$ is taken as the last token in $c _ { \mathrm { i n } }$ . For all subsequent tokens $( i > 1 )$ ), $y _ { i - 1 }$ is the actual $( i - 1 )$ -th token from the generated sequence y. The POS tag $t _ { i - 1 }$ is then obtained as: $$ t _ { i - 1 } = \mathrm { P O S } ( y _ { i - 1 } ) . $$ This POS tag $t _ { i - 1 }$ is then used to query our pre-constructed Statistical Weight Pool, which encodes linguistic priors for EOS prediction conditioned on POS tags. Specifically, for each Part-of-Speech tag $t$ , the pool stores an empirical prior $\bar { P } _ { \mathrm { E O S } } ( t )$ , representing the average probability that the model predicts an EOS token immediately after generating a token with POS tag $t$ . To estimate these priors, we input a large collection of images (e.g., from ImageNet [12] and MSCOCO [27]) into the MLLMs and collect its generated output sequences. 
For each generated token, we extract the EOS probability predicted at the next time step, and categorize these values by the POS tag of the current token. The average of these grouped EOS probabilities yields the final value of $\bar{P}_{\mathrm{EOS}}(t)$. A weight $w_i$ for the $i$-th generation step is then computed from the linguistic prior associated with the preceding POS tag, $\bar{P}_{\mathrm{EOS}}(t_{i-1})$, using a predefined weighting function $\phi_w$:
$$ w_i = \phi_w(\bar{P}_{\mathrm{EOS}}(t_{i-1}); \theta_w), \tag{5} $$
where $\theta_w$ represents a set of parameters governing the transformation from prior probabilities to weights. This function $\phi_w$ is designed such that the resulting weight $w_i$ is typically larger when the linguistic prior $\bar{P}_{\mathrm{EOS}}(t_{i-1})$ is higher, signifying that the preceding POS tag $t_{i-1}$ is statistically more likely to be followed by an EOS. Furthermore, the resulting weights are often normalized (e.g., to the range $[0, 1]$) for stable optimization. Thus, POS tags indicating a higher natural likelihood of termination will correspond to a larger $w_i$, focusing suppressive attention in the loss function. Finally, to actively suppress premature termination, we define the Linguistic Prior Suppression loss $(\mathcal{L}_{\mathrm{LPS}})$. This loss is a key component of the POS-Aware Delay Mechanism (Figure 2). It aims to reduce the EOS probability, particularly in contexts identified by $w_i$ as linguistically prone to termination:
$$ \mathcal{L}_{\mathrm{LPS}}(\mathbf{x}') = \frac{1}{N_{\mathrm{out}}} \sum_{i=1}^{N_{\mathrm{out}}} \left( w_i \cdot f_i^{\mathrm{EOS}}(\mathbf{x}') \right). \tag{6}
$$
Figure 3: Statistical analysis of the Qwen2.5-VL-3B-Instruct model showing the varying probability of generating an EOS token based on the preceding token's POS tag. Bar color indicates the relative frequency of each POS tag in the analysis dataset.
By minimizing $\mathcal{L}_{\mathrm{LPS}}$ (Equation 6) through adversarial optimization of $\mathbf{x}'$, the suppressive gradient signal on $f_i^{\mathrm{EOS}}(\mathbf{x}')$ is adaptively scaled by $w_i$, resulting in stronger inhibition in linguistically termination-prone contexts. This targeted suppression discourages premature sequence termination in linguistically cued situations, thereby robustly prolonging the output.
# 3.3 Generative Path Pruning Mechanism
While suppressing early EOS predictions (via $\mathcal{L}_{\mathrm{LPS}}$, Sec. 3.2) is effective in prolonging generation, we observe that achieving truly extreme output lengths often relies on a different dynamic: inducing the model into a repetitive or looping state. A model trapped in such a loop will continue emitting tokens until external termination limits are reached. However, MLLMs are inherently biased toward diverse and coherent generation, driven by continuous evolution in their internal representations. This dynamic evolution naturally resists the kind of hidden state stagnation that underlies repetitive outputs. To counter this, we introduce Generative Path Pruning, a mechanism that disrupts representational diversity and guides the model toward convergence in hidden state space. This effectively restricts the exploration of novel generative trajectories and biases the model toward repetitive, high-volume outputs. Our analysis shows that adversarial samples achieving maximal verbosity frequently exhibit state-space collapse, where hidden representations converge to a narrow subregion, reducing contextual variance and encouraging repetition.
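The statistics-gathering and weighted loss of the POS-Aware Delay Mechanism (Sec. 3.2) can be sketched end to end with toy data. This is not the authors' released code; the weighting function $\phi_w$ is taken as the identity here, and the record layout is an assumption:

```python
# Sketch (toy data, not the authors' code) of the Sec. 3.2 pipeline:
# (1) average next-step EOS probabilities per POS tag to build the
# Statistical Weight Pool, (2) weight each generation step by the
# prior of its predecessor's tag (phi_w = identity here), and
# (3) average w_i * P_i(EOS) over the sequence, as in Equation (6).
from collections import defaultdict

def build_weight_pool(records):
    """records: (pos_tag, next_step_eos_prob) pairs from clean runs."""
    sums, counts = defaultdict(float), defaultdict(int)
    for tag, p in records:
        sums[tag] += p
        counts[tag] += 1
    return {tag: sums[tag] / counts[tag] for tag in sums}

def lps_loss(eos_probs, prev_tags, pool):
    """Mean of w_i * P_i(EOS), with w_i read from the pool (Eq. 6)."""
    weights = [pool.get(t, 0.0) for t in prev_tags]
    return sum(w * p for w, p in zip(weights, eos_probs)) / len(eos_probs)

# Offline statistics: nouns here are often followed by EOS, conjunctions rarely.
pool = build_weight_pool([("NOUN", 0.25), ("NOUN", 0.75), ("CCONJ", 0.125)])
# Two decoding steps whose predecessor tokens are tagged NOUN and CCONJ.
loss = lps_loss(eos_probs=[0.5, 0.25], prev_tags=["NOUN", "CCONJ"], pool=pool)
print(pool["NOUN"], loss)  # -> 0.5 0.140625
```

Minimizing this quantity with respect to the image pushes the EOS probability down hardest exactly where the linguistic prior says termination is most likely.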
To validate this, we conduct a batch-level mixing experiment: each batch contains $B$ images, initially with $M_{\mathrm{clean}}$ clean images and $M_{\mathrm{adv}}$ adversarial, loop-inducing images such that $M_{\mathrm{clean}} + M_{\mathrm{adv}} = B$. We progressively vary $M_{\mathrm{adv}}$ (e.g., $M_{\mathrm{adv}} = 2, 4, \ldots$) to study the impact of adversarial proportion. As shown in Figure 4(a), increasing $M_{\mathrm{adv}}$ consistently reduces both the mean and variance of hidden state $L_2$ norms. Meanwhile, Figure 4(b) shows a corresponding increase in output length and repetition metrics. This inverse correlation between hidden state dispersion and verbosity supports our hypothesis that constraining internal diversity promotes looping.
Figure 4: Effect of the proportion of adversarial images within a batch ($B = 20$) on hidden state norm statistics and output length/repetition.
To implement our Generative Path Pruning strategy, we propose the Repetition Promotion Loss $(\mathcal{L}_{\mathrm{Rep}})$, which encourages repetitive generation by directly penalizing the magnitudes of hidden states corresponding to generated output tokens. By promoting contraction of these representations, the model's internal dynamics become less diverse, fostering looping behavior and pruning away divergent generative paths. The loss is controlled by a hyperparameter $\lambda_{\mathrm{rep}}$. For each output token $k \in \{1, \ldots, N_{\mathrm{out}}\}$, we first define its average hidden state norm across all $L$ transformer layers as:
$$ \bar{r}_k = \frac{1}{L} \sum_{l=1}^{L} \left\| h_k^{(l)}(\mathbf{x}') \right\|_2, \tag{7} $$
where $h_k^{(l)}(\mathbf{x}')$ denotes the hidden state at layer $l$ corresponding to the $k$-th output token.
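The per-token average norm above can be sketched with plain lists standing in for hidden-state vectors (toy tensors, not the authors' code):

```python
# Sketch (toy tensors, not the authors' code): the per-token average
# hidden-state norm r_bar_k of Eq. (7), averaging the L2 norm of the
# k-th output token's hidden state over all L transformer layers.
import math

def avg_hidden_norm(hidden_states_k):
    """hidden_states_k: list of L layer vectors for one output token."""
    norms = [math.sqrt(sum(v * v for v in h)) for h in hidden_states_k]
    return sum(norms) / len(norms)

# One token, two layers with hidden size 2 (norms 5 and 10).
layers = [[3.0, 4.0], [6.0, 8.0]]
print(avg_hidden_norm(layers))  # -> 7.5
```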
We then define the Repetition Promotion Loss as the mean of these norms across all output tokens, scaled by a regularization coefficient $\lambda_{\mathrm{rep}}$:
$$ \mathcal{L}_{\mathrm{Rep}}(\mathbf{x}') = \frac{\lambda_{\mathrm{rep}}}{N_{\mathrm{out}}} \sum_{k=1}^{N_{\mathrm{out}}} \bar{r}_k. \tag{8} $$
Minimizing $\mathcal{L}_{\mathrm{Rep}}$ (Eq. 8) drives down the magnitudes of output-time hidden states, reducing representational diversity and promoting repetition. This realizes the Generative Path Pruning Mechanism effect and significantly improves attack effectiveness beyond EOS suppression alone.
# 3.4 Overall Objective and Optimization
To effectively craft adversarial images $\mathbf{x}'$ as part of our LingoLoop Attack, our ultimate goal is to maximize the output token count $N_{\mathrm{out}}(\mathbf{x}')$ (Eq. (2)), subject to the constraint in Eq. (1). The combined objective integrates $\mathcal{L}_{\mathrm{LPS}}$ (Sec. 3.2) and $\mathcal{L}_{\mathrm{Rep}}$ (Sec. 3.3), with $\mathcal{L}_{\mathrm{LPS}}$ scaled by a factor $\alpha$ for numerical stability (see Supplemental Material). Following Verbose Images [14], dynamic weighting balances their contributions through:
$$ \mathcal{L}_{\mathrm{Total}}(\mathbf{x}', t) = \alpha \cdot \mathcal{L}_{\mathrm{LPS}}(\mathbf{x}') + \lambda(t) \cdot \mathcal{L}_{\mathrm{Rep}}(\mathbf{x}'). \tag{9} $$
Here, the dynamic weight $\lambda(t)$ modulates the influence of $\mathcal{L}_{\mathrm{Rep}}$ by comparing the magnitudes of the two losses from the previous iteration $(t-1)$, scaled by a temporal decay function $\mathcal{T}(t)$.
$$ \lambda(t) = \frac{\|\mathcal{L}_{\mathrm{LPS}}(\mathbf{x}'_{t-1})\|_1}{\|\mathcal{L}_{\mathrm{Rep}}(\mathbf{x}'_{t-1})\|_1} \Big/ \mathcal{T}(t). \tag{10} $$
The temporal decay function is defined as $\mathcal{T}(t) = a \ln(t) + b$, where $a$ and $b$ are hyperparameters controlling the decay rate. Momentum can also be applied when updating $\lambda(t)$ from one iteration to the next to smooth the adjustments. This dynamic balancing adapts the focus between EOS suppression and repetition induction over time. The LingoLoop Attack minimizes $\mathcal{L}_{\mathrm{Total}}(\mathbf{x}', t)$ via Projected Gradient Descent (PGD) [28] for $T$ steps, updating $\mathbf{x}'$ along the gradient of $\mathcal{L}_{\mathrm{Total}}$ and projecting it back onto the $\ell_p$-norm ball centered at the original image $\mathbf{x}$. The detailed procedural description of the LingoLoop Attack is provided in Appendix B.
# 4 Experiments
# 4.1 Experimental Setting
Models and Dataset. We evaluate our approach on four recent multimodal large language models: InstructBLIP [11], Qwen2.5-VL-3B-Instruct [1], Qwen2.5-VL-7B-Instruct [1], and InternVL3-8B [8, 7]. InstructBLIP employs the Vicuna-7B language model backbone, while the Qwen2.5-VL-3B model utilizes the Qwen2.5-3B architecture, and both the Qwen2.5-VL-7B and InternVL3-8B models are built upon the Qwen2.5-7B language model architecture. Following the experimental protocol of Verbose Images [14], we assess all models on the image captioning task. To ensure methodological consistency and enable fair comparisons, we use the default prompt templates provided for each model. For evaluating the primary task performance and attack effectiveness, we utilize images from two standard benchmarks: MSCOCO [27] and ImageNet [12]. Our evaluation set comprises 200 randomly selected images from each dataset.
For EOS prediction probability analysis by word category, we sample 5,000 images from each dataset (non-overlapping with the evaluation sets).
Table 1: Comparison of the LingoLoop Attack against baseline methods across four MLLMs (InstructBLIP, Qwen2.5-VL-3B, Qwen2.5-VL-7B, InternVL3-8B) on the MS-COCO and ImageNet datasets (200 images each). Metrics include generated token count, energy consumption (J), and inference latency (s). The best results for each metric are highlighted in bold.
Attack Settings. We compare our proposed method against several baselines, including original, unperturbed images, images with random noise added (sampled uniformly within the same $\epsilon$ budget as attacks), and the Verbose Images attack [14], which represents the state-of-the-art energy-latency attack for MLLMs. For generating adversarial examples using both the Verbose Images baseline and our method, the adversarial perturbations are optimized via the PGD [28] algorithm with $T = 300$ iterations. We enforce an $\ell_\infty$ constraint with $\epsilon = 8$ on the perturbations and use a step size of $\eta = 1$. During inference, MLLMs generate text sequences with a maximum of 1024 tokens using greedy decoding (do_sample = False) to ensure reproducibility. Following the experimental settings established by Verbose Images [14], we set the loss weight parameters to $a = 10$ and $b = -20$. The PGD updates are performed with momentum $m = 1.0$, and we fix $\theta_w = 10^5$.
Metrics. We primarily evaluate the effectiveness of our approach by measuring the number of tokens in the sequence generated by the MLLMs. Since increased sequence length inherently demands greater computational resources, it directly translates to higher energy consumption and inference latency, which are the ultimate targets of energy-latency attacks.
Consequently, in addition to token count, we report the energy consumed (measured in Joules, J) and the latency incurred (measured in seconds, s) during the inference process [33]. All measurements were conducted on a single GPU with consistent hardware contexts: NVIDIA RTX 3090 for Qwen2.5-VL-3B, NVIDIA V100 for InstructBLIP, and NVIDIA H100 for Qwen2.5-VL-7B and InternVL3-8B.
# 4.2 Main results
To assess the efficacy of LingoLoop, we conducted extensive experiments using images from the MS-COCO and ImageNet datasets (200 images each). LingoLoop's performance was benchmarked against three key conditions: (1) unperturbed clean inputs (‘None’); (2) inputs with added random noise (‘Noise’); and (3) adversarial inputs generated by Verbose Images [14], the current state-of-the-art verbose attack. Table 1 summarizes the key metrics: generated token counts, inference latency, and energy consumption across various MLLMs. As shown in Table 1, random noise inputs produce outputs comparable to clean inputs, confirming that naive perturbations cannot induce verbosity. In contrast, the LingoLoop Attack consistently achieves significantly longer outputs and higher resource utilization. For MS-COCO images, it compels InstructBLIP to generate 1002.08 tokens ($11.6\times$ clean inputs, $3.0\times$ Verbose Images) with 57.30 J energy ($11.7\times$ and $2.4\times$ higher). This pattern holds across models: Qwen2.5-VL-3B outputs 1020.38 tokens ($15.3\times$ clean, $2.6\times$ Verbose Images) consuming 32.94 J ($14.7\times$ and $2.5\times$ higher). The same near-maximal generation behavior occurs consistently on ImageNet and the other MLLMs (Qwen2.5-VL-7B, InternVL3-8B). The experimental findings in Table 1 establish LingoLoop's superior capability in forcing MLLMs into states of extreme verbosity, leading to significant resource exhaustion.
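Token-count and latency measurements of this kind can be sketched with a stand-in generator (this is not the authors' harness; energy would additionally require a power meter or GPU telemetry):

```python
# Sketch (stand-in generator, not the authors' measurement code):
# timing one generation call and counting output tokens, the two
# quantities from which latency figures are derived per image.
import time

def generate_stub(prompt):
    # Stand-in for a model's generate() call; returns token ids.
    return list(range(32))

def measure(prompt):
    start = time.perf_counter()
    tokens = generate_stub(prompt)
    latency = time.perf_counter() - start
    return len(tokens), latency

n_tokens, latency = measure("Describe the image.")
print(n_tokens)  # -> 32
```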
The consistent success in pushing diverse MLLMs to their output limits validates the effectiveness of LingoLoop's core strategies: the POS-Aware Delay Mechanism and the Generative Path Pruning Mechanism, which work synergistically to achieve these results.
# 4.3 Hyperparameter Optimization
Repetition Induction Strength ($\lambda_{\mathrm{rep}}$). We conduct an ablation study on $\lambda_{\mathrm{rep}}$, the hyperparameter controlling the strength of the Repetition Promotion Loss $(\mathcal{L}_{\mathrm{Rep}})$. This loss penalizes the $L_2$ norm of hidden states in the generated output sequence to promote repetitive patterns. These experiments are performed on 100 images from the MS-COCO dataset using the Qwen2.5-VL-3B model, with attack parameters set to 300 iterations and $\epsilon = 8$. As shown in Figure 5, varying $\lambda_{\mathrm{rep}}$ significantly impacts the attack's effectiveness. A low $\lambda_{\mathrm{rep}}$ (e.g., 0.1) provides insufficient pressure on hidden states, resulting in limited repetition and lower token counts. As $\lambda_{\mathrm{rep}}$ increases, the constraint becomes stronger, effectively guiding the model towards repetitive patterns, which is reflected in the increasing token counts, energy consumption, and latency. However, an excessively high $\lambda_{\mathrm{rep}}$ (e.g., 0.6, 0.7) might overly constrain the state space, potentially hindering even basic generation or leading to unproductive short loops, causing the metrics to decrease after peaking around $\lambda_{\mathrm{rep}} = 0.5$. This demonstrates the necessity of finding an optimal balance for the hidden state magnitude constraint.
Figure 5: Effect of $\lambda_{\mathrm{rep}}$ on generated token counts, energy, and latency.
Attack iterations. To determine a suitable number of PGD steps for our attack, we conduct a convergence analysis on 100 randomly sampled images from MS-COCO using the Qwen2.5-VL-3B model, under an $\ell_\infty$ perturbation budget of $\epsilon = 8$. As shown in Figure 6, our method (the LingoLoop Attack) achieves rapid growth in generated token count and converges near the maximum output limit within 300 steps. Based on this observation, we set the number of attack iterations to 300 for all main experiments. For reference, we also include three partial variants using $\mathcal{L}_{\mathrm{Rep}}$, $\mathcal{L}_{\mathrm{LPS}}$, and their combination. Compared to the full method, these curves converge slower or plateau earlier, indicating that removing components not only affects final attack strength, but also hinders the optimization process.
Figure 6: Convergence of generated token counts versus PGD attack steps for the LingoLoop Attack and its components on MS-COCO (100 images).
This supports our design choice to integrate both objectives for faster and more stable convergence.
# 4.4 Ablation Study
To analyze the LingoLoop Attack's effectiveness and understand the contribution of its key components, we conduct ablation experiments. These studies are performed on image subsets from the MS-COCO [27] and ImageNet [12] datasets, utilizing the Qwen2.5-VL-3B model [1] for validation.
Effect of loss objectives. This ablation investigates the contribution of our proposed loss objectives, $\mathcal{L}_{\mathrm{LPS}}$ and $\mathcal{L}_{\mathrm{Rep}}$. These experiments are conducted on 100 images from the MS-COCO dataset using the Qwen2.5-VL-3B model, with attack parameters set to 300 iterations and $\epsilon = 8$. As shown in Table 2, employing a baseline with uniform EOS weights yields 843.86 generated tokens.
Table 2: Ablation study on attack modules.
Using only $\mathcal{L}_{\mathrm{LPS}}$ improves this to 926.94 tokens, highlighting the benefit of POS-weighted suppression in delaying termination. Conversely, using only $\mathcal{L}_{\mathrm{Rep}}$ results in fewer tokens (561.90), as its primary focus is on state compression to induce repetition, not direct sequence lengthening. However, the combination of both $\mathcal{L}_{\mathrm{LPS}}$ and $\mathcal{L}_{\mathrm{Rep}}$ (without momentum) achieves significantly higher generated tokens (963.51), demonstrating the synergistic effect. This synergy arises because $\mathcal{L}_{\mathrm{LPS}}$ creates the opportunity for extended generation by suppressing termination, while $\mathcal{L}_{\mathrm{Rep}}$ exploits this opportunity by guiding the model into repetitive, high-volume output patterns.
Table 3: Performance metrics under varying maximum token generation limits (max_new_tokens).
Maximum output token. As part of our ablation study, we investigate the performance of different attack methods under varying max_new_tokens limits. Using the Qwen2.5-VL-3B model, attacked with 300 PGD steps ($\epsilon = 8$), we measure the generated token count, inference latency, and energy consumption on 100-image subsets from MS-COCO and ImageNet. Table 3 presents these results. Original inputs terminate quickly. While Verbose Images [14] achieves increased output lengths, it consistently fails to reach the system's maximum token limit. Our LingoLoop Attack, however, reliably drives token generation at or near the predefined max_new_tokens across all settings and datasets. This maximal token count directly leads to significantly higher latency and energy consumption, demonstrating the LingoLoop Attack's superior capability to force maximum verbose output for resource exhaustion.
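How a max_new_tokens cap interacts with EOS emission can be illustrated with a toy decoding loop (a stand-in, not the Hugging Face implementation): a clean sequence stops at its EOS, while a sequence whose EOS is suppressed runs to the cap.

```python
# Sketch (toy stand-in): a greedy decoding loop that stops either at
# the EOS token or at the max_new_tokens cap. With EOS suppressed,
# generation always saturates the cap.

def generate(next_token_fn, max_new_tokens, eos_id):
    tokens = []
    for _ in range(max_new_tokens):
        t = next_token_fn(tokens)
        tokens.append(t)
        if t == eos_id:
            break
    return tokens

clean = generate(lambda ts: 0 if len(ts) < 5 else 99, 1024, eos_id=99)
attacked = generate(lambda ts: 0, 1024, eos_id=99)  # EOS never emitted
print(len(clean), len(attacked))  # -> 6 1024
```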
# 4.5 Robustness against Defense Methods
To validate the LingoLoop Attack's effectiveness against mitigation strategies, we evaluate it against model parameters controlling repetitive outputs. Table 4 shows Qwen2.5-VL-3B results on 100 MS-COCO images ($P_1$: repetition_penalty, $P_2$: no_repeat_ngram_size). Under default settings, the LingoLoop Attack substantially increases generated token counts and resource consumption compared to Clean and Verbose Images [14]. Increasing $P_1$ to 1.10 slightly reduces the generated token counts for Clean and Verbose Images, while $P_1 = 1.15$ surprisingly increases their output tokens. This suggests that higher repetition penalties, while discouraging exact repeats, can sometimes push the model towards generating longer sequences that avoid immediate penalties. Our attack consistently achieves the maximum token limit (1024) across all tested $P_1$ variations. Enabling $P_2 = 2$ (with $P_1 = 1.05$) unexpectedly increases the total number of tokens for Clean and Verbose Images. This likely occurs because preventing n-grams forces the model to use alternative phrasing or structures, potentially leading to longer outputs. It also fails to prevent our attack from reaching the maximum generation limit. These results demonstrate that standard repetition controls are ineffective against the LingoLoop Attack.
Table 4: Defense results on a 100-image MS-COCO subset. $P_1$: repetition_penalty, $P_2$: no_repeat_ngram_size.
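The repetition_penalty defense ($P_1$) works by rescaling the logits of previously generated tokens; a minimal sketch mirroring the standard behavior (toy logits, not the library internals verbatim):

```python
# Sketch (toy logits): repetition_penalty rescales logits of tokens
# already generated - positive logits are divided by the penalty and
# negative logits multiplied by it - making exact repeats less likely.

def apply_repetition_penalty(logits, prev_tokens, penalty):
    out = list(logits)
    for t in set(prev_tokens):
        out[t] = out[t] / penalty if out[t] > 0 else out[t] * penalty
    return out

logits = [2.0, -1.0, 0.5]
print(apply_repetition_penalty(logits, prev_tokens=[0, 1], penalty=2.0))
# -> [1.0, -2.0, 0.5]
```

Because the attack drives generation via the image rather than via any single repeated token, such token-level rescaling leaves the looping behavior largely intact.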
Multimodal Large Language Models (MLLMs) have shown great promise but require substantial computational resources during inference. Attackers can exploit this by inducing excessive output, leading to resource exhaustion and service degradation. Prior energy-latency attacks aim to increase generation time by broadly shifting the output token distribution away from the EOS token, but they neglect the influence of token-level Part-of-Speech (POS) characteristics on EOS prediction and of sentence-level structural patterns on output counts, limiting their efficacy. To address this, we propose LingoLoop, an attack designed to induce MLLMs to generate excessively verbose and repetitive sequences. First, we find that the POS tag of a token strongly affects the likelihood of generating an EOS token. Based on this insight, we propose a POS-Aware Delay Mechanism to postpone EOS token generation by adjusting attention weights guided by POS information. Second, we identify that constraining output diversity to induce repetitive loops is effective for sustained generation. We introduce a Generative Path Pruning Mechanism that limits the magnitude of hidden states, encouraging the model to produce persistent loops. Extensive experiments demonstrate LingoLoop can increase generated tokens by up to 30 times and energy consumption by a comparable factor on models like Qwen2.5-VL-3B, consistently driving MLLMs towards their maximum generation limits. These findings expose significant vulnerabilities in MLLMs, posing challenges for their reliable deployment. The code will be released publicly following the paper's acceptance.
[ "cs.CL", "cs.CR" ]
# 1 Introduction
With the development of MLLMs [41, 79, 2, 71, 74, 75, 53, 23, 48, 60, 78], vision-based perception tasks have evolved toward a more comprehensive and interleaved understanding. Among these tasks, dense understanding, characterized by space-aware semantic interpretations of the physical world, has become increasingly important in real-world applications, including augmented and virtual reality (AR/VR) [32], autonomous driving [67], and embodied artificial intelligence [12]. Conventional methods [33, 65, 57, 18, 66, 70, 72, 73, 77] mainly rely on perspective images with a limited FOV, which inherently leads to an incomplete and biased understanding of densely structured environments, especially in scenarios involving extensive spatial interactions across wide FOVs. For example, in a scene where two individuals are waving to each other from opposite sides of the camera operator, a narrow perspective view may incorrectly interpret the interaction as both individuals waving directly at the camera operator. Such misinterpretations significantly affect the effectiveness of dense scene understanding, thereby highlighting the need for panoramic and space-aware approaches. An intuitive solution for enhancing global scene understanding is to feed MLLMs with multi-view images, either by recording video sequences using a single moving camera or by capturing single-shot views with multiple spatially distributed cameras. However, methods such as BEVFormer [36] require explicit or implicit feature fusion across these multi-view images, which introduces substantial computational overhead and limits their practicality in real-world deployments. This raises a question: can we provide MLLMs with a compact input-level representation, rather than relying on fusion at the feature level?
Preprint. Under review.
[Figure: dataset examples pairing omnidirectional panoramas with entity masks, unique referring expressions, dense entity-level captions, and entity-grounded panoramic scene descriptions.]
Inspired by the omnidirectional representation used in 360-degree scene understanding, equirectangular projection (ERP) images provide an ideal input format to address the aforementioned limitations. ERP encodes a comprehensive spatial context in a single compact format, facilitating simultaneous temporal and spatial reasoning.
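The ERP format is defined by standard equirectangular geometry (this sketch is not code from the paper): longitude and latitude map linearly to pixel coordinates, so a single $W \times H$ image covers the full 360-degree field of view, with longitude wrapping around at the image borders.

```python
# Sketch (standard ERP geometry, not the paper's code): map spherical
# angles (radians) to equirectangular pixel coordinates. Longitude
# spans [-pi, pi) across the width; latitude spans [-pi/2, pi/2]
# from bottom to top of the image.
import math

def sphere_to_erp(lon, lat, width, height):
    """Map spherical angles (radians) to ERP pixel coordinates."""
    x = (lon + math.pi) / (2 * math.pi) * width
    y = (math.pi / 2 - lat) / math.pi * height
    return x, y

# The point straight ahead (lon=0, lat=0) lands at the image center.
print(sphere_to_erp(0.0, 0.0, width=2048, height=1024))  # -> (1024.0, 512.0)
```

This wrap-around along the horizontal axis is exactly the geometric property a panoramic positional encoding must respect.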
However, research in this area remains underexplored, with notable deficiencies in both datasets and model understanding. To address these challenges, we propose a three-tier pipeline specifically designed for panoramic dense understanding. Specifically, this method leverages the visual semantics learned from the perspective domain while incorporating ERP-specific geometric priors. As a result, our pipeline enables the creation of a large-scale panoramic dataset comprising 160K panoramas with 5M dense entity-level captions, 1M unique referring expressions, and 100K entity-grounded scene descriptions (as shown in Fig. 7). Based on the proposed dataset, we further establish Dense360-Bench, a benchmark designed to evaluate MLLMs on omnidirectional captioning and grounding. Benchmarking existing MLLMs on Dense360-Bench reveals significant performance gaps compared to perspective scenarios, highlighting the limitations of conventional methods [79, 2] in ERP-specific dense understanding. Motivated by these challenges, we propose Dense360VLM, a vision–language framework with ERP-aware positional encoding, which achieves significant improvements in omnidirectional dense understanding. In summary, our contributions are threefold: • Dataset. We introduce the largest omnidirectional dense understanding dataset to date, featuring 160K panoramas with dense, reliability-scored annotations, supporting comprehensive visual–language understanding. • Benchmark. We establish Dense360-Bench, the first benchmark for evaluating and advancing research on MLLMs in omnidirectional captioning and grounding tasks. • Method. We propose ERP-RoPE, a positional encoding scheme specifically designed for ERP representations, adapting MLLMs to the geometric characteristics of panoramic ERP. Together, these contributions lay a solid foundation for advancing dense visual–language understanding in omnidirectional scenarios.

# 2 Related Work

Multimodal Large Language Models.
The remarkable advancements in MLLMs [43, 41, 14, 11, 59, 63, 46, 55, 2, 79, 21] have demonstrated strong performance across various visual perception and understanding tasks [40, 26, 6, 69, 31, 20, 49, 22, 28, 5, 37, 39, 29, 45, 58]. CLIP [52], BLIP [35, 34], ALIGN [27], and other models [68, 76] employ contrastive learning to establish a shared embedding space between visual and textual modalities [50, 19, 4, 15]. Some approaches [1, 64] integrate textual features into visual models or inject visual features into textual models. Methods [44, 14, 42, 33] like LLaVA [44] utilize a projector to map visual embeddings into the feature space of large language models (LLMs). Building upon this paradigm, subsequent research efforts [8, 33, 42] have focused on constructing large-scale, high-quality instruction-following datasets for model pretraining and fine-tuning. Recent works [71, 9, 16, 17, 47] such as PixelSAIL [71] employ a single transformer architecture as a unified vision-language model. These methodologies eliminate the dedicated vision encoder and conduct joint co-training of visual and linguistic tokens on extensive multimodal datasets. Our work follows the Vision Encoder-Projector-LLM paradigm and introduces ERP-RoPE, a novel position encoding scheme specifically designed to handle omnidirectional panoramic inputs. Panorama Datasets. KITTI-360 [38] has released a collection of urban panoramic images. EGOK360 [3] provides an egocentric $360^{\circ}$ kinetic human activity video dataset. PanoVOS [61] and JRDB-PanoTrack [30] introduce panoramic video datasets focusing on video object segmentation tasks. The dataset most closely related to ours is 360+X [7], which is captured from multiple viewpoints with multiple data modalities. However, it primarily focuses on tasks such as scene classification and action localization.
Our Dense360 dataset supports dense scene understanding through a comprehensive suite of reliability-scored annotations.

# 3 Dense360 Dataset and Benchmark

Overall Pipeline. To construct an omnidirectional dense understanding dataset, we design a three-tiered pipeline, as shown in Fig. 2. The Level-1 pipeline generates entity masks with granularity consistency (detailed in §3.1). The Level-2 pipeline produces dense captions for entities and assigns reliability scores to the generated captions (detailed in §3.2). The Level-3 pipeline creates entity-grounded panoramic scene descriptions for omnidirectional panoramas (detailed in §3.3). Dataset Statistics. The 160K ERP images in our Dense360 dataset encompass diverse scene categories. As shown in Fig. 3(a), 32.74% of these ERP images depict indoor scenes such as home activities, indoor sightseeing, dinner parties, and gyms, while 67.26% represent outdoor scenes including outdoor sightseeing, street views, outdoor sports, and natural landscapes. We employ a dataset generation pipeline to automatically annotate the 160K ERP images, incorporating quality control mechanisms for entity-level captions based on reliability scores. This process yields a large-scale corpus comprising 5 million dense, entity-level captions. As illustrated in Fig. 3(b), the spatial distribution of the 5 million annotated entities is uneven across the panoramic scenes: 40.55% are located in the front quadrant, followed by 21.97% in the right, 20.25% in the back, and 11.36% in the left. A small remaining portion is distributed across the top and bottom regions. Most entities represent humans and indoor objects such as couches, dining tables, and televisions.
From the original set of 5 million entities, we filter out those with low-quality masks (e.g., perforated masks, small-area masks, masks containing multiple disconnected regions) and retain a high-quality subset of 1 million entities. These selected entities are then used to construct 1 million unique referring expressions based on their corresponding brief captions. Using the Level-3 pipeline, we generate 100K entity-grounded panoramic scene descriptions. As demonstrated in Fig. 3(c), within these 100K descriptions, each caption contains a median of 12 grounded entities, with a median token length of 519, indicating the fine-grained and information-dense nature of the generated annotations. Building Dense360-Bench for Grounding and Captioning. Grounding and captioning are two fundamental capabilities of MLLMs for omnidirectional dense understanding. To evaluate these capabilities, we introduce Dense360-Bench, a benchmark designed to assess grounding and captioning performance on ERP images. From a curated set of 1,279 ERP images, we select 3,000 entities and construct 3,000 grounding questions based on their brief captions, as well as 3,000 captioning questions derived from their detailed captions. These 3,000 entities are evenly distributed across the front, back, left, and right panoramic directions to ensure spatial diversity. For grounding tasks, we follow conventional protocols by requiring MLLMs to segment the corresponding entity in the ERP image based on the brief caption. Thus, performance is evaluated using the mask IoU metric. For captioning tasks, we design a cost-effective evaluation scheme. As shown in Fig. 4, we extract key phrases from each entity's detailed caption. Given a predicted caption, we formulate a series of yes/no questions to determine whether each key phrase is explicitly mentioned. We employ ChatGPT-4o as the judge model. Based on its responses, we construct a binary vector indicating phrase coverage and compute the recall of key phrases as the final evaluation metric.

Figure 2: Dataset Generation Pipeline. The Level-1 pipeline generates entity masks. The Level-2 pipeline produces dense captions. The Level-3 pipeline creates entity-grounded panoramic scene descriptions.

Figure 3: Dataset Statistics. We employ Qwen2.5VL-72B-Instruct [2] for recognizing scene categories. We utilize Qwen2.5-72B-Instruct [62] to identify entity categories from captions. We calculate token lengths using the Qwen2.5 tokenizer [62].

# 3.1 Entity Masks (Level-1)

At this stage, our objective is to extract granularity-consistent entities from omnidirectional ERP images. Compared to traditional perspective images, entities in ERP images exhibit three distinctive characteristics: high density, spatial continuity along the circle of latitude, and geometric distortion.
These properties make it challenging to generate the corresponding entity masks directly from panoramic images. As illustrated in the Level-1 pipeline of Fig. 2, we begin by partitioning the ERP image into three square slice views, each with a 1:1 aspect ratio, using overlapping windows with a 50% stride to ensure spatial continuity. To preserve spatial continuity, especially at the circular boundary where the leftmost and rightmost edges of the ERP image represent adjacent regions, we introduce a fourth slice view by stitching these opposing boundaries together. Overall, this slicing strategy has two key advantages. On one hand, it simplifies the densely populated panoramic scene by decomposing it into square slice views with relatively simpler content, reducing visual complexity and aligning well with the input requirements of standard segmentation models. On the other hand, spatial continuity across the panoramic image is effectively preserved through the high overlap ratio among the four slice views. Additionally, we retain the original ERP image as an auxiliary input to accommodate large-scale entities that span the entire panoramic field, such as floors, ceilings, and sky regions.

[Figure 4: Captioning evaluation example. Key phrases picked from the ground-truth caption are posed as yes/no questions to the judge model; its answers form a binary coverage vector from which the recall score is computed.]

Table 1: Chain-of-Thought to prompt MLLMs to generate entity captions step-by-step.
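As an illustration, the four-view slicing strategy described in §3.1 can be sketched as follows. This is a simplified sketch assuming a 2:1 ERP aspect ratio and NumPy arrays, not the paper's actual preprocessing code:

```python
import numpy as np

def slice_erp(erp):
    """Partition a 2:1 ERP image into four square slice views.

    Three H x H windows with a 50% stride cover the full width, and a
    fourth window stitches the left/right boundary columns together so
    the circular seam remains spatially continuous.
    """
    H, W = erp.shape[:2]
    assert W == 2 * H, "assumes a 2:1 equirectangular aspect ratio"
    s = H // 2  # 50% stride
    views = [erp[:, i * s : i * s + H] for i in range(3)]  # x = 0, H/2, H
    # Fourth view: rightmost half followed by leftmost half of the ERP.
    views.append(np.concatenate([erp[:, W - s:], erp[:, :s]], axis=1))
    return views
```

The original ERP image would still be kept as an auxiliary input for entities spanning the full field, as described above.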
We employ CropFormer [51] to perform entity segmentation on both the four slice views and the complete ERP image, leveraging its strong consistency across image views and granularities. Following entity segmentation, we project all generated masks back to the ERP view and compute a mask Intersection-over-Union (IoU) matrix across all entity instances. Union operations then merge entity mask pairs that exhibit high IoU values (> 0.7). This post-processing step effectively integrates fragmented entity components into complete and granularity-consistent masks, while preserving precise boundary definitions.

# 3.2 Entity Caption (Level-2)

At this stage, our aim is to obtain detailed captions that fully describe each entity and brief captions that capture its essential information. Directly instructing powerful existing MLLMs with visual prompts presents an off-the-shelf solution. However, MLLMs such as InternVL3, Qwen2.5VL, and ChatGPT-4o struggle to accurately follow user-provided visual prompt instructions, making this approach extremely inefficient. Furthermore, such a solution lacks mechanisms for detecting hallucinations in generated captions and for verifying the alignment between captions and their corresponding masks, leading to unreliable caption outputs. To address these issues, we design three specialized steps, as shown in the Level-2 pipeline in Fig. 2. First, a tag pipeline extracts semantic information about entities. Next, multiple visual prompts and a semantics-enhanced Chain-of-Thought (CoT) prompt guide MLLMs to generate both brief and detailed captions stepwise. Finally, a verification pipeline evaluates the reliability of each caption for proofreading. Tag Pipeline. The tag pipeline integrates RAM++ [25] as a recognition model and APE [56] as a grounding model, enabling both category identification and spatial localization of entities.
First, we perform a cubemap projection (CMP) on the ERP image based on the entity's location to obtain an entity-centric front view. RAM++ takes this entity-centric front view as input and returns a set of recognized tags. These tags encompass diverse visual information, including entity categories, scene types, color attributes, and other visual characteristics. Then, we directly prompt APE with all the tags to generate a segmentation mask corresponding to each semantically meaningful tag. This process yields a set of mask–tag pairs produced by APE. Finally, we establish matching relationships by calculating the mask Intersection over Union (IoU) between the entity and APE masks. The tags of matched APE masks are assigned to the corresponding entity, thereby acquiring semantic prior information for the entity. Caption Pipeline. We obtain brief captions and detailed captions by prompting InternVL3 [79] with combined visual prompts and CoT prompts enhanced by prior semantic information. Different types of visual prompts exhibit distinct advantages and face specific limitations. For example, zoomed-in images of entities can capture fine-grained visual details but sacrifice global contextual information. Bounding boxes can provide salient visual cues yet may introduce ambiguity when multiple entities are enclosed. Contour-based prompts help reduce such ambiguity but are less effective for non-compact or irregularly shaped entities. Mask-based prompts allow for precise entity specification, yet they may obscure surrounding visual information and disrupt perceptual coherence. We combine these four types of visual prompts in a multi-image format as input to the MLLM, instructing it to progressively leverage these visual prompts alongside prior semantic information through step-by-step reasoning. The CoT prompting process is designed to progressively guide the generation of entity-centric descriptions through a structured sequence of reasoning.
As shown in Tab. 1, the process comprises eight steps. It begins by assessing the quality of the entity mask to ensure accurate segmentation, followed by evaluating the semantic clarity of the entity's category and identity. It then identifies key visual and contextual attributes, leveraging prior tag information when available. Next, it recognizes relevant components associated with the entity, such as subparts, attached objects, and functional features. The entity's spatial location is then determined within the panoramic context, aided by previously known location data. To capture dynamic context, the process observes all events and interactions that involve the entity across the temporal sequence. Based on the accumulated information, a unique and concise caption is synthesized. Finally, our caption process generates a detailed, entity-centric paragraph that integrates spatial, temporal, and semantic cues into a coherent narrative. Through these steps, we extract two distinct outputs: detailed captions that comprehensively describe each entity's characteristics and brief captions that capture essential entity information. Verification Pipeline. We leverage the powerful grounding capabilities of InternVL3 [79] by instructing it to localize target entities based on brief captions. The resulting grounded bounding boxes are then used to prompt SAM [54], which generates segmentation masks within the specified regions. We note that the grounded SAM mask should be inherently aligned with the input caption, regardless of whether the caption accurately describes the intended entity. This alignment allows us to assess the consistency between the brief caption and the target entity by comparing the SAM-generated mask with the ground-truth entity mask. Therefore, the IoU between the SAM mask and the entity mask serves as a quantitative reliability score for evaluating the caption's accuracy in grounding the correct visual region.
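The reliability score is simply the mask IoU between the caption-grounded SAM mask and the ground-truth entity mask. A minimal sketch with binary NumPy masks (illustrative, not the pipeline's exact implementation):

```python
import numpy as np

def mask_iou(a, b):
    """Intersection-over-Union between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 0.0
    return float(np.logical_and(a, b).sum() / union)

def reliability_score(sam_mask, entity_mask):
    # A high IoU means the brief caption grounds the intended entity well;
    # a low IoU flags the caption for proofreading.
    return mask_iou(sam_mask, entity_mask)
```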
# 3.3 Entity-Grounded Panoramic Scene Description (Level-3)

Leveraging the generated dense entity captions, we directly prompt GPT-4o to produce entity-grounded panoramic scene descriptions. As shown in the Level-3 pipeline of Fig. 2, these descriptions enable a comprehensive, fine-grained, and densely grounded understanding of ERP images, effectively capturing both semantic detail and spatial context.

# 4 Dense360 VLM

To enable dense visual–language understanding from panoramic ERP, we introduce Dense360VLM, a vision–language model tailored for ERP inputs. This section presents its two core components: the ERP-specific positional encoding ERP-RoPE (§4.1) and its integration into the vision–language framework (§4.2).

# 4.1 ERP-RoPE

Unlike perspective images, equirectangular projection (ERP) images present two unique geometric characteristics: Horizontal continuity. Since the horizontal axis of ERP corresponds to the unfolding of latitude circles on the sphere, the spatial representation in ERP images does not maintain a direct correspondence with the physical 3D environment. As shown in Fig. 5(a), points A and E appear farthest apart in the ERP image, yet their actual physical positions are adjacent. The distance from point A to D in the ERP image is greater than that from A to B, although their real-world physical distances are equal. In actual physical space, the point farthest from A is C rather than E.

Figure 5: The architecture of Dense360VLM. (a) An illustration of spatial continuity in ERP. (b) The architecture of Dense360VLM. The left side illustrates the relationship between pixel positions in the ERP image and their corresponding physical spatial locations. The right side demonstrates a position encoding derived using ERP-RoPE, along with the architectural framework of our Dense360VLM.

Table 2: Mathematical properties required for ERP horizontal positional encoding.

Latitude-dependent distortion.
ERP images also introduce latitude-dependent distortion, where horizontal lines at higher latitudes are physically shorter than those at lower latitudes. Consequently, this distortion causes the pixel-based information density to decrease progressively from the equator to the poles. Existing positional encodings, e.g., the rotary position embedding (RoPE) family, ignore both effects and yield sub-optimal performance on panoramic tasks. We therefore extend multimodal RoPE (mRoPE) and propose ERP-RoPE, a task-agnostic positional encoding tailored for ERP inputs. To encode positional information for ERP images, we adopt a conventional two-dimensional coordinate system. Given a pixel located at $(h, w)$, where $w \in [1, W]$ and $h \in [1, H]$, its positional encoding is formulated as $(g(h), f(w))$. Since the vertical coordinate $h$ does not exhibit specific geometric distortions, we directly define $g(h) = h$. In contrast, the horizontal coordinate $w$, which corresponds to the latitude circles of the sphere, requires special consideration due to the inherent horizontal continuity. We first summarize the mathematical properties that horizontal positional encoding in ERP images should satisfy in Tab. 2. Considering the latitude-dependent distortion in ERP images, the actual length of each latitude circle is $\cos\theta$ times the equatorial length, where $\theta \in [-90^{\circ}, 90^{\circ}]$ represents the latitude value. We introduce a scaling factor $\gamma$ to control the spacing of positional encoding for pixels, ensuring that the total length of all latitude circles after scaling equals $H \times W$:

$$ \sum_{\theta} \cos\theta \, W \gamma = H \times W $$

From this constraint, we derive the value of $\gamma$:

$$ \gamma = \frac{H}{\sum_{\theta} \cos\theta} $$

Finally, we reparameterize the pixel position $(h, w)$ in the ERP image as $(h, \gamma f(w))$.
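Numerically, the scaling factor can be computed from the per-row latitudes. The midpoint latitude sampling below is an assumption made for illustration; the paper does not specify its exact discretization:

```python
import numpy as np

def erp_gamma(H):
    """Scaling factor gamma = H / sum_theta cos(theta).

    One latitude per pixel row, sampled at row midpoints over
    [-90, 90] degrees (an illustrative discretization choice).
    """
    theta = (np.arange(H) + 0.5) / H * np.pi - np.pi / 2
    return H / np.cos(theta).sum()
```

As $H$ grows, $\sum_{\theta} \cos\theta \to 2H/\pi$, so $\gamma$ approaches $\pi/2 \approx 1.57$: horizontal positions are stretched to compensate for the shortened latitude circles away from the equator.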
For the implementation of $f(w)$, we adopt a naive solution in our code. When $W$ is an odd number:

$$ f(w) := [1, 2, 3, \ldots, (W+1)/2, (W+1)/2, (W+1)/2 - 1, \ldots, 3, 2].\mathrm{index\_select}(w) $$

When $W$ is an even number:

$$ f(w) := [1, 2, 3, \ldots, W/2 + 1, W/2, \ldots, 3, 2].\mathrm{index\_select}(w) $$

In Fig. 5(b), we visualize the computed $f(w)$ (W Index) for each ERP visual token using the above method.

# 4.2 Integrating ERP-RoPE into the MLLM

We follow the Vision Encoder-Projector-LLM paradigm, using Qwen2.5VL [2] as the baseline and implementing minor modifications on top of it. We incorporate ERP-RoPE to provide customized positional encoding for ERP visual tokens. For other inputs such as perspective images and text, we employ mRoPE [2] for positional encoding. Although Qwen2.5VL inherently possesses grounding capability with textual bounding boxes, these bounding boxes are inadequate for addressing the distortion and horizontal-continuity challenges in ERP images. For instance, an entity located at the back of a panoramic scene is split into two parts in an ERP image, appearing at the leftmost and rightmost ends, respectively. A viable solution [65, 29, 71] involves injecting a special token "[SEG]" into Qwen2.5VL's vocabulary to represent entities in textual responses, which can then be decoded into corresponding masks using SAM [54]. Adopting this streamlined approach, integrating ERP-RoPE into Qwen2.5VL and augmenting the vocabulary with the "[SEG]" token, we develop Dense360VLM to enable dense understanding of ERP images.

# 5 Experiments

Evaluation Benchmark. Since there is currently no benchmark to evaluate the dense understanding capabilities of MLLMs in omnidirectional panoramas, we constructed our own Dense360-Bench. We assess MLLMs' understanding of omnidirectional panoramas through entity captioning and visual grounding.
For grounding, we use mask IoU as the evaluation metric; for captioning, we employ recall rate as the metric. Baseline Model and Training Datasets. We adopt Qwen2.5VL [2] as the baseline model to validate the effectiveness of our data and ERP-RoPE. The model is trained using 30% of the LLaVA SFT data [41] and the full Dense360 dataset. Implementation Details. We initialize Dense360VLM with the weights of Qwen2.5VL-3B-Instruct, implemented via the Xtuner codebase [13]. The visual encoder is frozen, while the Qwen2.5 LLM decoder is fine-tuned using LoRA [24]. We train Dense360VLM using 8 H20 GPUs.

# 5.1 Main Results

As shown in Tab. 3, we report the scores of the latest ChatGPT-4o, two foundational MLLMs [79, 2], and two specialized small-scale expert MLLMs developed from them on Dense360-Bench. Among them, SA2VA-4B [65] is developed based on InternVL2.5 [10], while Dense360VLM-3B is built upon Qwen2.5VL [2]. Here, we conduct a second round of post-training of SA2VA-4B using the Dense360 dataset to enable its comprehension of ERP image inputs. To test grounding capability, we first prompt Qwen2.5VL and InternVL3 to output textual grounding bounding boxes for given expressions, then use these bounding boxes to prompt SAM [54] to generate the corresponding masks. This process allows calculation of the mask IoU metric, which aligns with our verification pipeline (details in §3.2). Notably, SA2VA and Dense360VLM can directly output grounding masks. We report not only the overall scores of MLLMs on entities across all directions (omnidirection), but also their performance specifically on the front, back, left, and right directions, with particular emphasis on the back direction. In ERP images, entities in the back direction are split between the extreme left and right edges. Accurate comprehension of back-direction entities most directly reflects an MLLM's spatial understanding of panoramic scenes.
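For concreteness, the captioning recall described in §3 reduces to a simple computation over the judge model's yes/no verdicts (one per key phrase); the helper below is an illustrative sketch, not the benchmark's exact code:

```python
def keyphrase_recall(judge_answers):
    """Recall of key phrases covered by a predicted caption.

    `judge_answers` holds one boolean per key phrase extracted from the
    ground-truth caption: True if the judge model says the phrase is
    mentioned in the predicted caption.
    """
    if not judge_answers:
        return 0.0
    return sum(judge_answers) / len(judge_answers)
```

A coverage vector of [1, 0, 1, 0], for example, yields a recall of 0.5.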
The two open-source MLLMs [79, 2] achieve relatively low scores in both captioning and grounding, which may be attributed to their lack of training data on ERP images. The latest ChatGPT-4o demonstrates exceptional panoramic scene understanding capabilities, attaining a zero-shot captioning score of 46.42 that approaches the performance of the in-domain trained SA2VA-4B [65] and Dense360VLM-3B. Remarkably, ChatGPT-4o shows no performance degradation (it is even slightly higher at 49.33) in the challenging back direction compared to other directions. When trained on the Dense360 dataset, our Dense360VLM-3B outperforms SA2VA-4B in both captioning (51.78 vs 47.80) and grounding (76.81 vs 74.39). Furthermore, Dense360VLM mitigates the performance gap in back-direction captioning, benefiting from our specially designed ERP-RoPE for ERP image inputs.

Table 3: Benchmark results. MLLM$\dagger$ denotes an MLLM that has undergone a second round of post-training using our Dense360 dataset.

Table 4: The effectiveness of our Dense360 dataset and ERP-RoPE. Dense360-Caption denotes dense entity-level captions, Dense360-RefSeg refers to unique referring expressions, and Dense360-GCG represents entity-grounded panoramic scene descriptions.

# 5.2 Ablation Study and Analysis

To validate the effectiveness of our Dense360 dataset and ERP-RoPE, we conducted comprehensive ablation studies, as shown in Tab. 4. When using the full dataset, integrating ERP-RoPE significantly enhances the MLLM's capability to understand panoramic scenes. Specifically, through ERP-RoPE integration, Dense360VLM achieves a 5.92-point improvement in captioning performance and a 16.38-point boost in grounding accuracy. Under identical ERP-RoPE integration conditions, utilizing the complete Dense360 data yields the highest scores for both captioning and grounding tasks, demonstrating that our data components do not exhibit mutual suppression.
Multimodal Large Language Models (MLLMs) require comprehensive visual inputs to achieve dense understanding of the physical world. While existing MLLMs demonstrate impressive world understanding capabilities through limited field-of-view (FOV) visual inputs (e.g., 70 degrees), we take the first step toward dense understanding from omnidirectional panoramas. We first introduce an omnidirectional panorama dataset featuring a comprehensive suite of reliability-scored annotations. Specifically, our dataset contains 160K panoramas with 5M dense entity-level captions, 1M unique referring expressions, and 100K entity-grounded panoramic scene descriptions. Compared to multi-view alternatives, panoramas provide more complete, compact, and continuous scene representations through equirectangular projection (ERP). However, the use of ERP introduces two key challenges for MLLMs: i) spatial continuity along the circle of latitude, and ii) latitude-dependent variation in information density. We address these challenges through ERP-RoPE, a position encoding scheme specifically designed for panoramic ERP. In addition, we introduce Dense360-Bench, the first benchmark for evaluating MLLMs on omnidirectional captioning and grounding, establishing a comprehensive framework for advancing dense visual-language understanding in panoramic settings.
[ "cs.CV" ]
# 1 Introduction

In today's rapidly evolving information landscape, distinguishing fact from misinformation is becoming more challenging, especially with the rise of AI-generated content. Robust claim verification systems, leveraging NLP methods to automatically assess the veracity of claims (Glockner et al., 2022a,b; Thorne and Vlachos, 2018), are essential to ensure information reliability. Effective methods require not only accuracy but also transparency, necessitating strong reasoning to identify evidence and provide clear justifications (Pan et al., 2023).

Figure 1: Different claim verification paradigms: (a) unstructured text-based methods focusing on claim decomposition and sequential reasoning over text, (b) KG-based methods facing challenges in entity resolution and structured reasoning, and (c) ClaimPKG's unified framework with specialized modules for pseudo-subgraph generation, retrieval, and general reasoning.

Most existing verification approaches focus on unstructured text corpora, using techniques like chain-of-thought (CoT) reasoning (Wei et al., 2022) to break down claims for verification. Approaches like ProgramFC (Pan et al., 2023) and FOLK (Wang and Shu, 2023) employ modular pipelines to verify claims against text-based knowledge bases (Figure 1(a)). However, the inherent limitations of text representation pose challenges. Specifically, ambiguous entity references and complex multi-hop relationships make it difficult to perform rigorous verification against unstructured text. In contrast, Knowledge Graphs (KGs) provide structured relationships for effective reasoning (Luo et al., 2024; Sun et al., 2024), yet their use in claim verification remains limited. Existing KG-based approaches (Figure 1(b)) (Kim et al., 2023b; Zhou et al., 2019; Kim et al., 2023a) lack end-to-end solutions, often requiring pre-extracted entities via modules like entity or relation extraction.
Meanwhile, despite excelling at general reasoning, LLMs struggle with KG-specific tasks like entity resolution and multi-hop reasoning (Cao et al., 2021; Aly et al., 2021), suggesting the need for a system combining LLM capabilities with KG-based inference. Overall, solving claim verification problems is hindered by the following major limitations: (1) Entity Ambiguity: systems must accurately disambiguate entities within claims to identify relevant evidence (Aly et al., 2021); (2) Multi-hop Reasoning: complex claims often require reasoning across multiple pieces of evidence from different sources (Pan et al., 2023; Wang and Shu, 2023); and (3) Limited integration of KGs and LLMs: current approaches underexplore the potential of combining structured representations with the strong inference capabilities of LLMs (Kim et al., 2023a). To address these challenges, we propose ClaimPKG (Claim Verification using Pseudo-Subgraphs in Knowledge Graphs), a novel end-to-end framework that synergizes the adaptability and generalization strengths of LLMs with the structured and rigorous representation of KGs to enable robust and transparent claim verification. As specified in Figure 1(c), ClaimPKG operates through three phases: (1) Pseudo-Subgraph Generation: a KG-specialized lightweight LLM generates pseudo-subgraphs as representations of input claims under a Trie-based KG-entity constraint, ensuring the correctness of extracted entities; (2) Subgraph Retrieval: a retrieval algorithm treats the generated pseudo-subgraphs as queries to identify actual relevant KG subgraphs as evidence; and (3) General Reasoning: a general-purpose LLM reasons over the retrieved KG subgraphs to produce the verdict and human-readable justifications. Through extensive experiments on the FactKG dataset, ClaimPKG achieves state-of-the-art performance, demonstrating its effectiveness over various claim types with a small number of training samples.
Furthermore, its zero-shot generalizability to unstructured datasets (HoVer, FEVEROUS) highlights its robustness. Our contributions can be summarized as follows: (1) we introduce ClaimPKG, a holistic framework that integrates LLMs and KGs for accurate and interpretable claim verification, handling various types of claims in a unified manner; (2) we develop a lightweight specialized LLM with an accompanying decoding algorithm for pseudo-subgraph generation and pair it with general-purpose LLMs to achieve robust reasoning; and (3) we validate the effectiveness of ClaimPKG through extensive experiments, achieving state-of-the-art performance on structured benchmarks and generalizing to unstructured ones.

# 2 Related Work

Claim Verification Approaches. Claim verification systems utilize knowledge bases that can be categorized into unstructured and structured formats. In the unstructured domain, text-based verification methods predominate, with systems designed to verify claims against textual evidence, as demonstrated in the FEVER dataset (Thorne et al., 2018). Recent advances have focused on handling specialized verification scenarios, including ambiguous question-answer pairs (Park et al., 2022), detecting factual changes (Schuster et al., 2021), and processing multiple documents concurrently (Jiang et al., 2020). For structured verification, research has primarily focused on tables and graphs, with early work developing specialized architectures: graph neural networks for knowledge graph processing (Zhou et al., 2020), table-specific transformers (Herzig et al., 2020), and tree-structured decoders for hierarchical data (Wang et al., 2020).

Claim Verification over Knowledge Graphs (KGs). The emergence of Large Language Models (LLMs) has simplified direct reasoning over textual corpora for claim verification, as demonstrated by ProgramFC (Pan et al., 2023) and FOLK (Wang and Shu, 2023).
However, structured data sources like tables and graphs can provide more grounded and robust verification results (Kim et al., 2023b). Knowledge graphs are particularly advantageous as they enable explicit representation of reasoning processes through logical rules over nodes and edges. FactKG (Kim et al., 2023b) established a foundation in this direction by introducing a comprehensive dataset for evaluating modern verification methods. KG-GPT (Kim et al., 2023a) followed this work by demonstrating performance gains through a pipeline that performs sentence decomposition, subgraph retrieval, and logical inference. Additionally, while not directly addressing claim verification, StructGPT (Jiang et al., 2023) and RoG (Luo et al., 2024) achieved promising results in related tasks (e.g., Knowledge Base Question Answering) by collecting relevant evidence, such as subgraphs in KGs, then leveraging LLMs for complex reasoning in particular scenarios.

# 3 Preliminary

Knowledge Graph: A Knowledge Graph (KG) $\mathcal{G}$ represents facts as triplets of the form $t = (e, r, e^{\prime})$, where entities $e, e^{\prime} \in \mathcal{E}$ are connected by a relation $r \in \mathcal{R}$; a triplet can also be written as $r(e, e^{\prime})$.

Claim Verification: Given a claim $c$, a verification model $\mathcal{F}$ determines its veracity as a verdict $v \in \{\text{Supported}, \text{Refuted}\}$ based on an external knowledge base $\kappa$, while also providing a justification $j$ to explain the predicted label. This work specifically considers the scenario where $\kappa$ is structured as a Knowledge Graph $\mathcal{G}$, enabling reasoning over graph knowledge to infer $v$ and $j$. Formally, the verification process is defined as: $(v, j) = \mathcal{F}(c, \mathcal{G})$.
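The formulation above can be sketched as a minimal interface. This is an illustrative sketch only: `KnowledgeGraph`, `verify`, and the substring-based entity linking are hypothetical stand-ins for exposition, not the paper's implementation.

```python
from dataclasses import dataclass

@dataclass
class KnowledgeGraph:
    """A KG G represented as a set of (e, r, e') triplets."""
    triplets: set

    def entities(self):
        # The entity set E: every head or tail appearing in some triplet.
        return {e for t in self.triplets for e in (t[0], t[2])}

    def neighbors(self, entity):
        # All triplets in which `entity` appears as head or tail.
        return {t for t in self.triplets if entity in (t[0], t[2])}

def verify(claim: str, kg: KnowledgeGraph):
    """The verification model F: maps (c, G) to (verdict v, justification j).

    Placeholder logic (substring entity linking, 1-hop evidence); a real
    system implements the three ClaimPKG phases described in Section 4.
    """
    linked = {e for e in kg.entities() if e in claim}
    evidence = {t for e in linked for t in kg.neighbors(e)}
    verdict = "Supported" if evidence else "Refuted"
    justification = f"Linked {sorted(linked)}; {len(evidence)} evidence triplet(s)."
    return verdict, justification

kg = KnowledgeGraph({("Vedat Tek", "birthPlace", "Istanbul")})
verdict, justification = verify("Vedat Tek was born in Istanbul.", kg)
```

The point of the sketch is the signature: everything downstream refines how $S_c$ (the evidence inside `verify`) is chosen.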
Trie-based Constrained Decoding: A Trie (Wikipedia, 2025b) indexes predefined token sequences, where each root-to-node path represents a prefix. During LLM generation, this structure restricts token selection to valid Trie paths only, ensuring reliable output.

# 4 ClaimPKG

# 4.1 Formulation of ClaimPKG

We formulate the ClaimPKG framework using a probabilistic approach. Given a claim $c$ and a prebuilt KG $\mathcal{G}$, our objective is to model the distribution $p_{\theta}(v, j | c, \mathcal{G})$, where $v$ denotes the verdict and $j$ the justification. However, direct computation of this distribution is infeasible, as reasoning over the entire KG is impractical given its size. To address this, we propose to select $S_c$, a subgraph of $\mathcal{G}$ relevant to $c$ that contains the information necessary to derive our target distribution. Treating $S_c$ as a latent variable, $p_{\theta}(v, j | c, \mathcal{G})$ is decomposed as:

$$ p_{\theta}(v, j | c, \mathcal{G}) = \sum_{\mathcal{S}_c} p_{\theta}(v, j | c, \mathcal{S}_c) \, p_{\theta}(\mathcal{S}_c | c, \mathcal{G}) $$

where $p_{\theta}(\mathcal{S}_c | c, \mathcal{G})$ models the subgraph selection, and $p_{\theta}(v, j | c, S_c)$ models the generation of the verdict and justification given $S_c$. However, direct computation of $p_{\theta}(S_c | c, \mathcal{G})$ is challenging due to the modality mismatch between the input $c$ (text) and the target $S_c$ (graph structure), which hinders the direct application of retrieval methods for $\mathcal{S}_c$.
To bridge this gap, we decompose the subgraph selection into:

$$ p_{\theta}(S_c | c, \mathcal{G}) = \sum_{\mathcal{P}_c} p_{\theta}(S_c | \mathcal{P}_c, \mathcal{G}) \, p_{\theta}(\mathcal{P}_c | c, \mathcal{G}) $$

where $p_{\theta}(\mathcal{P}_c | c, \mathcal{G})$ models the generation of the graph representation $\mathcal{P}_c$, which we refer to as a "pseudo subgraph", from a textual claim $c$, and $p_{\theta}(S_c | \mathcal{P}_c, \mathcal{G})$ models the distribution over relevant subgraphs $S_c$ given $\mathcal{P}_c$. While Equations 1 and 2 establish our theoretical framework for ClaimPKG, computing exact probabilities by summing over all possible $(S_c, \mathcal{P}_c)$ pairs is intractable. To address this, we propose two approximations. (1) We infer the veracity using only the most relevant subgraph $S_c^{*}$:

$$ (v^{*}, j^{*}) \sim p_{\theta}(v, j | c, S_c^{*}) $$

(2) We assume each generated pseudo-subgraph is reasonable with high probability, allowing us to approximate the subgraph selection in Equation 2 as:

$$ S_c^{(i)} = \arg\max_{S_c} p_{\theta}(S_c | \mathcal{P}_c^{(i)}, \mathcal{G}) $$

where $\mathcal{P}_c^{(i)}$ is the $i$-th generated pseudo subgraph. We then construct $S_c^{*}$ by aggregating multiple sampled subgraphs, specifically $S_c^{*} = \bigcup_{i=1}^{N} S_c^{(i)}$. These approximations lead ClaimPKG to comprise three key modules, as depicted in Figure 2: (1) Pseudo Subgraph Generation to generate graph representations $\mathcal{P}_c$ given claim $c$; (2) Subgraph Retrieval to retrieve the relevant evidence subgraph $S_c^{*}$; and (3) General Reasoning to generate the final verdict $v$ and justification $j$.
The inference procedure is described as follows:

# Inference Procedure of ClaimPKG

Preprocessing: Index the KG $\mathcal{G}$ into an Entity Trie for efficient entity lookup.

1. Pseudo Subgraph Generation: Generate $N$ pseudo subgraphs $\mathbb{P}_c = \{\mathcal{P}_c^{(i)}\}_{i=1}^{N}$ from claim $c$, using Trie-based constraints.

2. Subgraph Retrieval: Use each pseudo subgraph in $\mathbb{P}_c$ to query the most relevant subgraph $\mathcal{S}_c^{(i)}$ in the KG $\mathcal{G}$, resulting in a set $\{\mathcal{S}_c^{(i)}\}_{i=1}^{N}$ following Equation 4, then aggregate them to form $S_c^{*} = \bigcup_{i=1}^{N} \mathcal{S}_c^{(i)}$.

3. General Reasoning: Employ a general-purpose LLM to reason about veracity, $(v^{*}, j^{*}) \sim p_{\theta}(v, j | c, S_c^{*})$, following Equation 3.

The subsequent sections provide details about each component in the ClaimPKG framework.

Figure 2: Overview of ClaimPKG on an example claim ("Khalid Mahmood is the leader of a city which was the birthplace of architect Vedat Tek, who designed 103 Colmore Row and I.C. Tower"), illustrating (1) Pseudo Subgraph Generation with the Specialized LLM and Entity Trie, (2) Subgraph Retrieval, and (3) General Reasoning, which produces the verdict (Refuted) with the justification that Vedat Tek designed I.C. Tower but not 103 Colmore Row, which John Madin designed.
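The three-step inference procedure above can be sketched at a high level. All names here (`claimpkg_infer`, `retrieve_subgraph`, the two stub model classes) are illustrative placeholders, not the released code; the stubs stand in for the specialized and general-purpose LLMs.

```python
def retrieve_subgraph(pseudo, kg):
    """Toy retrieval: return KG triplets that share an explicit (non-unknown)
    entity with the pseudo subgraph. The real algorithm is far richer."""
    explicit = {e for (h, _, t) in pseudo for e in (h, t) if not e.startswith("unknown_")}
    return {t for t in kg if explicit & {t[0], t[2]}}

class StubSpecializedLLM:
    """Stand-in for the fine-tuned model: each beam emits a pseudo subgraph
    (a list of triplets) extracted from the claim."""
    def generate(self, claim, trie=None, num_beams=5):
        return [[("unknown_0", "birth place", "Istanbul")]] * num_beams

class StubGeneralLLM:
    """Stand-in for the general-purpose reasoner producing (v, j)."""
    def reason(self, claim, evidence):
        verdict = "Supported" if evidence else "Refuted"
        return verdict, f"Considered {len(evidence)} evidence triplet(s)."

def claimpkg_infer(claim, kg, specialized_llm, general_llm, entity_trie=None, n_beams=5):
    """High-level sketch of the three-phase ClaimPKG inference."""
    # 1. Pseudo Subgraph Generation: Trie-constrained beam search yields N
    #    candidate graph representations P_c of the claim.
    pseudo_subgraphs = specialized_llm.generate(claim, trie=entity_trie, num_beams=n_beams)
    # 2. Subgraph Retrieval: each pseudo subgraph queries the KG for its most
    #    relevant subgraph (Equation 4); results are unioned into S_c*.
    evidence = set()
    for p in pseudo_subgraphs:
        evidence |= retrieve_subgraph(p, kg)
    # 3. General Reasoning: a general-purpose LLM reasons over (c, S_c*) to
    #    produce the verdict and justification (Equation 3).
    return general_llm.reason(claim, evidence)

kg = {("Vedat Tek", "birthPlace", "Istanbul"), ("John Madin", "designed", "103 Colmore Row")}
verdict, justification = claimpkg_infer(
    "An architect was born in Istanbul.", kg, StubSpecializedLLM(), StubGeneralLLM()
)
```

The union over beams mirrors the aggregation $S_c^{*} = \bigcup_{i} \mathcal{S}_c^{(i)}$: each beam contributes a candidate evidence subgraph, and the reasoner sees their union.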
# 4.2 Pseudo Subgraph Generation

The first step to effectively verify a claim is to understand its content thoroughly and represent it in a format compatible with the KG. Since evidence comes from the KG, representing claims in graph format is crucial: it captures hypothetical relations among entities in a form that enables direct comparison with KG subgraphs for evidence retrieval. However, this process faces two main challenges: (1) handling ambiguity resolution and multi-hop reasoning, and (2) ensuring accurate entity extraction from the claim.

Specialized LLM. To address the first challenge, the Pseudo Subgraph Generation module employs a lightweight model optimized for processing input claims. Following (Li et al., 2013; Miwa and Bansal, 2016), the model is trained to jointly extract entities and their corresponding relations from a claim $c$. Specifically, from $c$ the model constructs a pseudo subgraph $\mathcal{P}_c$ comprising triplets of the form head_entity relation tail_entity (illustrated in Figure 2). To ensure the generated subgraph can identify entities requiring ambiguity resolution and multi-hop reasoning, we employ a specialized annotation mechanism: when the claim references an entity indirectly (either without explicit naming or through relations to other entities), we denote it as unknown_$i$, with the index $i$ keeping track of different entities. This notation effectively signals the need for further disambiguation and reasoning within the KG in subsequent steps. Training details enabling this annotation strategy are presented in Appendix B.1.

Trie-Constrained Decoding. For the second challenge, we develop a constrained decoding algorithm with an Entity Trie inspired by (Cao et al., 2021). We construct a trie $\tau$ from the KG's entity set $\mathcal{E} = \{e_1, e_2, ...\}$.
The specialized LLM generates entities using special tokens $\langle e \rangle$ and $\langle / e \rangle$ to mark entity boundaries. When $\langle e \rangle$ is generated, the decoding process restricts token selection based on $\tau$ until $\langle / e \rangle$ is produced, ensuring all generated entities exist in the KG. Outside such boundaries, the model generates relations by sampling from the unconstrained original token distribution. This mechanism ensures entity reliability while preserving flexible relation extraction (Edge et al., 2024).

Multiple Representations. To capture different semantic views of a claim, we employ beam search along with the described sampling strategy, which is shown to improve the coverage of extracted triplets (Table 8), resulting in multiple representations $\mathbb{P}_c = \{\mathcal{P}_c^{(i)}\}_{i=1}^{N}$ for an input claim. In summary, each of the claim's graph representations satisfies the following properties: (1) it effectively captures the underlying graph structure of the claim, and (2) it correctly aligns with the KG's entities.

# 4.3 Subgraph Retrieval

The second component of ClaimPKG retrieves relevant KG subgraphs as evidence, using a dedicated algorithm that matches the pseudo-subgraphs $\mathcal{P}_c$ from the previous step to actual subgraphs in the KG. We present a high-level description of the algorithm here; its complete formulation is detailed in Appendix D. We categorize the triplets in a $\mathcal{P}_c$ into: (1) Incomplete triplets, where either the head or tail entity is marked as unknown, and (2) Complete triplets, where both head and tail entities are explicitly identified.

Relation Scoring Function: We define a function $\mathrm{Sim}(r_1, r_2)$ to quantify the similarity between two relations, where a higher score indicates greater similarity.
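The Trie-constrained entity decoding of Section 4.2 can be sketched at the word level. This is a simplification under stated assumptions: the actual model constrains tokenizer IDs between the $\langle e \rangle$ and $\langle / e \rangle$ markers, and `build_entity_trie`/`allowed_next_tokens` are illustrative names, not the paper's API.

```python
class TrieNode:
    def __init__(self):
        self.children = {}     # token -> TrieNode
        self.terminal = False  # True if a full entity name ends at this node

def build_entity_trie(entities):
    """Index KG entity names (given as token sequences) into a Trie."""
    root = TrieNode()
    for tokens in entities:
        node = root
        for tok in tokens:
            node = node.children.setdefault(tok, TrieNode())
        node.terminal = True
    return root

def allowed_next_tokens(trie, prefix, end_token="</e>"):
    """Tokens the decoder may emit after `prefix` inside an entity span:
    only continuations of valid Trie paths, plus the closing marker once
    the prefix already spells a complete entity."""
    node = trie
    for tok in prefix:
        if tok not in node.children:
            return set()       # prefix left the Trie: no valid continuation
        node = node.children[tok]
    allowed = set(node.children)
    if node.terminal:
        allowed.add(end_token)
    return allowed

# Toy "vocabulary": word-level tokens for two hypothetical KG entities.
trie = build_entity_trie([("Vedat", "Tek"), ("Vedat", "Tek", "House")])
```

At each decoding step the LLM's logits would be masked so that only tokens in `allowed_next_tokens(trie, prefix)` can be sampled, which is what guarantees every generated entity exists in $\mathcal{E}$.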
This function can be instantiated via various mechanisms (e.g., embedding similarity, re-ranking, fuzzy matching).

Incomplete Triplets Retrieval: Our goal is to identify evidence (actual triplets in the KG) that resolves entities marked as unknown and their respective relations with explicit entities in the pseudo-subgraphs. First, for a $\mathcal{P}_c$, we group triplets sharing the same unknown entity $u$ into a group $g$ (e.g., in Figure 2, triplets associated with unknown_0 are grouped together). Subsequently, for each group $g$ characterized by the unknown entity $u$, we denote $\mathcal{E}_u = \{e_{u1}, ..., e_{un}\}$ as the entities directly connected to $u$ in the pseudo-subgraph $\mathcal{P}_c$, and $\mathcal{R}_u = \{r_{u1}, ..., r_{un}\}$ as the relations from $u$ to the corresponding entities in $\mathcal{E}_u$. In $g$, for each explicit entity $e_{ui} \in \mathcal{E}_u$, we first retrieve a candidate set $C_{ui} = \{e_{i1}^{c}, ..., e_{im}^{c}\}$ containing all entities connected to $e_{ui}$ in the KG, then collect all candidate sets into $\mathcal{C}_u = \{C_{u1}, ..., C_{un}\}$. To determine the best candidates for resolving $u$, we propose an Entity Scoring mechanism based on two assumptions: (1) since $u$ has pseudo relations with all entities in $\mathcal{E}_u$, a candidate $e^{c}$ connected to more entities in $\mathcal{E}_u$ is more likely to resolve $u$; and (2) because all information related to $e_{ui}$ and $u$ is crucial for verifying the initial claim, each candidate set $C_{ui}$ must contribute to the final verification.
Note that an entity can appear in multiple candidate sets, hence we compute a "global" score for each $e_{ij}^{c}$ in a candidate set $C_{ui}$:

$$ score(e_{ij}^{c}) = \sum_{r \in R_{ij}^{u}} \mathrm{Sim}(r_{ui}, r) $$

with $R_{ij}^{u} = \bigcup_{i=1}^{|\mathcal{E}_u|} \{ r(e_{ui}, e_{ij}^{c}) \mid e_{ij}^{c} \in C_{ui} \}$, the set of all relations across the candidate sets in $\mathcal{C}_u$ that connect $e_{ij}^{c}$ with an $e_{ui}$. Subsequently, to construct the set $T_u$ of triplets most relevant to a group $g$, we employ the following ranking function:

$$ T_u = \bigcup_{i=1}^{|\mathcal{C}_u|} \operatorname*{argmax}_{triplet, k_1} \{ \pi_{ij} \mid j \leq |C_{ui}| \} $$

where $\pi_{ij}$ is simply $score(e_{ij}^{c})$ and $(triplet, k_1)$ denotes the selection of the top $k_1$ triplets $(e_{ui}, r, e^{c})$ with the highest global scores from each set in $\mathcal{C}_u$. While Equation 5 ensures that candidates appearing in multiple candidate sets with high similarity scores are prioritized, Equation 6 ensures that every entity in $\mathcal{E}_u$ contributes at least $k_1$ triplets; both make use of assumptions (1) and (2).

Complete Triplets Retrieval: For each triplet $(e_1, r, e_2)$ in a $\mathcal{P}_c$, we first find the top $k_2$ most similar relations between $e_1$ and $e_2$ in the KG $\mathcal{G}$ using the Sim function.
If no direct connection exists (e.g., "103 Colmore Row" and "Vedat Tek" in Figure 2), the triplet is decomposed into two: $(e_1, r, \mathrm{unknown}_0)$ and $(\mathrm{unknown}_0, r, e_2)$. These are then handled via Incomplete Triplets Retrieval.

Subgraph Union: In summary, for an input claim $c$, multiple pseudo subgraphs are generated, containing complete and incomplete triplets. These triplets are processed to handle shared unknown entities and identified entities that are not connected in the KG $\mathcal{G}$, and are used to query $\mathcal{G}$ for relevant triplets. All retrieved evidence triplets are aggregated into a final subgraph $S_c^{*}$, serving as the evidence for the final component of ClaimPKG.

# 4.4 General Reasoning

The General Reasoning module concludes the ClaimPKG framework by determining claim veracity through reasoning over the input claim $c$ and the retrieved evidence subgraph $S_c^{*}$. As complex tasks, especially claim verification, require deliberate chain-of-thought reasoning (Jiang et al., 2020; Wang et al., 2023), we use a general-purpose LLM to analyze $c$ and $S_c^{*}$. Using carefully designed prompts (Figure 6), the module generates a natural language justification $j$ and verdict $v$. Expanding Equation 3, this step is formalized as:

$$ p_{\theta}(v, j | c, S_c^{*}) = p_{\theta}(v | c, j, S_c^{*}) \, p_{\theta}(j | c, S_c^{*}) $$

where $p_{\theta}(j | c, S_c^{*})$ produces the justification and $p_{\theta}(v | c, j, S_c^{*})$ determines the veracity. This model-agnostic design enables integration with state-of-the-art LLMs (e.g., Llama, Qwen, and GPT-4) for zero-shot reasoning.

# 5 Experiments

# 5.1 Experimental Setup

Datasets. Our primary benchmark is the FactKG dataset (Kim et al., 2023b), designed for claim verification over the DBpedia KG (Lehmann et al., 2015).
It consists of 108K claims grounded in DBpedia and labelled as either SUPPORTED or REFUTED. The claims span five distinct categories: One-hop, Conjunction, Existence, Multi-hop, and Negation, each posing unique challenges. For evaluation, we randomly sample 2K claims from the test set, ensuring balanced representation across categories while maintaining computational efficiency. To assess the generalizability of ClaimPKG beyond structured benchmarks, we also evaluate on HoVer (Jiang et al., 2020) and FEVEROUS (Aly et al., 2021), two widely used unstructured-text benchmarks requiring multi-hop reasoning and evidence aggregation from Wikipedia. Additional dataset statistics are provided in Appendix A.

Metrics. We use Accuracy as the primary metric, along with Entity Correctness to measure whether a claim's extracted entities are valid in the KG. Additionally, for the FactKG dev set, we report Claim Structure Coverage, which quantifies the proportion of triplets from the original claim's graph structure successfully reconstructed by our pipeline. We refer readers to Appendix C for more details.

Annotation. For brevity, we use Llama-3B, Llama-70B, and Qwen-72B to refer to Llama-3.2-3B, Llama-3.3-70B, and Qwen2.5-72B respectively. The * symbol denotes models fine-tuned for pseudo subgraph generation. Full model names are used when necessary.

Baselines. We compare ClaimPKG with recent KG-based claim verification methods: Zero-shot CoT (Wei et al., 2022) prompts LLMs to generate rationales and verdicts without accessing the KG; GEAR (Zhou et al., 2019), originally designed for text-based verification, employs graph-based evidence aggregation with multiple aggregators to capture multi-evidence dependencies, using BERT for language representation and adapted to KG settings following (Kim et al., 2023b); and KG-GPT (Kim et al., 2023a), a pioneering work that combines LLMs and KGs through a structured pipeline of sentence segmentation, graph retrieval, and logic inference.
Notably, unlike the baselines, which receive pre-identified claim entities along with the claim as input, our method processes entities in an end-to-end pipeline.

Implementation. For a comprehensive evaluation, we evaluate baselines on three model series: Llama 3 (Meta, 2024), Qwen 2.5 (Qwen, 2024), and GPT-4o-mini (OpenAI, 2024). In ClaimPKG, we configure the Specialized LLM to generate multiple pseudo subgraphs using a beam size of 5. For the Subgraph Retrieval algorithm, we adopt an embedding-based approach, leveraging BGE-Large-EN-v1.5 (Xiao et al., 2023) to compute dot-product similarity for the Relation Scoring Function. We set the primary hyperparameters to $k_1 = 3$ and $k_2 = 1$. Detailed justification is provided in Appendix C.

# 5.2 Results and Analysis

We present the main experimental results in this section and additional findings in Appendix C.

(RQ1): How Does ClaimPKG Perform Against the Baselines? Table 1 compares the accuracy (%) of ClaimPKG with baselines across the claim categories of FactKG. Key observations include: (1) Direct inference using LLMs with CoT reasoning significantly underperforms evidence-based methods, with the best average score reaching only 69.07%, highlighting that despite LLM advancements, evidence retrieval remains crucial. (2) KG-GPT integrates knowledge graphs with LLMs, but its best average score reaches only 74.70% (Llama-70B Few-shot), falling short of GEAR's fine-tuned model at 76.65%. This suggests that while LLMs excel at language tasks, they require specific adaptation for KG processing. (3) ClaimPKG, with the strongest configuration (Llama-3B* + Llama-70B) and constrained by the Entity Trie for valid KG entity generation, achieves a 12-point improvement over KG-GPT and 9 points over GEAR.
It particularly excels at multi-hop reasoning, demonstrating strong performance across Llama-3 and Qwen-2.5 backbones through effective structured evidence retrieval and KG integration.

(RQ2): How Do Different Components Affect Performance? To evaluate the impact of each component in ClaimPKG, we conduct ablation studies on the following components, maintaining Llama-3B* as the Specialized LLM and Llama-70B as the General LLM.

Entity-Trie Constraint. We remove the Entity-Trie constraint to assess its necessity. Compared to the full setup, this reduces entity extraction correctness from 100% to 87.5%, and overall performance from 84.64% to 82.72%.

Table 1: Performance (accuracy %) comparison of ClaimPKG with baselines on the 5 claim categories of the FactKG dataset, along with their average scores.

Specialized LLM. When replacing the specialized LLM with a few-shot prompting strategy using Llama-70B, a much larger general-purpose LLM, entity correctness further declines to 86.52%, causing overall performance to drop to 77.63%. These results demonstrate that even with examples, general-purpose LLMs struggle to produce outputs with the desired graph structure, emphasizing the importance of the specialized LLM in generating pseudo subgraphs.

Incomplete Retrieval. Removing the Incomplete Triplets Retrieval function, which forces the retrieval algorithm to query evidence using only complete triplets, causes a significant average performance drop of nearly 20% compared to the full setup, showing that the complete graph structure of input claims is essential for optimal performance.

(RQ3): Robustness and Generalization of ClaimPKG? To assess ClaimPKG's robustness, we vary model backbones, examine zero-shot generalizability, analyze the effect of training data size, and conduct error analysis.

Model Backbones.
We evaluate different LLM architectures for both the Specialized and General LLMs (Table 2). For General LLMs, we test various model sizes (7B to 70B parameters) using retrieved KG triplets as input. For Specialized LLMs, we experiment with different small fine-tuned backbones and few-shot prompt templates (Figure 7), while keeping Llama-3.3-70B as the fixed General LLM. Results in Table 2 show that larger General LLMs (GPT-4o-Mini, Llama-3.3-70B) outperform smaller ones (Qwen-2.5-7B, Llama-3.1-8B) by up to 8 points, highlighting the role of model capacity in aggregating subgraph evidence. Notably, a fine-tuned 1B Specialized LLM outperforms its general 70B counterpart, demonstrating the effectiveness of fine-tuning for processing graph data. This supports the need to combine powerful General LLMs with adapted Specialized LLMs for optimal performance.

Table 2: Performance on Different Backbones.

Table 3: Zero-shot transferred performance on other unstructured benchmarks over the Supported-predicted samples, along with Supported-prediction rates.

Zero-shot Generalizability. To assess ClaimPKG's zero-shot generalizability, we test transfer to the HoVer (Jiang et al., 2020) and FEVEROUS (Aly et al., 2021) datasets. Using DBpedia (Lehmann et al., 2015) as the knowledge source, we evaluate with trained Specialized LLMs (Llama-3.2-3B and Qwen-2.5-3B) while keeping Llama-3.3-70B as the General LLM. Since external datasets may contain claims outside DBpedia's coverage, making it difficult to distinguish knowledge gaps from actual verification failures of ClaimPKG in Refuted cases, we analyze only samples predicted as Supported. As shown in Table 3, ClaimPKG predicts Supported for only 12.5%–15.7% of samples, indicating limited knowledge overlap with DBpedia. However, on these samples, ClaimPKG outperforms Llama-3.3-70B's zero-shot CoT inference by 4% accuracy on both datasets, demonstrating robust transfer of its reasoning patterns to unseen data.
Figure 3: Average accuracy (test set) and claim structure coverage (dev set) of the Llama-3.2-3B and Qwen-2.5-3B Specialized LLMs as the training sample size varies from 0.1K to 10K.

Training Data Size. To study the impact of training data on the Specialized LLM, we vary the number of training samples from 0.1K to 10K, using two configurations: Llama-3.2-3B and Qwen-2.5-3B as the Specialized LLM, keeping Llama-3.3-70B as the General LLM. We evaluate performance using two metrics: average accuracy on the test set and claim structure coverage on the dev set. As shown in Figure 3, the Specialized LLMs achieve satisfactory accuracy (Llama-3.2-3B: 79.35%, Qwen-2.5-3B: 77.62%) with just 100 training samples, demonstrating efficiency and low training costs for KG adaptation. While both structure coverage and accuracy improve up to 5K samples, coverage plateaus thereafter and accuracy begins to decline, indicating overfitting, where excessive training data reduces generalizability.

# 5.3 Interpretability and Error Analysis

ClaimPKG improves claim verification performance while enhancing interpretability. Representative outputs of ClaimPKG (Figure 12, Appendix E) illustrate its ability to capture claim structure and provide well-grounded justifications. Notably, when refuting claims, it explicitly presents contradicting evidence, ensuring transparent reasoning. To further assess reliability, we conducted a human analysis of 200 incorrect predictions from FactKG, categorizing errors (Figure 13, Appendix E) into: Claim Structure Errors, failing to capture the underlying claim structure; Retrieval Errors, failing to retrieve the evidence necessary for claim verification; and Reasoning Errors, incorrect logical inferences by the general LLM when judging the verdict. Specifically, there are 0 (0%) Claim Structure Errors, 57 (28.5%) Retrieval Errors, and 143 (71.5%) Reasoning Errors.
These results suggest that, given multiple chances (beams) to generate pseudo-subgraphs, the Specialized LLM can effectively capture the structural representation of claims. However, the general-purpose LLM, despite its strong reasoning capabilities, still struggles with certain complex reasoning scenarios that require specific handling. Moreover, retrieval errors highlight cases where additional implicit reasoning is necessary; we hypothesize that direct subgraph retrieval failed to provide a comprehensive picture of the required evidence. These findings point to future improvements focused on enhancing retrieval inference and refining reasoning for complex claim verification over structured knowledge.

# 5.4 Scalability of ClaimPKG

ClaimPKG maintains scalability and adaptability in dynamic knowledge environments. After training the Specialized LLM on a domain (e.g., Wikipedia), the system remains decoupled from the underlying Knowledge Graph (KG); only the Entity-Trie component interfaces directly with the data. Consequently, when the KG is updated, ClaimPKG requires merely an update of the corresponding entities within the Entity Trie, ensuring an efficient adaptation process.
Integrating knowledge graphs (KGs) to enhance the reasoning capabilities of large language models (LLMs) is an emerging research challenge in claim verification. While KGs provide structured, semantically rich representations well-suited for reasoning, most existing verification methods rely on unstructured text corpora, limiting their ability to effectively leverage KGs. Additionally, despite possessing strong reasoning abilities, modern LLMs struggle with multi-step modular pipelines and reasoning over KGs without adaptation. To address these challenges, we propose ClaimPKG, an end-to-end framework that seamlessly integrates LLM reasoning with structured knowledge from KGs. Specifically, the main idea of ClaimPKG is to employ a lightweight, specialized LLM to represent the input claim as pseudo-subgraphs, guiding a dedicated subgraph retrieval module to identify relevant KG subgraphs. These retrieved subgraphs are then processed by a general-purpose LLM to produce the final verdict and justification. Extensive experiments on the FactKG dataset demonstrate that ClaimPKG achieves state-of-the-art performance, outperforming strong baselines in this research field by 9%-12% accuracy points across multiple categories. Furthermore, ClaimPKG exhibits zero-shot generalizability to unstructured datasets such as HoVer and FEVEROUS, effectively combining structured knowledge from KGs with LLM reasoning across various LLM backbones.