_last_ process in S to write X, it will not see its own id, and will not stop.2

To obtain a name, a process starts at (r,c) = (0,0), and repeatedly executes the splitter at its current position (r,c). If the splitter returns right, it moves to (r,c+1); if down, it moves to (r+1,c); if stop, it stops, and returns the name of its current splitter. We'll show below that a k-by-k grid is enough: no process goes past the end of the grid. We say that a process reaches row r if it ever runs a splitter in row r, and similarly for reaching column c.

Claim 4. The number of processes that reach row r+1 is either zero or at most one less than the number of processes that reach row r.

Proof. Another way to phrase the claim is that if at least one process reaches row r, at least one process stays there. Suppose some process reaches a splitter at (r,c). Then by Claim 1, at least one process at (r,c) stops or goes right. If it stops, we are done. If it doesn't, use induction on c to show that at least one process eventually stops or goes right until it reaches (r,k-1). In either case at least one process doesn't go down, proving the claim.

Claim 5. The number of processes that reach column c+1 is either zero or at most one less than the number of processes that reach column c.

Proof. Use the same proof as
for Claim 4, replacing rows with columns and using Claim 2 instead of Claim 1.

Iterating the preceding claims, the number of processes that reach row r is bounded by k-r for r ≤ k, and similarly the number that reach column c is bounded by k-c for c ≤ k. It follows that no process reaches row k or column k, which is why we don't include these in the grid. (Moir and Anderson give a more sophisticated argument showing the stronger claim that the number of processes that reach the splitter at (r,c) is bounded by k-(r+c); this shows that only those splitters with r+c < k are actually needed, reducing the number of splitters to k(k+1)/2.) In particular, this means that every process stops somewhere in the grid and adopts a name. By Claim 3, these names are all unique.

The time complexity of this algorithm is O(k): each process spends at most 4 operations on each splitter, and no process goes through more than 2k splitters. (The actual bound proved in the paper is 4(k-1).)

If we don't know k in advance, we can still guarantee names of size O(k²) by carefully arranging them so that each k-by-k subgrid contains the first k² names. We still have to choose our grid to be large enough for the largest k we might actually encounter.

7.3. Getting to 2k-1 names in bounded space
-------------------------------------------

From before, we have an algorithm that will get 2k-1 names for k processes out of N possible processes when run using O(N) space (for the enormous snapshots). To turn this into a bounded-space algorithm, run Moir-Anderson first to get down to k² names, then run the previous algorithm (in O(k²) space) using these new names as
original names. Since we didn't prove anything about the time complexity of the humongous-snapshot algorithm, we can't say much about the time complexity of this combined one. Moir and Anderson suggest using an O(Nk²) algorithm of Borowsky and Gafni to get O(k⁴) time for the combined algorithm. Faster algorithms have probably appeared since then.

* * *

CategoryDistributedComputingNotes

1. Moir and Anderson call them _one-time building blocks_, but the name _splitter_ has become standard in subsequent work.

2. It's worth noting that the last process still might not stop, because some later process not in S might overwrite its id first.
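The grid traversal described in this section can be sketched in a few lines of Python. This is a sequential simulation (each process runs to completion before the next starts), not a concurrent implementation; the `Splitter` class and `get_name` function are our own illustrative names. Under this particular schedule every process stops in row 0, but the claims above bound what can happen under any interleaving.

```python
# Sequential simulation of Moir-Anderson grid renaming. Each splitter keeps a
# "last writer" register X and a one-shot door; a process stops at a splitter
# only if it closed the door and its write to X survived.

RIGHT, DOWN, STOP = "right", "down", "stop"

class Splitter:
    def __init__(self):
        self.X = None           # last process id written here
        self.door_open = True   # one-shot door

    def visit(self, pid):
        self.X = pid            # announce ourselves
        if not self.door_open:
            return RIGHT        # someone got through the door first
        self.door_open = False  # close the door behind us
        if self.X == pid:
            return STOP         # our write survived: claim this splitter
        return DOWN             # we closed the door but lost the race on X

def get_name(grid, pid):
    """Walk the grid from (0,0); return the coordinates of the claimed splitter."""
    r = c = 0
    while True:
        move = grid[r][c].visit(pid)
        if move == STOP:
            return (r, c)
        elif move == RIGHT:
            c += 1
        else:
            r += 1

k = 4
grid = [[Splitter() for _ in range(k)] for _ in range(k)]
names = [get_name(grid, pid) for pid in range(k)]
assert len(set(names)) == k   # names are unique (Claim 3)
```

In this fully sequential schedule, process i claims splitter (0, i): each earlier splitter in row 0 already has its door closed, so the process keeps moving right until it finds a fresh one.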
where to watch all the other Marvel Television properties (Agents of SHIELD, the Netflix Marvel shows, The Runaways, etc.) and promotional releases like webisodes should be viewed in the order. Relevant non-MCU films like the earlier Spider-Man franchises are also addressed there. Rather than hammer you all with that wall of text, the document may be viewed at the link below, but please leave any questions or comments here (or feel free to DM me directly) and I'll be happy to answer.

Narrative Order Explanations & Additional Viewing

**1) Captain America: The First Avenger**
**2) Iron Man**
**3) The Incredible Hulk**
**4) Iron Man 2**
**5) Thor**
**6) The Avengers**
**7) Thor: The Dark World**
**8) Guardians of the Galaxy**
**9) Captain America: The Winter Soldier**
**10) Iron Man 3**
**11) Guardians of the Galaxy, Vol. 2**
* **12) I am Groot, S1 & S2 (D+) -** _(can be skipped)_
**13) Avengers: Age of Ultron**
**14) Dr. Strange**
**15) Ant-Man**
**16) Captain America: Civil War**
**17) Black Widow** _(view this post-credit scene after **Avengers: Endgame** to avoid a potential spoiler)_
**18) Black Panther**
**19) Spider-Man: Homecoming**
**20) Ant-Man & the Wasp** _(view this post-credit scene after **Avengers: Infinity War** to avoid a potential spoiler)_
**21) Thor: Ragnarök**
**22) Avengers: Infinity War**
**23) Captain Marvel**
**24) Avengers: Endgame**
**25) WandaVision, S1 (D+)**
**26) Falcon & the Winter Soldier, S1 (D+)**
**27) Spider-Man: Far from Home**
**28) Spider-Man: No Way Home**
**29) Hawkeye, S1 (D+)**
**30) Guardians of the Galaxy Holiday Special (D+)**
**31) Loki, S1 (D+)**
**32) Ant-Man & the Wasp: Quantumania**
**33) Loki, S2 (D+)**
* **34) What If..?, S1 (D+) -** _(can be skipped)_
* **35) What If..?, S2 (D+) -** _(can be skipped)_
* **36) What If..?, S3 (D+) -** _(can be skipped)_
* **37) Deadpool & Wolverine -** _(can be skipped)_
**38) Thor: Love and Thunder**
**39) Guardians of the Galaxy, Vol. 3**
**40) Black Panther: Wakanda Forever**
* **41) Moon Knight, S1 (D+) -** _(can be skipped for now)_
**42) Shang-Chi: Legend of the Ten Rings**
**43) Dr. Strange: Multiverse of Madness**
**44) Agatha All Along, S1 (D+)**
**45) Eternals**
* **46) She-Hulk: Attorney at Law, S1 (D+) -** _(can be skipped for now)_
**47) Ms. Marvel, S1 (D+)**
**48) The Marvels**
**49) Secret Invasion, S1 (D+)**
* **50) Echo, S1 (D+) -** _(can be skipped for now)_
**51) Daredevil: Born Again, S1 (D+)**
**52) Daredevil: Born Again, S2 (D+)**
**53) Captain America: Brave New World**
**54) Ironheart, S1 (D+)**
**55) Thunderbolts**
**56) Fantastic Four: First Steps**

Not included yet because it doesn't have a home in the continuity:

* **xx) Werewolf by Night**
# A Formal Framework for Complex Event Processing

# Alejandro Grez
Pontificia Universidad Católica de Chile, Santiago, Chile
Millennium Institute for Foundational Research on Data, Santiago, Chile
ajgrez@uc.cl

# Cristian Riveros
Pontificia Universidad Católica de Chile, Santiago, Chile
Millennium Institute for Foundational Research on Data, Santiago, Chile
cristian.riveros@uc.cl

# Martín Ugarte
Millennium Institute for Foundational Research on Data, Santiago, Chile
martin@martinugarte.com

Abstract

Complex Event Processing (CEP) has emerged as the unifying field for technologies that require processing and correlating distributed data sources in real-time. CEP finds applications in diverse domains, which has resulted in a large number of proposals for expressing and processing complex events. However, existing CEP languages lack a clear semantics, making them hard to understand and generalize. Moreover, there are no general techniques for evaluating CEP query languages with clear performance guarantees. In this paper we embark on the task of giving a rigorous and efficient framework to CEP. We propose a formal language for specifying complex events, called CEL, that contains the main features used in the literature and has a denotational and compositional semantics. We also formalize the so-called selection strategies, which had only been presented as by-design extensions to existing frameworks. With a well-defined semantics at hand, we discuss how to efficiently process complex events by evaluating CEL formulas with unary filters. We start by studying the syntactical properties of CEL and propose rewriting optimization techniques for simplifying the evaluation of formulas. Then, we introduce a formal computational model for CEP, called complex event automata (CEA), and study how to compile CEL formulas with unary filters into CEA. 
Furthermore, we provide efficient algorithms for evaluating CEA over event streams using constant time per event followed by constant-delay enumeration of the
results. Finally, we gather the main results of this work to present an efficient and declarative framework for CEP.

2012 ACM Subject Classification Information systems → Data streams; Theory of computation → Data structures and algorithms for data management; Theory of computation → Database query languages (principles); Theory of computation → Automata extensions

Keywords and phrases Complex event processing, streaming evaluation, constant delay enumeration

Digital Object Identifier 10.4230/LIPIcs.ICDT.2019.5

Acknowledgements Cristian and Alejandro have been funded by FONDECYT grant 11150653, and together with Martín they were partially supported by the Millennium Institute for Foundational Research on Data (IMFD). M. Ugarte also acknowledges support from the Brussels Capital Region – Innoviris (project SPICES). We also thank the anonymous referees for their helpful comments.

> © Alejandro Grez, Cristian Riveros, and Martín Ugarte; licensed under Creative Commons License CC-BY. 22nd International Conference on Database Theory (ICDT 2019). Editors: Pablo Barcelo and Marco Calautti; Article No. 5; pp. 5:1–5:18. Leibniz International Proceedings in Informatics, Schloss Dagstuhl – Leibniz-Zentrum für Informatik, Dagstuhl Publishing, Germany.

# 1 Introduction

Complex Event Processing (CEP) has emerged as the unifying field of technologies for detecting situations of interest under high-throughput data streams. In scenarios like Network Intrusion Detection [39], Industrial Control Systems [29] or Real-Time Analytics [42], CEP systems aim to efficiently process arriving data, giving timely insights for implementing reactive responses to complex events. Prominent examples of CEP systems from academia and industry include SASE [49], EsperTech [1], Cayuga [26], TESLA/T-Rex [22, 23], among others (see [24] for a survey). 
The main focus of these systems has been on practical issues like scalability, fault tolerance, and distribution, with the objective of making CEP systems
applicable to real-life scenarios. Other design decisions, like query languages, are generally adapted to match computational models that can efficiently process data (see for example [50]). This has produced new data management and optimization techniques, generating promising results in the area [49, 1]. Unfortunately, as has been claimed several times [27, 51, 22, 11], CEP query languages lack a simple denotational semantics, which makes them difficult to understand, extend, or generalize. Their semantics are generally defined either by examples [36, 4, 21] or by intermediate computational models [49, 44, 40]. Although there are frameworks that introduce formal semantics (e.g. [26, 15, 7, 22, 8]), they fall short of laying the foundations of CEP languages. For instance, some of them have unintuitive behavior (e.g. sequencing is non-associative), or are severely restricted (e.g. nesting operators is not supported). One symptom of this problem is that iteration, which is fundamental in CEP, has not yet been defined successfully as a compositional operator. Since iteration is difficult to define and evaluate, it is usually restricted by not allowing nesting or reuse of variables [49, 26]. As a result of these problems, CEP languages are generally cumbersome. The lack of a simple denotational semantics also makes query languages difficult to evaluate. A common factor in CEP systems is sophisticated heuristics [50, 22] that cannot be replicated in other frameworks. Further, optimization techniques are usually proposed at the architecture level [37, 26, 40], which does not allow for a unifying optimization theory. Many CEP frameworks use automata-based models [26, 15, 7] for query evaluation, but these models are usually complicated [40, 44
], informally defined [26], or non-standard [22, 5]. In practice this implies that, although finite-state automata are a recurring approach in CEP, there is no general evaluation strategy with clear performance guarantees. Given this scenario, the goal of this paper is to give solid foundations to CEP systems in terms of query language and query evaluation. Towards these goals, we first provide a formal language that allows for expressing the most common features of CEP systems, namely sequencing, filtering, disjunction, and iteration. We introduce complex event logic (CEL), a logic with well-defined compositional and denotational semantics. We also formalize the so-called selection strategies, an important notion of CEP that is usually discussed directly [50, 26] or indirectly [15] in the literature but has not been formalized at the language level. We then focus on the evaluation of CEL. We propose a formal evaluation framework that considers three building blocks: (1) syntactic techniques for rewriting CEL queries, (2) a well-defined intermediate evaluation model, and (3) efficient translations and algorithms to evaluate this model. Regarding the rewriting techniques, we introduce the notions of well-formed and safe formulas in CEL, and show that these restrictions are relevant for query evaluation. Further, we give a general result on rewriting CEL formulas into the so-called LP-normal form, a normal form for dealing with unary filters. For the intermediate evaluation model, we introduce a formal computational model for the regular fragment of CEL, called complex event automata (CEA). We show that this model is closed under I/O-determinization and provide translations for CEL formulas with unary filters into CEA. More importantly, we show an efficient algorithm for evaluating CEA with clear performance guarantees: constant time per tuple followed by constant-delay enumeration of the output. Finally,
we bring together our results to present a formal framework for evaluating CEL.

Related work. Active Database Management Systems (ADSMS) and Data Stream Management Systems (DSMS) process data streams, and thus they are usually associated with CEP systems. Both technologies aim to execute relational queries over dynamic data [19, 2, 9]. In contrast, CEP systems see data streams as a sequence of events where the arrival order is the guide for finding patterns inside streams (see [24] for a comparison between ADSMS, DSMS, and CEP). Therefore, DSMS query languages (e.g. CQL [10]) are incomparable with our framework since they do not focus on CEP operators like sequencing and iteration. Query languages for CEP are usually divided into three approaches [24, 11]: logic-based, tree-based, and automata-based models. Logic-based models have their roots in temporal logic or event calculus, and usually have a formal, declarative semantics [8, 12, 20] (see [13] for a survey). However, this approach either does not include iteration as an operator or does not model the output explicitly. Furthermore, their evaluation techniques rely on logic inference mechanisms which are radically different from our approach. Tree-based models [38, 35, 1] have also been used for CEP, but their language semantics is usually non-declarative and their evaluation techniques are based on cost models, similar to relational database systems. Automata-based models are close to what we propose in this paper. Previous proposals (e.g. SASE [5], NextCEP [44], DistCED [40]) do not rely on a denotational semantics; their output is defined by intermediate automata models. This implies that either iteration cannot be nested [5] or its semantics is confusing [44]. Other
proposals (e.g. CEDR [15], TESLA [22], PBCED [7]) are defined with a formal semantics but they do not include iteration. An exception is Cayuga [25], but its language does not allow reusing variables, and sequencing is non-associative, which results in an unintuitive semantics. Our framework is comparable to these systems, but provides a well-defined language that is compositional, allowing arbitrary nesting of operators. Moreover, we present the first evaluation of CEP queries that guarantees constant time per event and constant-delay enumeration of the output. Finally, there has been some research on theoretical aspects of CEP, e.g. in axiomatization of temporal models [48], privacy [32], and load shedding [31]. This literature does not study the semantics and evaluation of CEP and, therefore, is orthogonal to our work.

Organization. We give an intuitive introduction to CEP and our framework in Section 2. In Sections 3 and 4 we formally present our logic and selection strategies. The syntactic structure of the logic is studied in Section 5. The computational model and compilation of formulas are studied in Section 6. In Section 7 we develop efficient evaluation techniques, and in Section 8 we present a framework summarizing our results. Future work is discussed in Section 9. Due to space limitations, all proofs are deferred to the journal version.

# 2 Events in action

We start by presenting the main features and challenges of CEP. The examples used in this section will also serve throughout the paper as running examples. In a CEP setting, events arrive in a streaming fashion to a system that must detect certain patterns [24]. For the purpose of illustration, assume there is a stream produced by wireless sensors positioned in a farm, whose main objective is to detect fires. As
a first scenario, assume that there are three sensors, and each of them can measure both temperature (in Celsius degrees) and relative humidity (as the percentage of vapor in the air). Each sensor is assigned an id in {0, 1, 2}. The events produced by the sensors consist of the id of the sensor and a measurement of temperature or humidity. For brevity, we write T(id, tmp) for an event reporting temperature tmp from the sensor with id id, and similarly H(id, hum) for events reporting humidity. Figure 1 depicts such a stream: each column is an event, and the value row is the temperature or humidity if the event is of type T or H, respectively.

> Figure 1 A stream S of events measuring temperature and humidity. "value" contains degrees and humidity for T- and H-events, respectively.

| index | 0  | 1  | 2  | 3  | 4  | 5  | 6  | 7  | 8  | … |
|-------|----|----|----|----|----|----|----|----|----|---|
| type  | H  | T  | H  | H  | T  | T  | T  | H  | H  | … |
| id    | 2  | 0  | 0  | 1  | 1  | 0  | 1  | 1  | 0  | … |
| value | 25 | 45 | 20 | 25 | 40 | 42 | 25 | 70 | 18 | … |

The patterns to be detected are generally specified by domain experts. For the sake of illustration, assume that the position of sensor 0 is particularly prone to fires, and it has been detected that a temperature measurement above 40 degrees Celsius followed by a humidity measurement of less than 25% represents a fire with high probability. Let us intuitively explain how we can express this as a pattern (also called a formula) in our framework:

ϕ1 = (T AS x ; H AS y) FILTER (x.tmp > 40 ∧ y.hum <= 25 ∧ x.id = 0 ∧ y.id = 0)

This formula is asking for two events, one of type temperature (T) and
one of type humidity (H). The events of type temperature and humidity are given names x and y, respectively, and the two events are filtered to select only those pairs (x, y) representing a high temperature followed by a low humidity measured by sensor 0. What should the evaluation of ϕ1 over the stream in Figure 1 return? A first important remark is that event streams are noisy, and one does not expect the events matching a formula to be contiguous in the stream. A CEP engine therefore needs to be able to dismiss irrelevant events. The semantics of the sequencing operator (;) will thus allow for arbitrary events to occur in between the events of interest. A second remark is that in CEP the set of events matching a pattern, called a complex event, is particularly relevant to the end user. Every time a formula matches a portion of the stream, the final user should retrieve the events that compose that portion of the stream. This means that the evaluation of a formula over a stream should output a set of complex events. In our framework, each complex event will be the set of indexes (stream positions) of the events that witness the matching of a formula. Specifically, let S[i] be the event at position i of the stream S. What we expect for the output of formula ϕ1 consists of sets {i, j} such that S[i] is of type T, S[j] is of type H, i < j, and they satisfy the conditions expressed after the FILTER. By inspecting Figure 1, we can see that the pairs satisfying these conditions are {1, 2}, {1, 8}, and {5, 8}. Formula ϕ1 illustrates the two most elemental features of CEP, namely sequencing and filtering [
24, 9, 50, 2, 17]. But although it detects a set of possible fires, it restricts the order in which the two events occur: the temperature must be measured before the humidity. Naturally, this could prevent the detection of a fire in which the humidity was measured first. This motivates the introduction of disjunction, another common feature in CEP engines [24, 9]. To illustrate, we extend ϕ1 by allowing events to appear in arbitrary order.

ϕ2 = [(T AS x ; H AS y) OR (H AS y ; T AS x)] FILTER (x.tmp > 40 ∧ y.hum <= 25 ∧ x.id = 0 ∧ y.id = 0)

The OR operator allows for any of the two patterns to be matched. The result of evaluating ϕ2 over S (Figure 1) is the same as the evaluation of ϕ1 plus the complex event {2, 5}.

The previous formulas show how CEP systems raise alerts when a certain complex event occurs. However, from a wider scope the objective of CEP is to retrieve information of interest from streams. For example, assume that we want to see how temperature changes at the location of sensor 1 when there is an increase of humidity. A problem here is that we do not know a priori the number of temperature measurements; we need to capture an unbounded number of events. The iteration operator + (a.k.a. Kleene closure) [24, 9, 30] is introduced in most CEP frameworks to solve this problem. This operator introduces many difficulties in the semantics of CEP languages. For example, since events are not required to occur contiguously, the nesting of + is particularly tricky and most frameworks simply disallow it (see [49, 10,
26]). Coming back to our example, the formula for measuring temperatures whenever an increase of humidity is detected by sensor 1 is:

ϕ3 = [H AS x ; (T AS y FILTER y.id = 1)+ ; H AS z] FILTER (x.hum < 30 ∧ z.hum > 60 ∧ x.id = 1 ∧ z.id = 1)

Intuitively, variables x and z witness the increase of humidity from less than 30% to more than 60%, and y captures temperature measurements between x and z. Note that the filter for y is included inside the + operator. Some frameworks allow declaring variables inside a + and filtering them outside that operator. Although it is possible to define the semantics for that syntax, this form of filtering makes the definition of nested + difficult. Another semantic subtlety of the + operator is the association of y to an event. Given that we want to match the event (T AS y FILTER y.id = 1) an unbounded number of times, how should the events associated with y occur in the complex events generated as output? Associating different events to the same variable during evaluation has proven to make the semantics of CEP languages hard to extend. In Section 3, we introduce a semantics that allows nesting + and associating variables (inside + operators) to different events across repetitions. Let us now explain the evaluation of ϕ3 over S (Figure 1). The only two humidity events satisfying the top-most filter are S[3] and S[7], and the events in between that satisfy the inner filter are S[4] and S[6]. As expected, {3, 4, 6, 7} is part of the output. However, there are other complex events in the output. Since, as discussed, there might be irrelevant events between relevant ones, the semantics of
+ must allow for skipping arbitrary events. This implies that the complex events {3, 6, 7} and {3, 4, 7} are also part of the output. The previous discussion raises an interesting question: are users interested in all complex events? Are some complex events more informative than others? Coming back to the output of ϕ3 ({3, 6, 7}, {3, 4, 7}, and {3, 4, 6, 7}), one can easily argue that the largest complex event is the most informative, since all events are contained in it. The complex events output by ϕ1 deserve a more thorough analysis. In this scenario, the pairs that have the same second component (e.g., {1, 8} and {5, 8}) represent a fire occurring at the same place and time, so one could argue that only one of the two is necessary. For cases like the above, it is common to find CEP systems that restrict the output by using so-called selection strategies (see for example [49, 50, 22]). Selection strategies are a fundamental feature of CEP. Unfortunately, they have only been presented as heuristics applied to particular computational models, and thus their semantics are given by algorithms and are hard to generalize. The next selection strategy (called skip-till-next-match in [49, 50]), which models the idea of outputting only those complex events that can be generated without skipping relevant events, deserves special mention. Although the semantics of next has been mentioned in previous papers (e.g. [15]), it is usually underspecified [49, 50] or complicates the semantics of other operators [26]. In Section 4, we formally define a set of selection strategies including next.

Before formally presenting our
framework, we illustrate one more common feature of CEP, namely correlation. Correlation is introduced by filtering events with predicates that involve more than one event. For example, consider that we want to see how temperature changes at some location whenever there is an increase of humidity, as in ϕ3. What we need is a pattern where all the events are produced by the same sensor, but that sensor is not necessarily sensor 1. This is achieved by the following pattern:

ϕ4 = [H AS x ; (T AS y FILTER y.id = x.id)+ ; H AS z] FILTER (x.hum < 30 ∧ z.hum > 60 ∧ x.id = z.id)

Notice that here the filters contain the predicates x.id = y.id and x.id = z.id that force all events to have the same id. Although this might seem simple, the evaluation of formulas that correlate events introduces new challenges. Intuitively, ϕ4 is more complex because the id of x must be remembered in order to compare it with future incoming events. This behavior is clearly not “regular” and it will not be captured by a finite-state model [33, 43]. In this paper, we study and characterize the regular core of CEP systems. In Sections 6 and 8 we focus on formulas without correlation. As we will see, the formal analysis of this fragment already presents non-trivial challenges, which is why we defer the analysis of formulas like ϕ4 to future work. It is important to mention that the semantics of our language (including selection strategies) is general and includes more involved filters like correlation.

# 3 A query language for CEP

Having discussed the common operators and features of CEP, we proceed to formally introduce CEL (Complex Event Logic), our pattern language for capturing complex events.
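As a sanity check on the running examples of Section 2, the stated outputs of ϕ1–ϕ3 over the Figure 1 stream can be reproduced with a naive brute-force evaluator. This is a sketch of ours, not the paper's evaluation algorithm; the stream encoding and function names are our own.

```python
from itertools import combinations

# The stream of Figure 1: one (type, id, value) triple per index 0..8.
S = [("H", 2, 25), ("T", 0, 45), ("H", 0, 20), ("H", 1, 25), ("T", 1, 40),
     ("T", 0, 42), ("T", 1, 25), ("H", 1, 70), ("H", 0, 18)]

def eval_phi1(S):
    # ϕ1: a T-event of sensor 0 with tmp > 40, followed (not necessarily
    # contiguously) by an H-event of sensor 0 with hum <= 25.
    return {frozenset({i, j})
            for i, (ti, si, vi) in enumerate(S) if ti == "T" and vi > 40 and si == 0
            for j, (tj, sj, vj) in enumerate(S) if tj == "H" and vj <= 25 and sj == 0
            if i < j}

def eval_phi2(S):
    # ϕ2: like ϕ1, but the two events may occur in either order.
    hot = {i for i, (t, s, v) in enumerate(S) if t == "T" and v > 40 and s == 0}
    dry = {j for j, (t, s, v) in enumerate(S) if t == "H" and v <= 25 and s == 0}
    return {frozenset({i, j}) for i in hot for j in dry if i != j}

def eval_phi3(S):
    # ϕ3: H (hum < 30, id 1) ; (T with id 1)+ ; H (hum > 60, id 1),
    # where the + may skip arbitrary events in between.
    out = set()
    for x, (tx, ix, vx) in enumerate(S):
        if not (tx == "H" and vx < 30 and ix == 1):
            continue
        for z in range(x + 1, len(S)):
            tz, iz, vz = S[z]
            if not (tz == "H" and vz > 60 and iz == 1):
                continue
            ys = [y for y in range(x + 1, z) if S[y][0] == "T" and S[y][1] == 1]
            for r in range(1, len(ys) + 1):       # + matches at least once
                for picked in combinations(ys, r):
                    out.add(frozenset({x, z, *picked}))
    return out

assert eval_phi1(S) == {frozenset({1, 2}), frozenset({1, 8}), frozenset({5, 8})}
assert eval_phi2(S) == eval_phi1(S) | {frozenset({2, 5})}
assert eval_phi3(S) == {frozenset({3, 4, 7}), frozenset({3, 6, 7}),
                        frozenset({3, 4, 6, 7})}
```

The assertions match the complex events derived in the text; the exponential enumeration inside `eval_phi3` is exactly the blow-up that motivates the selection strategies of Section 4.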
Schemas, Tuples and Streams. Let A be a set of attribute names and D a set of values. A database schema R is a finite set of relation names, where each R ∈ R is associated with a tuple of attributes in A denoted by att(R). If R is a relation name, then an R-tuple is a function t : att(R) → D. The type of an R-tuple t is R, and we denote this by type(t) = R. For any relation name R, tuples(R) denotes the set of all possible R-tuples, i.e., tuples(R) = {t : att(R) → D}. Similarly, for any database schema R, tuples(R) = ⋃_{R ∈ R} tuples(R). Given a schema R, an R-stream S is an infinite sequence S = t0 t1 . . . where ti ∈ tuples(R). When R is clear from the context, we refer to S simply as a stream. Given a stream S = t0 t1 . . . and a position i ∈ N, the i-th element of S is denoted by S[i] = ti, and the sub-stream ti ti+1 . . . of S is denoted by Si. Note that we consider that the time of each event is given by its index, and defer a more elaborate model for future work.

Let X be a set of variables. Given a schema R, a predicate of arity n is an n-ary relation P over tuples(R), i.e. P ⊆ tuples(R)^n. An atom is an expression P(x̄) where P is an n-ary predicate and x̄ ∈ X^n. As usual, we express predicates as formulas over attributes, and use x.a to refer to the attribute a of the tuple represented by x. For example, P(x) := x.hum < 30 is an atom and P is
the predicate of all tuples that have a humidity attribute of less than 30. We consider that checking whether a tuple t is in a predicate P takes time O(|t|), and that every atom P(x̄) has constant size (and thus the size of a formula is independent of the type of predicates). We assume a fixed set of predicates P (i.e. defined by the CEP system). Moreover, we assume that P is closed under intersection, union, and complement, and that P contains the predicate P_R(x) := type(x) = R for checking whether a tuple is an R-tuple, for every R ∈ R.

CEL syntax. Now we proceed to give the syntax of what we call the core of CEL (core-CEL for short), a logic inspired by the operations described in the previous section. This language contains the most essential CEP features. The set of formulas in core-CEL, or core formulas for short, is given by the following grammar:

ϕ := R AS x | ϕ FILTER P(x̄) | ϕ OR ϕ | ϕ ; ϕ | ϕ+

where R is a relation name, x is a variable in X, and P(x̄) is an atom in P. For example, all formulas in Section 2 are CEL formulas. Throughout the paper we use ϕ FILTER (P(x̄) ∧ Q(ȳ)) or ϕ FILTER (P(x̄) ∨ Q(ȳ)) as syntactic sugar for (ϕ FILTER P(x̄)) FILTER Q(ȳ) or (ϕ FILTER P(x̄)) OR (ϕ FILTER Q(ȳ)), respectively. Unlike existing frameworks, we do not restrict the syntax, allowing for arbitrary nesting (in particular of +).

CEL semantics. We proceed to define the semantics of core formulas, for which we need to introduce some further notation. A complex event C is
defined as a non-empty and finite set of indices. As mentioned in Section 2, a complex event contains the positions of the events that witness the matching of a formula over a stream; moreover, complex events are the final output of evaluating a formula over a stream. We denote by |C| the size of C, and by min(C) and max(C) the minimum and maximum element of C, respectively. Given two complex events C1 and C2, C1 ⋅ C2 denotes the concatenation of the two complex events, that is, C1 ⋅ C2 := C1 ∪ C2 whenever max(C1) < min(C2), and is undefined otherwise.

In core-CEL formulas, variables are only used to filter and select particular events, i.e. they are not retrieved as part of the output. As the examples in Section 2 suggest, we are only concerned with finding the events that compose the complex events, and not with which position corresponds to which variable. The reason behind this is that the operator + allows for repetitions, and therefore variables under (possibly nested) + operators would have a special meaning, particularly for filtering. This discussion motivates the following definitions. Given a formula ϕ, we denote by var(ϕ) the set of all variables mentioned in ϕ (including filters), and by vdef(ϕ) all variables defined in ϕ by a clause of the form R AS x. Furthermore, vdef+(ϕ) denotes all variables in vdef(ϕ) that are defined outside the scope of all + operators. For example, for ϕ = (T AS x ; (H AS y)+) FILTER z.id = 1 we have that var(ϕ) = {x, y, z}, vdef(ϕ) = {x, y}, and vdef+(ϕ) = {x}. Finally, a valuation is a function ν : X → N. Given a finite set of variables
U ⊆ X and two valuations ν1 and ν2, the valuation ν1[ν2/U] is defined by ν1[ν2/U](x) = ν2(x) if x ∈ U, and ν1[ν2/U](x) = ν1(x) otherwise. We are ready to define the semantics of a core-CEL formula ϕ. Given a complex event C and a stream S, we say that C is in the evaluation of ϕ over S under valuation ν (written C ∈ ⟦ϕ⟧(S, ν)) if one of the following conditions holds:

- ϕ = R AS x, C = {ν(x)}, and type(S[ν(x)]) = R.
- ϕ = ψ FILTER P(x1, . . . , xn), C ∈ ⟦ψ⟧(S, ν), and (S[ν(x1)], . . . , S[ν(xn)]) ∈ P.
- ϕ = ψ1 OR ψ2, and C ∈ ⟦ψ1⟧(S, ν) or C ∈ ⟦ψ2⟧(S, ν).
- ϕ = ψ1 ; ψ2, and there are C1 ∈ ⟦ψ1⟧(S, ν) and C2 ∈ ⟦ψ2⟧(S, ν) such that C = C1 · C2.
- ϕ = ψ+, and there exists ν′ such that C ∈ ⟦ψ⟧(S, ν[ν′/U]) or C ∈ ⟦ψ ; ψ+⟧(S, ν[ν′/U]), where U = vdef+(ψ).

There are a couple of important remarks here. First, the valuation ν can be defined over a superset of the variables mentioned in the formula. This is important for sequencing (;) because we require the complex events from both sides to be produced with the same valuation. Second, when we evaluate a subformula of the form ψ+, we carry the values of variables defined outside the subformula. For example, the subformula (T AS y FILTER y.id = x.id)+ of ϕ4 does not define the variable x. However, from the definition of the semantics we see
ICDT 2019 5:8 A Formal Framework for Complex Event Processing

that x will be already assigned (because R AS x occurs outside the subformula). This is precisely where other frameworks fail to formalize iteration: without this construct it is not easy to correlate the variables inside + with the ones outside, as we illustrated with ϕ4. As previously discussed, in core-CEL variables are just used for comparing attributes with FILTER, but are not relevant for the final output. In consequence, we say that C belongs to the evaluation of ϕ over S (denoted C ∈ ⟦ϕ⟧(S)) if there is a valuation ν such that C ∈ ⟦ϕ⟧(S, ν). As an example, the complex events presented in Section 2 are indeed the outputs of ϕ1 to ϕ3 over the stream in Figure 1.

# 4 Selection strategies

Matching complex events is a computationally intensive task. As the examples in Section 2 suggest, the main reason behind this is that the number of complex events can grow exponentially in the size of the stream, forcing systems to process large numbers of candidate outputs. In order to speed up the matching process, it is common to restrict the set of results [18, 49, 50]. Unfortunately, most proposals in the literature restrict outputs by introducing heuristics into particular computational models without describing how the semantics are affected. For a more general approach, we introduce selection strategies (or selectors) as unary operators over core-CEL formulas. Formally, we define four selection strategies called strict (STRICT), next (NXT), last (LAST) and max (MAX). STRICT and NXT are motivated by previously introduced operators [49] under the names of strict-contiguity and skip-till-next-match, respectively. LAST
and MAX are useful selection strategies from a semantic point of view. We define each selection strategy below, giving the motivation and formal semantics.

STRICT. As the name suggests, STRICT (strict-contiguity) keeps only the complex events that are contiguous in the stream. To motivate this, recall that formula ϕ1 in Section 2 detects complex events composed of a temperature above 40 degrees followed by a humidity of less than 25%. As already argued, in general one could expect other events between x and y. However, it could be the case that this pattern is of interest only if the events occur contiguously in the stream, or perhaps the stream has been preprocessed by other means and irrelevant events have already been thrown out. For this purpose, STRICT reduces the set of outputs by selecting only strictly consecutive complex events. Formally, for any CEL formula ϕ we have that C ∈ ⟦STRICT(ϕ)⟧(S, ν) holds if C ∈ ⟦ϕ⟧(S, ν) and for every i, j ∈ C, if i < k < j then k ∈ C (i.e., C is an interval). In our running example, STRICT(ϕ1) would only produce {1, 2}, although {1, 8} and {5, 8} are also outputs of ϕ1 over S.

NEXT. The second selector, NXT, is similar to the previously proposed operator skip-till-next-match [49]. The motivation behind this operator comes from a heuristic that consumes a stream skipping those events that cannot participate in the output, but matching patterns in a greedy manner that selects only the first event satisfying the next element of the query. In [49], the authors gave the definition of this strategy just as “a further relaxation is to remove the contiguity requirements: all irrelevant events will be skipped until the next relevant event is
read” (*). In practice, skip-till-next-match is defined by an evaluation algorithm that greedily adds an event to the output whenever a sequential operator is used, or adds as many events as possible whenever an iteration operator is used. The fact that the semantics is only defined by an algorithm requires a user to understand the algorithm in order to write meaningful queries. In other words, this operator speeds up the evaluation by sacrificing the clarity of the semantics. To overcome this problem, we formalize the intuition behind (*) based on a special order over complex events. As we will see later, this allows us to speed up the evaluation process as much as skip-till-next-match while providing clear and intuitive semantics. Let C1 and C2 be complex events. The symmetric difference between C1 and C2 (written C1 △ C2) is the set of all elements in either C1 or C2 but not in both. We say that C1 ≤next C2 if either C1 = C2 or min(C1 △ C2) ∈ C2. For example, {5, 8} ≤next {1, 8} since the minimum element of {5, 8} △ {1, 8} = {1, 5} is 1, which is in {1, 8}. Note that this is intuitively similar to skip-till-next-match, as we are selecting the first relevant event. An important property is that ≤next forms a total order among complex events, implying the existence of a minimum and a maximum over any finite set of complex events.

Lemma 1. ≤next is a total order between complex events.

We can now define the semantics of NXT: for a CEL formula ϕ we have C ∈ ⟦NXT(ϕ)⟧(S, ν) if C ∈ ⟦ϕ⟧(S, ν) and for every complex event C′ ∈ ⟦ϕ⟧(
S, ν), if max(C) = max(C′) then C′ ≤next C. In other words, C must be the ≤next-maximum among all matches that end at max(C). In our running example, we have that {1, 8} matches NXT(ϕ1) but {5, 8} does not. Furthermore, {3, 4, 6, 7} matches NXT(ϕ4) while {3, 4, 7} and {3, 6, 7} do not. Note that we compare outputs that have the same final position. This way, complex events are discarded only when there is a preferred complex event triggered by the same last event.

LAST. The NXT selector is motivated by the computational benefit of skipping irrelevant events in a greedy fashion. However, from a semantic point of view it might not be what a user wants. For example, if we consider again ϕ1 and stream S (Section 2), we know that every complex event in NXT(ϕ1) will contain event 1. In this sense, the NXT strategy selects the oldest complex event for the formula. We argue that a user might actually prefer the opposite, i.e. the most recent explanation for the matching of a formula. This is the idea captured by LAST. Formally, the LAST selector is defined exactly as NXT, but replacing the order ≤next with ≤last: if C1 and C2 are two complex events, then C1 ≤last C2 if either C1 = C2 or max(C1 △ C2) ∈ C2. For example, {1, 8} ≤last {5, 8}. In our running example, LAST(ϕ1) would select the most recent temperature and humidity that explain the matching of ϕ1 (i.e. {5, 8}), which might be a better explanation for a possible fire. Surprisingly, we show in Section 7 that LAST enjoys the same good computational properties as NXT, even though
it does not come from a greedy heuristic like NXT does.

MAX. A more ambitious selection strategy is to keep the maximal complex events in terms of set inclusion, which is arguably more useful because these complex events are the most informative. Formally, given a CEL formula ϕ we say that C ∈ ⟦MAX(ϕ)⟧(S, ν) holds iff C ∈ ⟦ϕ⟧(S, ν) and for all C′ ∈ ⟦ϕ⟧(S, ν), if max(C) = max(C′) and C ⊆ C′, then C = C′ (i.e., C is maximal under set inclusion among the matches ending at the same position). Coming back to ϕ1, the MAX selector will output both {1, 8} and {5, 8}, given that both complex events are maximal in terms of set inclusion. In contrast, formula ϕ3 produced {3, 6, 7}, {3, 4, 7}, and {3, 4, 6, 7}; MAX(ϕ3) will only produce {3, 4, 6, 7} as output, which is the maximal complex event. It is interesting to note that if we evaluate both NXT(ϕ3) and LAST(ϕ3) over the stream, we will also get {3, 4, 6, 7} as the only output, illustrating that NXT and LAST also yield complex events with maximal information. We have formally presented the foundations of a language for recognizing complex events, and how to restrict the outputs of this language in meaningful ways. Next we study practical aspects of the CEL syntax that impact how efficiently formulas can be evaluated.

# 5 Syntactic analysis of CEL

We now study the syntactic form of CEL formulas. We define well-formed and safe formulas, which are syntactic restrictions that characterize semantic properties of interest. Then, we define a convenient normal form and show that any formula can be rewritten in this form.

Syntactic restrictions of
formulas. Although CEL has well-defined semantics, there are some formulas whose semantics can be unintuitive. Consider for example the formula ϕ5 = (H AS x) FILTER (y.tmp ≤ 30). Here, x will naturally be bound to the only element in a complex event, but y will not add a new position to the output. By the semantics of CEL, a valuation ν for ϕ5 must assign a position to y that satisfies the filter, but that position is not restricted to occur in the complex event. Moreover, y is not necessarily bound to any of the events seen up to the last element, and thus a complex event could depend on future events. For example, if we evaluate ϕ5 over our running example S (Figure 1), we have that {2} ∈ ⟦ϕ5⟧(S), but this depends on the event at position 6. This means that to evaluate this formula we potentially need to inspect events that occur after all events composing the output complex event have been seen, an arguably undesired situation. To avoid this problem, we introduce the notion of well-formed formulas. As the previous example illustrates, this requires defining where variables are bound by a subformula of the form R AS x. The set of bound variables of a formula ϕ is denoted by bound(ϕ) and is recursively defined as follows:

- bound(R AS x) = {x}
- bound(ψ FILTER P(x̄)) = bound(ψ)
- bound(ψ1 OR ψ2) = bound(ψ1) ∩ bound(ψ2)
- bound(ψ1 ; ψ2) = bound(ψ1) ∪ bound(ψ2)
- bound(ψ+) = ∅
- bound(SEL(ψ)) = bound(ψ)

where SEL is any selection strategy. We say that a CEL formula ϕ is well-formed if for every subformula of the form ψ FILTER P(x̄) and every x ∈ x̄,
there is another subformula ψx such that x ∈ bound(ψx) and ψ is a subformula of ψx. This definition allows for filters with variables defined in a wider scope. For example, formula ϕ4 in Section 2 is well-formed although it has the non-well-formed formula (T AS y FILTER y.id = x.id)+ as a subformula. One can argue that it would be desirable to restrict users to writing only well-formed formulas. Indeed, the well-formed property can be checked efficiently by a syntactic parser, and users should understand that all variables in a formula must be correctly defined. Given that well-formed formulas have a well-defined variable structure, from now on we restrict our analysis to well-formed formulas. Another issue for CEL is that the reuse of variables can easily produce unsatisfiable formulas. For example, the formula ψ = T AS x ; T AS x is not satisfiable (i.e. ⟦ψ⟧(S) = ∅ for every S) because variable x cannot be assigned to two different positions in the stream. However, we do not want to be too conservative and disallow the reuse of variables in the whole formula (otherwise formulas like ϕ2 in Section 2 would not be permitted). This motivates the notion of safe CEL formulas. We say that a CEL formula is safe if for every subformula of the form ϕ1 ; ϕ2 it holds that vdef+(ϕ1) ∩ vdef+(ϕ2) = ∅. For example, all CEL formulas in this paper are safe except for the formula ψ above. The safety notion is a mild restriction that helps in evaluating CEL, and it can easily be checked at parsing time. However, safe formulas are a subclass of CEL, and it could be the case that they do not capture the full language. We show that this is not
the case. Formally, we say that two CEL formulas ϕ and ψ are equivalent, denoted ϕ ≡ ψ, if ⟦ϕ⟧(S) = ⟦ψ⟧(S) for every stream S.

Theorem 2. Given a core-CEL formula ϕ, there is a safe formula ϕ′ such that ϕ ≡ ϕ′ and |ϕ′| is at most exponential in |ϕ|.

By this result, we can restrict our analysis to safe formulas without loss of generality. Unfortunately, we do not know whether the exponential size of ϕ′ is unavoidable. We conjecture that it is, but we do not yet know the corresponding lower bound.

LP-normal form. Now we study how to rewrite CEL formulas to simplify the evaluation of unary filters. Intuitively, filter operators in a CEL formula can become difficult to handle for a query engine. To illustrate this, consider again formula ϕ1 in Section 2. Syntactically, this formula states “find an event x followed by an event y, and then check that they satisfy the filter conditions”. However, we would like an execution engine to consider only those events x with id = 0 that represent a temperature above 40 degrees; only afterwards should the possible matching events y be considered. In other words, formula ϕ1 can be restated as:

ϕ′1 = [(T AS x) FILTER (x.tmp > 40 ∧ x.id = 0)] ; [(H AS y) FILTER (y.hum ≤ 25 ∧ y.id = 0)]

This example motivates the locally parametrized normal form (LP-normal form). Let U be the set of all predicates P ∈ P of arity 1 (i.e. P ⊆ tuples(R)). We say that a formula ϕ is in LP-normal form if, for every subformula ϕ′ FILTER P(x̄) of ϕ with P ∈ U, it
holds that x̄ = {x} and ϕ′ = R AS x for some R and x. In other words, all filters containing unary predicates are applied directly to the definitions of their variables. For instance, formula ϕ′1 is in LP-normal form while formulas ϕ1 and ϕ2 are not. Note that non-unary predicates are not restricted, and they can be used anywhere in the formula. One can easily see that having formulas in LP-normal form is an advantage for an evaluation engine, because it can filter out some events as soon as they arrive. However, formulas that are not in LP-normal form can still be very useful for declaring patterns. To illustrate this, consider the formula:

ϕ6 = (T AS x) ; ((T AS y FILTER x.temp ≥ 40) OR (H AS y FILTER x.temp < 40))

Here, the FILTER operator works like a conditional statement: if the x-temperature is at least 40, then the following event should be a temperature, and a humidity event otherwise. This type of conditional statement can be very useful, but also hard to evaluate. Fortunately, the next result shows that one can always rewrite a formula into LP-normal form, incurring in the worst case an exponential blow-up in the size of the formula.

Theorem 3. Let ϕ be a CEL formula. Then there is a CEL formula ψ in LP-normal form such that ϕ ≡ ψ, and |ψ| is at most exponential in |ϕ|.

The importance of this result and Theorem 2 will become clear in the next sections, where we show that safe formulas in LP-normal form have good properties for evaluation. As with Theorem 2, we do not know whether the exponential blow-up is unavoidable, and we leave this for future work.

# 6 A computational
model for CEL

In this section, we introduce a formal computational model for evaluating CEL formulas, called complex event automata (CEA for short). As in classical database management systems (DBMS), it is useful to have a formal model that stands between the query language and the evaluation algorithms, in order to simplify the analysis and optimization of the whole evaluation process. There are several examples of this approach, like regular expressions and finite state automata [33, 6], and SQL and relational algebra [3, 41]. Here, we propose CEA as the intermediate evaluation model for CEL, and show later how to compile any (unary) CEL formula into a CEA. As their name suggests, complex event automata (CEA) are an extension of finite state automata (FSA). The first difference from FSA comes from handling streams instead of words: a CEA runs over a stream of tuples, unlike FSA, which run over words of a certain alphabet. The second difference arises directly from the first by the need to process tuples, which can take infinitely many different values, in contrast to the finite input alphabet of FSA. To handle this, our model is extended in the same way as symbolic finite automata (SFA) [47]. SFAs are finite state automata in which the alphabet is described implicitly by a boolean algebra over the symbols. This allows automata to work with a possibly infinite alphabet and, at the same time, use finite state memory for processing the input. CEA are extended analogously, which is reflected in transitions labeled by unary predicates over tuples. The last difference addresses the need to generate complex events instead
of boolean answers. A well-known extension of FSA are finite state transducers [16], which are capable of producing an output whenever an input element is read. Our computational model follows the same approach: CEA are allowed to generate and output complex events while reading a stream. Recall from Section 5 that U is the subset of unary predicates of P. Let ●, ○ be two symbols. A complex event automaton (CEA) is a tuple A = (Q, ∆, I, F) where Q is a finite set of states, ∆ ⊆ Q × (U × {●, ○}) × Q is the transition relation, and I, F ⊆ Q are the sets of initial and final states, respectively. Given a stream S = t0 t1 . . . , a run ρ of A over S is a sequence of transitions:

ρ : q0 −(P0/m0)→ q1 −(P1/m1)→ ⋯ −(Pn/mn)→ qn+1

such that q0 ∈ I, ti ∈ Pi, and (qi, Pi, mi, qi+1) ∈ ∆ for every i ≤ n. We say that ρ is accepting if qn+1 ∈ F and mn = ●. We denote by Runn(A, S) the set of accepting runs of A over S of length n. Further, events(ρ) is the set of positions where the run marks S, namely events(ρ) = {i ∈ [0, n] | mi = ●}. Intuitively, this means that when a transition is taken, if the transition carries the ● symbol then the current position of the stream is included in the output (similar to the execution of a transducer). Note that we require the last position of an accepting run to be marking, as otherwise an output could depend on future events (see the discussion about well-formed formulas in
Section 5). Given a stream S and n ∈ N, we define the set of complex events of A over S at position n as ⟦A⟧n(S) = {events(ρ) | ρ ∈ Runn(A, S)}, and the set of all complex events as ⟦A⟧(S) = ⋃n ⟦A⟧n(S). Note that ⟦A⟧(S) can be infinite, but ⟦A⟧n(S) is finite. Consider as an example the CEA A depicted in Figure 2. In this CEA, each transition P(x) | ● marks an H-tuple and each transition P′(x) | ● marks a T-tuple with temperature above 40. Note also that the transitions labelled TRUE | ○ allow A to skip tuples of the stream arbitrarily. Then, for every stream S, ⟦A⟧(S) is the set of all complex events that begin and end with an H-tuple and also contain some of the T-tuples with temperature higher than 40. It is important to stress that CEA are designed to be an evaluation model for the unary fragment of CEL (a formal definition is presented in the next paragraph). Several computational models have been proposed for complex event processing [26, 40, 49, 44], but most of them are informal and non-standard extensions of finite state automata. In our framework, we take a step back compared to previous proposals and define a simple but powerful model that captures the regular core of CEL. Intuitively, formulas like ϕ1, ϕ2 and ϕ3 in Section 2 can be evaluated using a bounded amount of memory. In contrast,

> Figure 2 A CEA that can generate an unbounded amount
of complex events. Here P(x) := type(x) = H and P′(x) := type(x) = T ∧ x.temp > 40.

formula ϕ4 needs unbounded memory to store candidate events seen in the past, and thus calls for a more sophisticated model (e.g. data automata [45]). Of course, one would like a full-fledged model for all of CEL, but to this end we must first understand the regular fragment. A computational model for the whole CEP logic is left as future work.

Compiling unary CEL into CEA. We now show how to compile a well-formed and unary CEL formula ϕ into a CEA Aϕ. Formally, we say a CEL formula ϕ is equivalent to a CEA A if ⟦ϕ⟧(S) = ⟦A⟧(S) for every stream S. A CEL formula ϕ is unary if for every subformula of ϕ of the form ϕ′ FILTER P(x̄), it holds that P(x̄) is a unary predicate (i.e. P(x̄) ∈ U). For example, formulas ϕ1, ϕ2, and ϕ3 in Section 2 are unary, but formula ϕ4 is not (the predicate y.id = x.id is binary). As motivated in Sections 2 and 5, despite their apparent simplicity, unary formulas already present non-trivial computational challenges (see Section 7).

Theorem 4. For every well-formed formula ϕ in unary core-CEL, there is a CEA Aϕ equivalent to ϕ. Furthermore, Aϕ is of size at most linear in |ϕ| if ϕ is safe and in LP-normal form, and at most double exponential in |ϕ| otherwise.

The proof of Theorem 4 is closely related to the safety condition and the LP-normal form presented in Section 5. The construction first converts ϕ into an equivalent CEL formula ϕ′ in LP-normal form (Theorem 3) and then builds an equivalent CEA from ϕ′. Unfortunately, there is an exponential blow-up for converting ϕ into LP-normal form.
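The run-based semantics of CEA can be prototyped directly by exhaustive search. The Python sketch below is only an illustration under stated assumptions: the automaton encoding as a list of transition tuples, the helper names, and the example automaton itself are my own; the example is merely in the spirit of Figure 2, since the figure's exact transition structure is not reproduced here.

```python
def complex_events(delta, initial, final, stream):
    """Enumerate events(rho) for every accepting run rho over a finite
    stream prefix. delta: list of (source, predicate, mark, target),
    where mark=True stands for a marking (●) transition and False for ○."""
    out = set()

    def explore(state, pos, marked, last_marked):
        # A run is accepting if it ends in a final state and its
        # last transition was marking (●).
        if state in final and last_marked:
            out.add(frozenset(marked))
        if pos == len(stream):
            return
        for src, pred, mark, dst in delta:
            if src == state and pred(stream[pos]):
                explore(dst, pos + 1, marked | {pos} if mark else marked, mark)

    for q0 in initial:
        explore(q0, 0, frozenset(), False)
    return out

# Hypothetical CEA in the spirit of Figure 2: complex events that start and
# end with an H-tuple and may include hot T-tuples (temp > 40) in between.
is_h = lambda t: t["type"] == "H"
hot_t = lambda t: t["type"] == "T" and t["temp"] > 40
true = lambda t: True
delta = [(1, true, False, 1), (1, is_h, True, 2),
         (2, true, False, 2), (2, hot_t, True, 2), (2, is_h, True, 3)]
stream = [{"type": "H"}, {"type": "T", "temp": 45},
          {"type": "T", "temp": 30}, {"type": "H"}]
print(complex_events(delta, {1}, {3}, stream))
# → {frozenset({0, 3}), frozenset({0, 1, 3})}
```

Note how the TRUE/○ self-loops let runs skip arbitrary tuples, so the cold T-tuple at position 2 never appears in an output.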
However, we show that the output is of linear size if ϕ′ is safe, and of exponential size otherwise, suggesting that restricting the language to safe formulas allows for more efficient evaluation. We have described the compilation process without considering selection strategies. To include them, we extend our notation and allow selection strategies to be applied over CEA. Given a CEA A, a selection strategy SEL ∈ {STRICT, NXT, LAST, MAX} and a stream S, the set of outputs ⟦SEL(A)⟧(S) is defined analogously to ⟦SEL(ϕ)⟧(S) for a formula ϕ. Then, we say that a CEA A1 is equivalent to SEL(A2) if ⟦A1⟧(S) = ⟦SEL(A2)⟧(S) for every stream S.

Theorem 5. Let SEL be a selection strategy. For any CEA A, there is a CEA ASEL equivalent to SEL(A). Furthermore, the size of ASEL is, with respect to the size of A, at most linear if SEL = STRICT, and at most exponential otherwise.

At first this result might seem unintuitive, especially in the cases of NXT, LAST and MAX. It is not immediate (and rather involved) to show that a CEA exists for these strategies, because they need to track an unbounded number of complex events using finite memory. Still, this can be done with an exponential blow-up in the number of states. Theorem 5 concludes our study of the compilation of unary CEL into CEA: not only can CEA evaluate CEL formulas, they can also be exploited to evaluate selection strategies. We conclude this section by introducing the notion of I/O-determinism, which will be crucial for our evaluation algorithms in the next section.
I/O-deterministic CEA. To evaluate CEA in practice we focus on the class of so-called I/O-deterministic CEA (for input/output deterministic). A CEA A = (Q, ∆, I, F) is I/O-deterministic if |I| = 1 and for any two transitions (p, P1, m1, q1) and (p, P2, m2, q2), either P1 and P2 are mutually exclusive (i.e. P1 ∩ P2 = ∅), or m1 ≠ m2. Intuitively, this notion imposes that given a stream S and a complex event C, there is at most one run over S that generates C (hence the name, referencing the input and the output). In contrast, the classic notion of determinism would allow at most one run over the entire stream. I/O-deterministic CEA are important because they admit a simple and efficient evaluation algorithm (discussed in Section 7). But for this algorithm to be useful, we need to make sure that every CEA can be I/O-determinized. Formally, we say that two CEA A1 and A2 are equivalent (denoted A1 ≡ A2) if for every stream S we have ⟦A1⟧(S) = ⟦A2⟧(S).

Proposition 6. For every CEA A there is an I/O-deterministic CEA A′ such that A ≡ A′, and A′ is of size at most exponential in |A|.

That is, CEA are closed under I/O-determinization. This result and the compilation process allow us to evaluate any CEL formula by means of I/O-deterministic CEA without loss of generality.

# 7 Algorithms for evaluating CEA

In this section we show how to evaluate CEA efficiently. We start by formalizing the notion of efficient evaluation in CEP, which has not been formalized before in the CEP literature.

Efficiency in CEP. Defining a notion of efficiency for CEP is challenging since we
would like to compute complex events in one pass and using a restricted amount of resources. Streaming algorithms [34, 28] are a natural starting point, as they usually restrict the time allowed to process each tuple and the space needed to process the first n items of a stream (e.g., constant or logarithmic in n). However, an important difference is that in CEP the arrival of a single event might generate an exponential number of complex events as output. To overcome this problem, we propose to divide the evaluation into two parts: (1) consuming new events and updating the internal memory of the system, and (2) generating complex events from the internal memory of the system. We require both parts to be as efficient as possible. First, (1) should process each event in time that does not depend on the number of events seen in the past. Second, (2) should not spend any time processing, and instead should be completely devoted to generating the output. To formalize this notion, we assume that there is a special instruction yield_S that returns the next element of a stream S. Then, given a function f : N → N, a CEP evaluation algorithm with f-update time is an algorithm that evaluates a CEA A over a stream S such that:

1. between any two calls to yield_S, the time spent is bounded by O(f(|A|) ⋅ |t|), where t is the tuple returned by the first of those calls, and
2. it maintains a data structure D in memory such that, after calling yield_S n times, the set ⟦A⟧n(S) can be enumerated from D with constant delay.

The notion of constant-delay enumeration was defined in the database community [46, 14
] precisely for defining efficiency whenever generating the output might take considerable time. Formally, it requires the existence of a routine Enumerate that receives D as input and outputs all complex events in ⟦A⟧n(S) without repetitions, while spending a constant amount of time before and after each output. Naturally, the time to generate a complex event C must be linear in |C|. We remark that (1) is a natural restriction imposed in the streaming literature [34], while (2) is the minimum requirement if an arbitrarily large set of arbitrarily large outputs must be produced.

> Figure 3 Evaluation framework for CEL: a CEL formula passes through the Parser (Th. 2), Query Rewrite into a well-formed, safe, LP-normal-form formula (Th. 3), and Compilation into a CE automaton (Th. 4, 5); Evaluation (Th. 6, 7) then runs the automaton over the stream and produces complex events.

Note that the update time O(f(|A|) ⋅ |t|) is linear in |t| if we consider A to be fixed. Since this is the case in practice (i.e. the automaton is generally small with respect to the stream, and does not change during evaluation), this amounts to constant update time when measured under data complexity (tuples can also be considered of constant size).

Efficient evaluation of CEA. Having a good notion of efficiency, we proceed to show how to evaluate CEA efficiently. As previously discussed in Section 6, I/O-deterministic CEA are specially designed to admit CEP evaluation algorithms with linear update time. Furthermore, given that any CEA can be I/O-determinized (Proposition 6), this yields a CEP evaluation algorithm for any CEA. Unfortunately, the determinization procedure has an exponential blow-up in the size of the automaton, increasing the update time when the automaton is not I/O-deterministic.
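The two-phase update/enumerate interface can be illustrated with a deliberately naive Python sketch. All names here are my own, and since this version stores every partial run and complex event explicitly, it achieves neither the |A|-update time nor the constant-delay enumeration of the paper's algorithm, which shares partial outputs in a compact data structure; it only shows the shape of the interface.

```python
class NaiveEvaluator:
    """Interface sketch: update() consumes one tuple, outputs() enumerates
    the complex events ending at the current position. Partial runs are
    stored explicitly, so the paper's efficiency bounds are NOT met."""

    def __init__(self, delta, initial, final):
        self.delta, self.final = delta, final
        # Configurations: (state, complex event so far, last transition marked?)
        self.configs = [(q, frozenset(), False) for q in initial]
        self.pos = 0

    def update(self, t):
        succ = []
        for state, ev, _ in self.configs:
            for src, pred, mark, dst in self.delta:
                if src == state and pred(t):
                    succ.append((dst, ev | {self.pos} if mark else ev, mark))
        self.configs, self.pos = succ, self.pos + 1

    def outputs(self):
        # Accepting configurations: final state reached via a marking (●) step.
        return {ev for state, ev, marked in self.configs
                if state in self.final and marked}

# Hypothetical CEA recognizing a single H-tuple, skipping anything before it.
is_h = lambda t: t["type"] == "H"
true = lambda t: True
ev = NaiveEvaluator([(0, true, False, 0), (0, is_h, True, 1)], {0}, {1})
for t in [{"type": "T"}, {"type": "H"}]:
    ev.update(t)
print(ev.outputs())  # → {frozenset({1})}
```

For an I/O-deterministic automaton no two configurations carry the same (state, complex event) pair, which is what makes a duplicate-free, shared representation of the configurations possible in the real algorithm.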
Theorem 7. For every I/O-deterministic CEA A, there is a CEP evaluation algorithm with |A|-update time. Furthermore, if A is an arbitrary CEA, there is a CEP evaluation algorithm with 2^|A|-update time.

We can further extend the CEP evaluation algorithm for I/O-deterministic CEA to any selection strategy by using the results of Theorem 5. However, by naively applying Theorem 5 and then I/O-determinizing the resulting automaton, we get a double exponential blow-up. By doing the compilation of the selection strategy and the I/O-determinization together, we can lower the update time. Moreover, and rather surprisingly, we can evaluate NXT and LAST without determinizing the automaton, and therefore with linear update time.

Theorem 8. Let SEL be a selection strategy. For any CEA A, there is a CEP evaluation algorithm for SEL(A). Furthermore, the update time is |A| if SEL ∈ {NXT, LAST}, 2^|A| if SEL = STRICT, and 4^|A| if SEL = MAX.

# 8 An evaluation framework for CEL

Having all the building blocks, we now put the results in perspective and show how to evaluate unary CEL formulas. Figure 3 shows the evaluation cycle of a CEL formula in our framework and how all the results and theorems fit together. To explain this framework, consider a unary CEL formula ϕ (possibly with selection strategies). The process starts in the parser module, where we check that ϕ is well-formed and safe. These conditions are important to ensure that ϕ is satisfiable and makes correct use of variables. Note that a CEP system could translate unsafe formulas (Theorem 2), incurring however an exponential blow-up. The next module rewrites a well-formed and safe formula ϕ into LP-normal form by using the rewriting process of Theorem 3. In the worst case this produces
an exponentially larger formula. To avoid this, in many cases one can apply local rewriting rules [3, 41]. For example, in Section 2 we converted ϕ1 into ϕ′1 by applying a filter push, avoiding the exponential blow-up of Theorem 3. Unfortunately, we cannot apply this to formulas like ϕ6 in Section 5. Nevertheless, formulas like ϕ6 are rather uncommon in practice, and local rewriting rules will usually produce LP-formulas of polynomial size. The third module receives a formula in LP-normal form and builds a CEA Aϕ of polynomial size (Theorems 4 and 5). Then, the last module runs Aϕ over the stream by using our CEP evaluation procedure for I/O-deterministic CEA (Theorem 7). If there is no selection strategy, Aϕ must be determinized before running the CEP evaluation algorithm. In the worst case this determinization is exponential in Aϕ; nevertheless, in practice the size of Aϕ is rather small. If a selection strategy SEL is used, we can use the algorithms of Theorem 8 for evaluating SEL(Aϕ), with an update time similar to that of evaluating Aϕ alone. It is worth mentioning that evaluating NXT(Aϕ) or LAST(Aϕ) has even better performance than evaluating Aϕ directly, given that the update time is linear in the size of Aϕ.

# 9 Future work

This paper settles new foundations for CEP systems, stimulating new research directions. In particular, a natural next step is to study the evaluation of non-unary CEL formulas. This requires new insights in rewriting formulas and a more powerful computational model with CEP evaluation algorithms. Another relevant problem is to understand the expressive power of different fragments of CEL and the relationship between the different operators. In
this same direction, we envision as future work a generalization of the concept behind selection strategies, together with a thorough study of their expressive power. Finally, we have focused on the fundamental features of CEP languages, leaving other features aside to keep the language and analysis simple. These features include correlation, time windows, aggregation, and consumption policies, among others. We plan to extend CEL gradually with these features to establish a more complete and formal framework for CEP.

References

> 1 Esper Enterprise Edition website. Accessed on 2018-01-05. URL: .
> 2 D. Abadi, D. Carney, U. Çetintemel, M. Cherniack, C. Convey, C. Erwin, E. Galvez, M. Hatoun, A. Maskey, A. Rasin, A. Singer, M. Stonebraker, N. Tatbul, Y. Xing, R. Yan, and S. Zdonik. Aurora: A Data Stream Management System. In SIGMOD, 2003.
> 3 Serge Abiteboul, Richard Hull, and Victor Vianu. Foundations of Databases: The Logical Level. Addison-Wesley, 1995.
> 4 Asaf Adi and Opher Etzion. Amit - the situation manager. VLDB Journal, 2004.
> 5 Jagrati Agrawal, Yanlei Diao, Daniel Gyllstrom, and Neil Immerman. Efficient pattern matching over event streams. In SIGMOD, 2008.
> 6 Alfred V. Aho. Algorithms for Finding Patterns in Strings. In Handbook of Theoretical Computer Science. Elsevier, 1990.
> 7 Mert Akdere, Uğur Çetintemel, and Nesime Tatbul. Plan-based complex event detection across distributed sources. VLDB, 2008.
> 8 Darko Anicic, Paul Fodor, Sebastian Rudolph, Roland Stühmer, Nenad Stojanovic, and Rudi Studer. A rule-based language for complex event processing and reasoning. In RR, 2010.
> 9 Arvind Arasu, Brian Babcock, Shivnath Babu, Mayur Datar, Keith Ito, Itaru Nishizawa, Justin Rosenstein, and Jennifer Widom. STREAM: The Stanford Stream Data Manager (Demonstration Description). In SIGMOD, 2003.
> 10 Arvind Arasu, Shivnath Babu, and Jennifer Widom. The CQL Continuous Query Language: Semantic Foundations and Query Execution. The VLDB Journal,
2006.
> 11 Alexander Artikis, Alessandro Margara, Martin Ugarte, Stijn Vansummeren, and Matthias Weidlich. Complex Event Recognition Languages: Tutorial. In DEBS, pages 7–10. ACM, 2017.
> 12 Alexander Artikis, Marek Sergot, and Georgios Paliouras. An event calculus for event recognition. IEEE Transactions on Knowledge and Data Engineering, 27(4):895–908, 2015.
> 13 Alexander Artikis, Anastasios Skarlatidis, François Portet, and Georgios Paliouras. Logic-based event recognition. The Knowledge Engineering Review, 27(4):469–506, 2012.
> 14 Guillaume Bagan, Arnaud Durand, and Etienne Grandjean. On Acyclic Conjunctive Queries and Constant Delay Enumeration. In CSL, 2007.
> 15 Roger S. Barga, Jonathan Goldstein, Mohamed H. Ali, and Mingsheng Hong. Consistent Streaming Through Time: A Vision for Event Stream Processing. In CIDR, 2007.
> 16 Jean Berstel. Transductions and Context-Free Languages. Springer-Verlag, 2013.
> 17 Alejandro Buchmann and Boris Koldehofe. Complex event processing. it - Information Technology, 2009.
> 18 Jan Carlson and Björn Lisper. A resource-efficient event algebra. Science of Computer Programming, 2010.
> 19 Jianjun Chen, David J. DeWitt, Feng Tian, and Yuan Wang. NiagaraCQ: A Scalable Continuous Query System for Internet Databases. In SIGMOD, 2000.
> 20 Federico Chesani, Paola Mello, Marco Montali, and Paolo Torroni. A logic-based, reactive calculus of events. Fundamenta Informaticae, 105(1-2):135–161, 2010.
> 21 Gianpaolo Cugola and Alessandro Margara. Raced: an adaptive middleware for complex event detection. In Middleware, 2009.
> 22 Gianpaolo Cugola and Alessandro Margara. TESLA: a formally defined event specification language. In DEBS, 2010.
> 23 Gianpaolo Cugola and Alessandro Margara. Complex Event Processing with T-REX. The Journal of Systems and Software, 2012.
> 24 Gianpaolo Cugola and Alessandro Margara. Processing flows of information: From data stream to complex event processing. ACM Computing Surveys (CSUR), 2012.
> 25 Alan Demers, Johannes Gehrke, Mingsheng Hong, Mirek Riedewald,
and Walker White. A general algebra and implementation for monitoring event streams. Technical report, Cornell University, 2005.
> 26 Alan Demers, Johannes Gehrke, Mingsheng Hong, Mirek Riedewald, and Walker White. Towards expressive publish/subscribe systems. In EDBT, 2006.
> 27 Antony Galton and Juan Carlos Augusto. Two approaches to event definition. In DEXA, 2002.
> 28 Lukasz Golab and M. Tamer Özsu. Issues in data stream management. SIGMOD Record, 2003.
> 29 Mikell P. Groover. Automation, Production Systems, and Computer-Integrated Manufacturing. Prentice Hall, 2007.
> 30 Daniel Gyllstrom, Jagrati Agrawal, Yanlei Diao, and Neil Immerman. On supporting Kleene closure over event streams. In ICDE 2008, pages 1391–1393. IEEE, 2008.
> 31 Yeye He, Siddharth Barman, and Jeffrey F. Naughton. On Load Shedding in Complex Event Processing. In ICDT, pages 213–224, 2014.
> 32 Yeye He, Siddharth Barman, Di Wang, and Jeffrey F. Naughton. On the complexity of privacy-preserving complex event processing. In PODS, pages 165–174, 2011.
> 33 John E. Hopcroft and Jeffrey D. Ullman. Introduction to Automata Theory, Languages and Computation. Addison-Wesley, 1979.
> 34 Elena Ikonomovska and Mariano Zelke. Algorithmic Techniques for Processing Data Streams. Dagstuhl Follow-Ups, 2013.
> 35 Mo Liu, Elke Rundensteiner, Kara Greenfield, Chetan Gupta, Song Wang, Ismail Ari, and Abhay Mehta. E-Cube: multi-dimensional event sequence analysis using hierarchical pattern query sharing. In SIGMOD, pages 889–900, 2011.
> 36 D. Luckham. Rapide: A language and toolset for simulation of distributed systems by partial orderings of events, 1996.
> 37 Masoud Mansouri-Samani and Morris Sloman. GEM: A generalized event monitoring language for distributed systems. Distributed Systems Engineering, 1997.
> 38 Yuan Mei and Samuel Madden. ZStream: a cost-based query processor for adaptively detecting composite events. In SIGMOD, pages 193–206. ACM, 2009.
> 39 Biswanath Mukherjee,
L. Todd Heberlein, and Karl N. Levitt. Network intrusion detection. IEEE Network, 1994.
> 40 Peter Pietzuch, Brian Shand, and Jean Bacon. A framework for event composition in distributed systems. In Middleware, 2003.
> 41 Raghu Ramakrishnan and Johannes Gehrke. Database Management Systems (3rd ed.). McGraw-Hill, 2003.
> 42 B. S. Sahay and Jayanthi Ranjan. Real time business intelligence in supply chain analytics. Information Management & Computer Security, 2008.
> 43 Jacques Sakarovitch. Elements of Automata Theory. Cambridge University Press, 2009.
> 44 Nicholas Poul Schultz-Møller, Matteo Migliavacca, and Peter Pietzuch. Distributed complex event processing with query rewriting. In DEBS, 2009.
> 45 Luc Segoufin. Automata and logics for words and trees over an infinite alphabet. In CSL, 2006.
> 46 Luc Segoufin. Enumerating with constant delay the answers to a query. In ICDT 2013, pages 10–20, 2013.
> 47 Margus Veanes. Applications of symbolic finite automata. In CIAA, 2013.
> 48 Walker White, Mirek Riedewald, Johannes Gehrke, and Alan Demers. What is next in event processing? In PODS, pages 263–272, 2007.
> 49 Eugene Wu, Yanlei Diao, and Shariq Rizvi. High-performance complex event processing over streams. In SIGMOD, 2006.
> 50 Haopeng Zhang, Yanlei Diao, and Neil Immerman. On complexity and optimization of expensive queries in complex event processing. In SIGMOD, 2014.
> 51 Detlef Zimmer and Rainer Unland. On the semantics of complex events in active database management systems. In ICDE, 1999.
Title: Improving automated GUI testing by learning to avoid infeasible tests

This is a repository copy of *Improving automated GUI testing by learning to avoid infeasible tests*. Version: Accepted Version.

Proceedings Paper: Walkinshaw, N. (2020) Improving automated GUI testing by learning to avoid infeasible tests. In: Proceedings of the 2020 IEEE International Conference On Artificial Intelligence Testing (AITest), 03-06 Aug 2020, Oxford, UK. IEEE, pp. 107-114. ISBN 9781728169859

© 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other users, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works. Reproduced in accordance with the publisher's self-archiving policy.
# Improving Automated GUI Testing by Learning to Avoid Infeasible Tests

Neil Walkinshaw, Department of Computer Science, The University of Sheffield, UK. Email: n.walkinshaw@sheffield.ac.uk

Abstract—Most modern end-user software applications
are controlled through a graphical user interface (GUI). When it comes to automated test selection, however, GUIs present two major challenges: (1) it is difficult to automatically identify feasible, non-trivial sequences of GUI interactions (test cases), and (2) each attempt at a test case execution can take a long time, eliminating the possibility of rapidly attempting large numbers of alternatives. In this paper we present an iterative approach that infers state-machine models from previous test executions, and increases the utility of tests by learning which sequences to avoid. The approach is evaluated on a selection of Java applications, and the results indicate that our approach is successful at achieving higher code coverage and longer sequences than the state of the art, albeit with a time-overhead caused by the repeated invocation of a Machine Learner.

I. INTRODUCTION

GUIs raise several challenges when it comes to automated software testing. They can comprise a large variety of windows with different combinations of widgets (e.g. buttons, check-boxes, text-entry fields, etc.), where the appearance or contents of certain windows and widgets can depend upon previous inputs. Accordingly, test cases that seek to fully explore the behaviour of the underlying system can be required to include complex sequences of selections and inputs. There exists a very large number of GUI-testing tools, spanning mobile apps, web-apps, and desktop GUIs. The goal is the same as with any testing technique: to identify a manageably small set of test cases that is sufficiently rigorous and diverse to expose any faults. However, testing GUIs is especially challenging because (1) GUI-based applications can take a long time to initialise and execute, (2) the GUI interface is invariably dynamic (the input 'surface' can change from one interaction to the next), and as a result (3) 'test
cases’ amount to potentially complex sequences of widget clicks, drags, gestures, etc. In this paper we investigate a solution for scenarios where there is no capability of analysing and querying the run-time GUI state. We may, for example, be interested in testing an application across a multitude of platforms. We consider the scenario where we are able to supply the SUT with a range of inputs (in a programmable way via some testing interface). We also presume that we start from a possibly large set of potential test sequences, which may arise from some GUItar-style analysis of the SUT , be randomly generated, they may be a product of fuzzing . In any case, a large proportion of these test cases are liable to be trivially invalid, and lead to (expensive) application re-starts after only few interactions. We do not assume that we are able to query or scrutinise the state of the system under test during a test execution (e.g. to determine which inputs are feasible at any given point). Our solution is superficially similar to existing solutions , . We use a state machine inference to infer models of what has been tested so far and use this model to inform the selection of new test cases. However, in our scenario it is especially important that the inferred model is able to discriminate between sequences of events that have been explored so far, and sequences of events that have been explored but should be avoided in future executions because they will lead to some undesirable outcome (e.g. a time-out). To address this, our paper makes the following contributions: > • We show how GUI test-executions can be fed to the EDSM state machine inference algorithm 5, in a way that takes advantage of
its capacity to distinguish between positive and negative examples to produce models that are capable of distinguishing between interaction sequences that have been attempted, and sequences that are likely to be invalid or lead to time-outs.
> • We present an algorithm that uses the resulting model to filter out and prioritise GUI test cases.
> • We have developed an openly available implementation of the approach.
> • We present an empirical evaluation on five GUI-based Java applications, which demonstrates that the use of our approach leads to longer interactions and greater code coverage than a quasi-random use of GUItar.

II. BACKGROUND AND RELATED WORK

We start with a brief introduction to the landscape of automated GUI testing. Since state machine inference has played a reasonably prominent role in GUI testing (and forms the basis for our approach too), we provide a brief introduction to state machines and state machine inference. We then discuss some of the specific ways in which state machine inference has been used for GUI testing, and discuss some of the weaknesses (or missed opportunities) of these approaches.

A. Automated GUI Testing

There has been a gradual evolution of GUI-testing tools. Early GUI-testing tools, most notably GUItar and its mobile version MobiGuitar, worked in two phases. In the first phase, an analysis of the source code, perhaps enhanced by a dynamic analysis, would construct a model capturing the possible range of interactions with the SUT. This is then followed by a test selection process, where the goal is to meet various objectives: to achieve maximum coverage of the model (and code) with the fewest possible test cases (because application restarts for new tests are especially time-consuming). In GUItar and MobiGuitar the model of the target GUI was encoded
as an Event Flow Graph. This is a graph that contains labels corresponding to GUI events, where transitions indicate the order in which these events are deemed to be possible.

Definition 1: An EFG is a directed graph (V, E, I), where each element v ∈ V corresponds to an event in the GUI. E is a set of edges (vi, vj), indicating that event vi can be succeeded by vj. I ⊆ V is a set of initial vertices, indicating that these can act as starting points for a GUI interaction.

One major limitation of this approach is the fact that the model constructed in the first phase is not entirely accurate. The use of static analysis invariably means that the model will indicate that certain sequences of events should be possible when they are not. As a result, such approaches can end up attempting large numbers of test cases that are futile, leading to many re-starts of the application without achieving significant coverage.

In recent years, GUI-testing tools have worked around the limitations posed by such a-priori models by exploiting the emergence of increasingly sophisticated technical facilities within APIs to query and log GUI interactions. As a result, a large range of Android-based testing tools have emerged, which take advantage of such capabilities, and are able to successfully generate long, exploratory test sequences. Similarly for Windows applications, Testar has emerged as a leading tool, able to query the state of a GUI during execution via the Windows accessibility layer.

This progress does however come at a cost. These techniques and tools tend to be tied to the underlying platform for which they have been developed, and are vulnerable to any sudden changes to interfaces within
the target API or OS platform. They can only be re-engineered for an alternative platform if it offers a comparable interface with runtime access to the underlying GUI state.

B. State Machines

Definition 2: A Deterministic Finite Automaton (DFA) is a quintuple (Q, Σ, ∆, F, q0), where Q is a finite set of states, Σ is a finite alphabet, ∆ : Q × Σ → Q is a partial function, F ⊆ Q is a set of final (accepting) states, and q0 ∈ Q is the initial state.

A DFA can be visualised as a directed graph, where states are the nodes, and transitions are the edges between them, labelled by their respective alphabet elements.

Algorithm 1: Basic State Merging Algorithm
Input: Two samples S+ and S− containing positive and negative examples respectively
Result: A DFA consistent with S+ and S−
> 1 Infer(S+, S−) begin
> 2   PTA ← initialize(S+, S−); // Let N denote the number of states in the PTA
> 3   π ← {{0}, {1}, . . . , {N − 1}};
> 4   while (Bi, Bj) ← ChoosePair(π) do
> 5     πnew ← Merge(π, Bi, Bj);
> 6     if Compatible(PTA/πnew, S−) then
> 7       π ← πnew;
> 8   return PTA/π
// Merge a pair of states and ensure that the result is deterministic
> 9 Merge(π, Bi, Bj) begin
> 10   π ← π \ {Bi, Bj} ∪ {Bi ∪ Bj};
> 11   while (Bk, Bl) ← FindNonDeterminism(π, Bi, Bj) do
> 12     π ← Merge(π, Bk, Bl);

When discussing the behaviour of a DFA, we are referring
to the possible (and impossible) sequences of elements in Σ (denoted Σ∗). The set of all possible sequences in a DFA is referred to as its language. To define this formally, we draw on the inductive definition of an extended transition function δ̂ used by Hopcroft et al. For a state q and a string w, the extended transition function δ̂ returns the state p that is reached when starting in state q and processing sequence w. For the base case, δ̂(q, ε) = q. For the inductive case, let w be of the form xa, where a is the last element and x is the prefix. Then δ̂(q, w) = δ(δ̂(q, x), a).

Definition 3: The language L(A) of a DFA A is the set of strings reaching an accepting state in A from its initial state: L(A) = {w ∈ Σ∗ | δ̂(q0, w) ∈ F}.

The complement of a language L is denoted L^C (i.e. the set Σ∗ \ L of strings that do not belong to L). Sequences w ∈ Σ∗ for which δ̂(q0, w) is not defined are considered to be rejected by the automaton.

C. State Machine Inference

Although the challenge of inferring an exact state machine has been shown to be NP-hard, several algorithms have emerged that are capable of inferring reasonably accurate approximations. It has been shown that, given a sufficiently diverse set of positive and negative examples, it is possible to infer a state machine that is 'Probably Approximately Correct' in polynomial time. Amongst a variety of inference algorithms, the State Merging algorithm is particularly prevalent in Software Engineering
, , and is detailed in Algorithm 1. In essence, the approach starts from two sets of sequences: S+ - a set of sequences that are accepted by the subject, and S− - a set of sequences that are not. From these it constructs a tree-shaped automaton (a ‘prefix-tree automaton’) that exactly represents a given set of sequences (line 2). It then adopts some form of heuristic to select which pairs states to merge with each other (lines 4-5, 9-12), thereby producing a state machine that generalises on the initial set of sequences. If the resulting machine correctly rejects all sequences in S− (it accepts all strings in S+ by construction), then the merge is accepted and the process iterates (lines 6-7) until no further merges can be identified and the final machine is returned (line 8). Software engineering applications, including the various inference-based testing approaches, have largely been based on situations where there are no ‘negative’ sequences for S−,but only instances of observed execution traces belonging to S+. In such situations, to prevent the merging process from over-generalising to produce a single-state machine that trivially accepts all sequences, it is necessary to constrain the ChoosePair function. To this end, techniques such as k-tails , tend to only select merge-candidates if their outgoing paths fulfill some sort of ‘similarity’ criterion (e.g. outgoing paths must match each other up to some specified length k). D. State Machine Inference and GUI Testing State machine inference has been successfully applied to test sequential software systems that are not GUI-driven (notably network protocols) in the past . There is a natural link between GUIs and state machines, which has been the subject of an extensive amount of research , and which makes GUIs apparently ideal candidates for testing approaches that incorporate state machine
inference. The idea was first explored by Mariani et al. with the AutoBlackTest approach, which used Q-learning to infer behaviours from the GUI under test as it is being tested, and to then use this as a basis for selecting new inputs. Subsequently, Choi et al. developed the SwiftHand Android testing tool (which is based upon a state merging algorithm). The subsequent FSMDroid was also based on a similar premise, whilst including stochastic weights in the state machine.

The evidence to corroborate the performance of these approaches is mixed. In the domain of Android testing, a 2015 study saw techniques such as SwiftHand comprehensively outperformed by the Android Monkey tool. Studies that examine the relative performance of AutoBlackTest and GUItar are subject to the caveat that their relative performance can vary significantly depending on configurations and subject systems. However, results by FSMDroid appear to be promising (outperforming successful tools such as Sapienz). Invariably, when comparing techniques it can be difficult to disentangle performance gains that are due to tool implementation details from gains that are due to the underlying technique.

Fig. 1. Testing set up. [Figure: the SUT's GUI and the GUI Testing Framework (Interaction Layer (1), Logger (2), Test generator (3), Candidate Tests (4), EFG (5)) feeding the Selection Framework (State Machine Inference and Test Selector (6)); diagram not reproducible in text.]

One characteristic that applies to all of these techniques is that they build a model from a single set of test executions (or dynamic traces obtained
before testing). All test executions are treated the same, regardless of whether they terminated successfully or ended in a time-out and had to be aborted. In the terms of the state merging algorithm presented in Algorithm 1, all of the traces belong to set S+ and S− is empty. This severely hampers model inference; without any negative examples, the inference algorithm is vulnerable to over-generalisation. In a pathological worst case this would result in a single-state DFA where all sequences lead to the same positive outcome. To avoid this scenario, techniques are reduced to either crudely guessing whether two states are equivalent (e.g. by means of the k-tails heuristic), or using run-time GUI querying APIs to inspect the current GUI state.

Aside from the problems of inference accuracy, there is also a consequence for the semantics of the inferred model. Without any negative information the inferred models tend to be 'prefix-closed', meaning that any sequence and any prefix thereof through the inferred model is valid. This leads to a simplified form of DFA (formally a Labelled Transition System (LTS)) where F = Q; every state is a potential final state, and there is no ability to discriminate between sequences that are valid, and those that should be avoided (e.g. because they lead to costly time-outs and restarts).

III. INFERENCE WITH NEGATIVE GUI TEST SEQUENCES

This paper is based on the observation that the context of GUI testing offers plenty of sources of 'negative' information. In an inference-supported testing context, these sources of information can be easily incorporated into well-established inference techniques. This raises the possibility of inferring more accurate models, and using these models to support the selection of better test cases.

A. Testing Scenario

We demonstrate this process with respect to the traditional "gui-ripping"
testing scenario. For the sake of practicality, we seek to limit our requirements where possible. We describe the key components (and distinguish between those that are absolutely necessary and those that are desirable) with respect to the GUI-testing setup illustrated in the grey-shaded elements in Figure 1. The most important requirement is access to a GUI testing setup that is able to interact with the SUT (we refer to this as the "interaction layer" - 1). This is what enables us to supply test sequences (sequences of interactions) and for them to be applied as GUI interactions with the SUT. One particularly important requirement is that we are able to determine whether or not an attempted test execution has completed. In GUItar, for example, there is a logging facility (2) that records, for each test execution, which events were executed and at what point (if any) an attempted interaction failed or resulted in a time-out. We do not assume run-time access to the test execution, or an ability to query the GUI during test executions. We assume that there is some test generator (3) by which to generate a set of potential test sequences (4). Since we are operating in a "gui-ripping" setting, we assume that the ripped information (obtained by some mixture of static and dynamic analysis of the SUT) is available in the form of an EFG (5 - see also Definition 1). Although the EFG itself is not essential to our approach, it can be helpful during state machine inference. An EFG-supported extension to the state merging algorithm in Algorithm 1 is provided in Appendix A.

B. Adaptive Test Selection with Input from Negative Inputs

The goal is to identify a manageable sub-set of test cases that will reach the widest possible range
of GUI functionalities. This is challenging because test cases can require a long time to execute. Although GUI rippers may produce the EFG, these graphs can be of limited practical use because of their scale, and the fact that many paths through them are infeasible in practice. For example, the ripped EFG for the smallest of our case study systems (Rachota) contains 149 possible events (nodes) and 1344 edges connecting them. We use state machine inference to address this problem by developing a test selection framework (6 in Figure 1). As with previous learning-based GUI testing approaches, our approach uses the inferred model to identify test cases. However, there are two key differences with our approach: one obvious, the other subtle. The obvious difference is that our approach explicitly incorporates negative information, by learning models that distinguish between failures to execute a test properly, and test executions that terminated without problems. The more subtle difference is that we frame our approach as a 'test selection' approach; we use the inferred model to filter an existing set of proposed test sequences, where the construction of these sequences is delegated to some external test-generation algorithm (for example, an existing EFG-based test generator). The details of our test selection process are presented in Algorithm 2. The approach takes three inputs: a set of candidate test cases T (e.g. as generated by some GUI testing framework), a number of iterations i representing the number of test-inference loops to be run, and j, the number of tests to be generated per iteration (the choice of values for these

Algorithm 2: Test Selection Algorithm
Input: A set of candidate test cases T, iterations i, number of tests per iteration j.
Result: A set FinalTestSet ⊆ T,
where |FinalTestSet| = i ∗ j
> 1 Select(T, i, j) begin
> 2   FinalTestSet ← ∅;
> 3   S+ ← ∅;
> 4   S− ← ∅;
> 5   for 1 to i do
> 6     Potential ← T \ FinalTestSet;
> 7     if S+ ∪ S− ≠ ∅ then
> 8       DFA ← inferDFA(S+, S−);
> 9       Potential ← Potential ∩ L(DFA);
> 10     Tests ← randomSelection(Potential, j);
> 11     FinalTestSet ← FinalTestSet ∪ Tests;
> 12     for t ∈ Tests do
> 13       e ← execute(t);
> 14       if e = t then
> 15         S+ ← S+ ∪ {e};
> 16       else
> 17         S− ← S− ∪ {e};
> 18   return FinalTestSet;

parameters will be discussed after we briefly present the various steps of the algorithm). The algorithm proceeds as follows:

2-4 Before iterating, the set of test cases to be returned, FinalTestSet, is initialised, along with the sets of positive and negative test executions (S+ and S−).
5-6 For each iteration, we remove the set of tests executed so far, FinalTestSet, from the pool of generated tests T, and store this as the set Potential.
7-8 If this is not the first iteration (i.e. S+ ∪ S− ≠ ∅), a DFA is inferred from the sequences in S+ and S−, using the inference algorithm in Algorithm 1.
9 The pool of potential tests Potential is filtered by retaining only those tests that are accepted by the inferred DFA (i.e. belong to L(DFA)).
10-11
A random sub-set of size j is selected from Potential, is stored as a separate set Tests, and is added to FinalTestSet.
12-13 Each of the tests t ∈ Tests is executed. The execute function returns the sub-sequence of elements e in t that were successfully executed. In practice, tests are executed using whatever test execution mechanism is built into the GUI testing framework (e.g. the JFCReplayer in GUItar).
14-17 If t and e are identical, then the sequence e can be added to S+. Otherwise it is added to S−.
18 After the i iterations, the final test set FinalTestSet is returned.

TABLE I: SUBJECT SYSTEMS

| Name | Version | LOC | Windows | Events |
| --- | --- | --- | --- | --- |
| Rachota | 2.3 | 8,803 | 10 | 149 |
| Buddi | 3.4.0.8 | 9,588 | 11 | 185 |
| JabRef | 2.10b2 | 52,032 | 49 | 680 |
| JEdit | 5.1.0 | 55,006 | 20 | 457 |
| DrJava | 20130901-r5756 | 92,813 | 25 | 305 |

C. Implementation

The proof-of-concept tool was implemented as an approximately 1KLOC extension to the MINT EFSM inference tool. Our implementation is tailored to GUItar, but is designed to be adaptable to alternative GUI testing frameworks. The test-inference loop happens through command-line invocations and parsing of log files; there are no API dependencies. Although we use the EFG-driven compatibility function (see Appendix A), this could in principle be replaced with alternative state machine inference algorithms that do not require the EFG. To obtain the initial large set of potential test sequences from which we are selecting tests, we generate all shortest paths from the EFG using the Floyd-Warshall algorithm. In principle, any test-generation algorithm could be used at this point. It is, however, desirable that the base set of test cases exercises every event in the GUI, and this level of coverage is guaranteed by the Floyd-Warshall algorithm (accepting, of course, that
many of the proposed tests will invariably not be feasible).

IV. PRELIMINARY EVALUATION

The goal of our technique is to identify test sets that are 'efficient'. By skipping tests that are infeasible, it should be possible to spend a greater proportion of the testing effort on meaningful tests that ultimately exercise the behaviour of the underlying program to a greater extent. To investigate whether this is indeed true, we pose the following research questions:

RQ1: Does our approach enable longer sequences of GUI interactions?
RQ2: Does our approach cover the underlying source code to a greater extent?
RQ3: What is the time overhead incurred by our approach?

A. Subject Systems

To evaluate our approach, we chose five GUI-based Java applications, shown in Table I. We selected these applications on the basis that they were used by Gao et al. for their GUI testing paper. The exact versions (along with accompanying versions of GUItar) were made available by Gao et al. online. Rachota is a time-tracking tool that can produce time-management reports. Buddi is a financial budget management tool. JabRef is a bibliography reference manager. JEdit is an extensible text editor, and DrJava is an educational Java IDE.

[Figure 2: per-iteration sequence-length and coverage plots.]
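Returning to the Select algorithm of Section III, its loop can be sketched in Python. This is a minimal sketch under stated assumptions, not the MINT-based implementation: the DFA learner is stood in for by a caller-supplied infer_model/accepts pair (here a toy model that learns "event x is infeasible after prefix p"), and, unlike the paper — which records only the executed prefix e in S− — the sketch keeps the pair (t, e) so the toy model can locate the failing event.

```python
import random

def select(pool, i, j, execute, infer_model, accepts):
    """Sketch of Select(T, i, j): i iterations, j randomly chosen tests each.

    execute(t) returns the prefix of t that actually ran; a test whose
    executed prefix equals the whole test counts as positive.
    """
    final, s_plus, s_minus = [], [], []
    for _ in range(i):
        potential = [t for t in pool if t not in final]
        if s_plus or s_minus:                  # not the first iteration
            model = infer_model(s_plus, s_minus)
            filtered = [t for t in potential if accepts(model, t)]
            if filtered:                       # fall back if the model rejects everything
                potential = filtered
        tests = random.sample(potential, min(j, len(potential)))
        final.extend(tests)
        for t in tests:
            e = execute(t)
            if e == t:
                s_plus.append(t)
            else:
                s_minus.append((t, e))         # paper stores only e; the pair helps the toy model
    return final, s_plus, s_minus

def infer_model(s_plus, s_minus):
    """Toy stand-in for DFA inference: collect (prefix, failing event) pairs."""
    return {(e, t[len(e)]) for t, e in s_minus}   # assumes e is a prefix of t

def accepts(model, t):
    """Reject any test that replays a known-infeasible event after its prefix."""
    return all((t[:k], t[k]) not in model for k in range(len(t)))

def execute(t):
    """Hypothetical GUI: 'save' is only feasible once 'edit' has occurred."""
    ran, seen_edit = [], False
    for ev in t:
        if ev == "save" and not seen_edit:
            break                              # infeasible event: execution stops here
        seen_edit = seen_edit or ev == "edit"
        ran.append(ev)
    return tuple(ran)

events = ("edit", "save", "close")
pool = [(a,) for a in events] + [(a, b) for a in events for b in events]
final, s_plus, s_minus = select(pool, i=3, j=2, execute=execute,
                                infer_model=infer_model, accepts=accepts)
```

The fallback when the filter empties the pool is a small deviation from the pseudocode, which leaves that case unspecified.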
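The base pool construction described in Section III-C — all shortest paths through the EFG, so that every event is exercised — can be sketched generically with Floyd-Warshall plus a successor matrix for path reconstruction. The EFG here is a hypothetical toy adjacency map, not GUItar's actual representation:

```python
def all_shortest_event_paths(efg):
    """All-pairs shortest event paths through an EFG.

    efg maps each event to the list of events that may directly follow it.
    Returns one shortest path (a list of events) per reachable ordered pair.
    """
    nodes = sorted(efg)
    INF = float("inf")
    dist = {(u, v): 0 if u == v else INF for u in nodes for v in nodes}
    nxt = {}
    for u in nodes:
        for v in efg[u]:
            dist[u, v] = 1
            nxt[u, v] = v
    for k in nodes:                        # classic O(|V|^3) relaxation
        for a in nodes:
            for b in nodes:
                if dist[a, k] + dist[k, b] < dist[a, b]:
                    dist[a, b] = dist[a, k] + dist[k, b]
                    nxt[a, b] = nxt[a, k]
    paths = []
    for u in nodes:
        for v in nodes:
            if u != v and dist[u, v] < INF:
                path, cur = [u], u
                while cur != v:            # follow the successor matrix
                    cur = nxt[cur, v]
                    path.append(cur)
                paths.append(path)
    return paths

# Hypothetical toy EFG: which events may follow which.
efg = {"open": ["edit", "close"], "edit": ["save", "close"],
       "save": ["close"], "close": []}
base_tests = all_shortest_event_paths(efg)
```

Every vertex of the EFG appears in at least one path (as a source or a destination), which is the coverage property the text relies on.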
large pool of tests), we repeat each run 30 times with different random seeds. As our baseline, we randomly pick the same number of test cases from the pool of candidate test cases that collectively cover the vertices of the EFG (as described in Section III-C). This amounts to a generic coverage-driven GUI testing technique. To answer RQ1, we record for each individual test execution the number of separate GUI events successfully executed and the length of each test sequence. To answer RQ2, we record the code coverage, using the Cobertura extensions of the GUItar framework. To answer RQ3, we record the number of milliseconds taken for each iteration. To measure statistical significance, we use the Mann-Whitney U-Test to compare the lengths and coverage respectively at the final (300th) iteration. This test was chosen because a Shapiro-Wilk test indicated that the data is not normally distributed. We report a statistically significant difference if p < 0.05. The experiments were run in parallel on the ALICE HPC facility at the University of Leicester. GUI interactions were executed with the xvfb virtual frame buffer. To guard against any side-effects from previous tests affecting subsequent tests, a copy of the program was copied onto the test node for every experiment. The subject systems and test harness were run using Oracle JDK 8.

C. Results

The mean results after 300 iterations and the p-values for the statistical significance of the U-Tests are shown in Table II. The per-iteration means (and standard-deviation error bars) for sequence length and coverage are shown in Figure 2. The final times and total number of interactions executed are shown in Figure 3.

RQ1: Length of GUI interactions: Figure 2 indicates that, for each system, the inference-based tests achieve longer GUI
interactions than those that are selected at random. When the test runs from all the systems are taken together, the average sequence length achieved by the random selections at the final iteration is 2.41, versus 3.06 for inference-based testing. For all systems apart from JEdit, the difference in sequence length is statistically significant. In the case of Buddi the difference is especially pronounced, with the inference-based tests leading to a mean sequence length of 3.17 against a mean of 1.62 for the randomly-selected tests.

RQ2: Code coverage: Table II shows that, after 300 iterations, the mean coverage achieved with the help of inference is (statistically) significantly higher across all subject systems than that achieved from random test selection. The extent of this improvement, as in RQ1, differs significantly between systems. Figure 2 shows that whereas the difference is substantial for Buddi, DrJava, and JabRef, it is hard to distinguish visually for Rachota, and only becomes apparent in the last 50-70 iterations for JEdit.

RQ3: Time: Table II shows that the inference-based approach took significantly longer than the random approach for 300 iterations. The average times have to be interpreted with caution because they vary significantly for each system. This is illustrated in Figure 3, which plots the time taken against the number of events executed. On average, the time difference between inference-driven and random test execution is 233.08 minutes. Over 300 iterations (at five test executions each) this amounts to a 42 second difference per iteration. Although the execution time of longer valid test sequences will be a factor, it is likely that the majority of this time is spent inferring the state machine.

It should be noted that the 300 iteration cut-off is an arbitrary limit. Looking at the sequence-length and code-coverage time-series in Figure 2, significant improvements over random
testing are already evident between 100 and 150 iterations, in which case the time overhead would be significantly lower.

Summary: The findings are promising. The inclusion of inference supported by negative results leads to longer sequences, which probe aspects of software behaviour that are not reached by random executions. This in turn leads to higher code coverage. There is a time overhead involved, largely because of the need to run a machine learner at every iteration.

[Fig. 3. Times taken for 300 iterations, versus total number of inputs executed. Panels: Buddi, Dr Java, JabRef, JEdit, Rachota; x-axis: minutes; y-axis: inputs executed; series: inference-based vs. random.]

D. Threats to Validity

a) External threats to validity: The baseline used in our experiments is a test set generated by coverage-guided GUI Ripping. Although we have convincingly shown that the use of inference (with negative examples) produces better results, this does not demonstrate that the use of negative examples produces better results than conventional inference approaches that do not use negative examples, such as SwiftHand or FSMDroid. This will require a separate controlled experiment, and is part of our plans for future work. The experiments are based on five Java (Swing / AWT) programs, and cannot be taken to represent, for example, the performance obtained with respect to mobile devices. However, as far as desktop GUI applications are concerned, they are all diverse in terms of their domain and size, and have all been used in previous studies on GUI testing.

b) Internal threats to validity: We use statement coverage to gauge the extent to which the behaviour of the underlying source code has been executed. This is notoriously imprecise at estimating test adequacy; a test set can achieve a high level of statement coverage but still miss many aspects of program behaviour. Since we are more interested in using code coverage as a relative measure rather than an absolute one, we would nevertheless argue that it is reasonable to presume that a test set achieving higher statement coverage than another is exercising a greater range of behaviour.

V. CONCLUSIONS AND FUTURE WORK

In this paper we have introduced a technique whereby state machine learners can be incorporated into an automated testing cycle to increase the likelihood of selecting valid, longer test sequences. We have demonstrated a proof-of-concept implementation, and have successfully applied it to a selection of Java Swing / AWT programs. Our work has specifically considered the "GUI-ripping" setting, but can be adapted to other settings. Model inference has been used successfully in "active" Android testing settings, for example. These also offer sources of negative information that can easily be used to improve and refine models, and the test sets that are derived from them.

REFERENCES

A. M. Memon, "An event-flow model of GUI-based applications for testing," Software Testing, Verification and Reliability, vol. 17, no. 3, pp. 137–157, 2007.
K. Mao, M. Harman, and Y. Jia, "Sapienz: Multi-objective automated testing for Android applications," in Proceedings of the 25th International Symposium on Software Testing and Analysis. ACM, 2016, pp. 94–105.
W. Choi, G. Necula, and K. Sen, "Guided GUI testing of Android apps with minimal restart and approximate learning," in ACM SIGPLAN Notices, vol. 48, no. 10. ACM, 2013, pp. 623–640.
T. Su, G. Meng, Y. Chen, K. Wu, W. Yang, Y. Yao, G. Pu, Y. Liu, and Z. Su, "Guided, stochastic model-based GUI testing of Android apps," in Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering. ACM, 2017, pp. 245–256.
K. J. Lang, B. A. Pearlmutter, and R. A. Price, "Results of the Abbadingo One DFA learning competition and a new evidence-driven state merging algorithm," in Proceedings of the 4th International Colloquium on Grammatical Inference, V. Honavar and G. Slutzki, Eds., vol. 1433. Springer-Verlag, 1998, pp. 1–12.
A. Memon, I. Banerjee, and A. Nagarajan, "What test oracle should I use for effective GUI testing?" in 18th IEEE International Conference on Automated Software Engineering, 2003. Proceedings. IEEE, 2003, pp. 164–173.
D. Amalfitano, A. R. Fasolino, P. Tramontana, B. D. Ta, and A. M. Memon, "MobiGUITAR: Automated model-based testing of mobile apps," IEEE Software, vol. 32, no. 5, pp. 53–59, 2014.
Z. Gao, Y. Liang, M. B. Cohen, A. M. Memon, and Z. Wang, "Making system user interactive tests repeatable: When and what should we control?" in Software Engineering (ICSE), 2015 IEEE/ACM 37th IEEE International Conference on, vol. 1. IEEE, 2015, pp. 55–65.
S. Huang, M. B. Cohen, and A. M. Memon, "Repairing GUI test suites using a genetic algorithm," in Software Testing, Verification and Validation (ICST), 2010 Third International Conference on. IEEE, 2010, pp. 245–254.
S. R. Choudhary, A. Gorla, and A. Orso, "Automated test input generation for Android: Are we there yet? (E)," in 2015 30th IEEE/ACM International Conference on Automated Software Engineering (ASE). IEEE, 2015, pp. 429–440.
S. Hao, B. Liu, S. Nath, W. G. Halfond, and R. Govindan, "PUMA: Programmable UI-automation for large-scale dynamic analysis of mobile apps," in Proceedings of the 12th
annual international conference on Mobile systems, applications, and services. ACM, 2014, pp. 204–217.
L. Mariani, M. Pezzè, O. Riganelli, and M. Santoro, "AutoBlackTest: Automatic black-box testing of interactive applications," in 2012 IEEE Fifth International Conference on Software Testing, Verification and Validation. IEEE, 2012, pp. 81–90.
T. E. Vos, P. M. Kruse, N. Condori-Fernández, S. Bauersfeld, and J. Wegener, "TESTAR: Tool support for test automation at the user interface level," International Journal of Information System Modeling and Design (IJISMD), vol. 6, no. 3, pp. 46–83, 2015.
A. I. Esparcia-Alcázar, F. Almenar, T. E. Vos, and U. Rueda, "Using genetic programming to evolve action selection rules in traversal-based automated software testing: Results obtained with the TESTAR tool," Memetic Computing, vol. 10, no. 3, pp. 257–265, 2018.
J. Hopcroft, R. Motwani, and J. Ullman, Introduction to Automata Theory, Languages, and Computation, Third Edition. Addison-Wesley, 2007.
D. Angluin, "On the complexity of minimum inference of regular sets," Information and Control, vol. 39, pp. 337–350, 1978.
L. Valiant, "A theory of the learnable," Communications of the ACM, vol. 27, no. 11, pp. 1134–1142, 1984.
D. Angluin, "Learning regular sets from queries and counterexamples," Information and Computation, vol. 75, pp. 87–106, 1987.
J. Oncina and P. García, "Inferring regular languages in polynomial updated time," in Pattern Recognition and Image Analysis: Selected Papers from the IVth Spanish Symposium. World Scientific, 1992, pp. 49–61.
G. Ammons, R. Bodík, and J. R. Larus, "Mining specifications," in POPL 2002, Portland, Oregon, Jan. 16–18, 2002, pp. 4–16.
A. W. Biermann and J. A. Feldman, "On the synthesis of finite-state machines from samples of their behaviour," IEEE Transactions on Computers, vol. C-21, pp. 592–597, 1972.
A.
W. Biermann and R. Krishnaswamy, "Constructing programs from example computations," IEEE Transactions on Software Engineering, no. 3, pp. 141–153, 1976.
J. Cook and A. Wolf, "Discovering models of software processes from event-based data," ACM Transactions on Software Engineering and Methodology, vol. 7, no. 3, pp. 215–249, Jul. 1998.
C. Damas, B. Lambeau, P. Dupont, and A. van Lamsweerde, "Generating annotated behavior models from end-user scenarios," IEEE Transactions on Software Engineering, vol. 31, no. 12, 2005.
D. Lo, H. Cheng, J. Han, S.-C. Khoo, and C. Sun, "Classification of software behaviors for failure detection: A discriminative pattern mining approach," in Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2009, pp. 557–566.
D. Lo and S. Maoz, "Scenario-based and value-based specification mining: Better together," Automated Software Engineering, vol. 19, no. 4, pp. 423–458, 2012.
D. Lorenzoli, L. Mariani, and M. Pezzè, "Automatic generation of software behavioral models," in ACM/IEEE 30th International Conference on Software Engineering (ICSE'08). ACM, 2008, pp. 501–510.
S. Reiss and M. Renieris, "Encoding program executions," in ICSE. IEEE Computer Society, 2001, pp. 221–230.
N. Walkinshaw, K. Bogdanov, M. Holcombe, and S. Salahuddin, "Reverse engineering state machines by interactive grammar inference," in Reverse Engineering, 2007. WCRE 2007. 14th Working Conference on. IEEE, 2007, pp. 209–218.
N. Walkinshaw, B. Lambeau, C. Damas, K. Bogdanov, and P. Dupont, "STAMINA: A competition to encourage the development and assessment of software model inference techniques," Empirical Software Engineering, pp. 1–34, 2012.
N. Walkinshaw, K. Bogdanov, J. Derrick, and J. Paris, "Increasing functional coverage by inductive testing: A case study," in Testing Software and Systems (ICTSS'10), 2010, pp. 126–141.
R. K. Shehady and D. P. Siewiorek, "A method to automate
user interface testing using variable finite state machines," in Fault-Tolerant Computing, 1997. FTCS-27. Digest of Papers., Twenty-Seventh Annual International Symposium on. IEEE, 1997, pp. 80–88.
R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction. MIT Press, Cambridge, 1998, vol. 1, no. 1.
G. Bae, G. Rothermel, and D.-H. Bae, "Comparing model-based and dynamic event-extraction based GUI testing techniques: An empirical study," Journal of Systems and Software, vol. 97, pp. 15–46, 2014.
J. Magee and J. Kramer, Concurrency: State Models and Java Programs. Wiley, 1999.
N. Walkinshaw, R. Taylor, and J. Derrick, "Inferring extended finite state machine models from software executions," Empirical Software Engineering, vol. 21, no. 3, pp. 811–853, 2016.
R. W. Floyd, "Algorithm 97: Shortest path," Communications of the ACM, vol. 5, p. 345, 1962.

APPENDIX A
USING THE EFG FOR STATE MACHINE INFERENCE

It is in our interest that the inferred DFA is as precise as possible. It should generalise upon the set of traces in S+, but not over-generalise to the point that it accepts too many sequences that are infeasible. Existing state machine inference-based GUI-testing approaches such as SwiftHand take advantage of the run-time GUI state. When deciding whether a pair of states can be merged (i.e. as part of the ChoosePair function in Algorithm 1), they take advantage of the ability to compare which events are possible at any given point; if different events are possible, the states are not behaviourally equivalent and should not be merged. In our setting, we do not presume access to the run-time state. However, if we have access to the EFG, it is possible to increase the efficiency and accuracy of this process, in a manner similar to the use of the live test information by tools such
as SwiftHand.

Algorithm 3: EFG-supported compatibility function
Input: A DFA D and an EFG E.
Result: A boolean.
 1  ChoosePair(E, D) begin
 2    merge ← false
 3    while (Bi, Bj) ← ChoosePair(QD) ∧ ¬merge do
 4      Events ← in(D, Bi) ∪ in(D, Bj)
 5      DestD ← out(D, Bi) ∪ out(D, Bj)
 6      DestE ← ∅
 7      for e ∈ Events do
 8        DestE ← DestE ∪ out(E, e)
 9      if DestD ⊆ DestE then
10        merge ← true
11        return (Bi, Bj)

Algorithm 3 shows a version of the ChoosePair function that can be used as a wrapper for the original ChoosePair function in Algorithm 1. For every pair of states considered (line 3), it identifies the set of GUI events Events (vertices in the EFG) that would need to be considered, by taking the set of incoming events of each candidate state in the DFA (denoted by the function in, line 4). It then predicts the set of all events that should be possible from the merged state in the DFA by taking the union of the outgoing transition events of both candidate states (line 5). It constructs the corresponding union of events that are possible according to the EFG by taking the union of the events possible after every event e ∈ Events (lines 7-8). If the set of events possible from the merged state is a subset of the set of events in the EFG (line 9), then the merge is allowed (line 10); otherwise the process is repeated for some alternative pair.
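The subset check at the heart of Algorithm 3 condenses to a few lines. This is a sketch under assumed representations: dfa_in, dfa_out, and efg are hypothetical dict-of-sets stand-ins for the in and out functions and the EFG, and the enclosing ChoosePair iteration is omitted.

```python
def merge_allowed(dfa_in, dfa_out, efg, b_i, b_j):
    """EFG-supported compatibility check for one candidate state pair
    (lines 4-9 of Algorithm 3).

    dfa_in[q]  -- labels on transitions entering DFA state q (function `in`)
    dfa_out[q] -- labels on transitions leaving DFA state q (function `out`)
    efg[e]     -- events that may follow event e in the EFG
    """
    events = dfa_in[b_i] | dfa_in[b_j]          # line 4
    dest_dfa = dfa_out[b_i] | dfa_out[b_j]      # line 5
    dest_efg = set()                            # line 6
    for e in events:                            # lines 7-8
        dest_efg |= efg.get(e, set())
    return dest_dfa <= dest_efg                 # line 9: merge only if subset

# Hypothetical toy DFA fragment and EFG.
efg = {"open": {"edit", "close"}, "edit": {"save", "close"}}
dfa_in = {1: {"open"}, 2: {"edit"}}
dfa_out = {1: {"edit"}, 2: {"save"}}
```

For states 1 and 2 above, the merged state's outgoing events {edit, save} are a subset of what the EFG permits after {open, edit}, so the merge would be allowed.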
## Mathematisches Forschungsinstitut Oberwolfach

Report No. 46/2019
DOI: 10.4171/OWR/2019/46

## Mini-Workshop: Degeneration Techniques in Representation Theory

Organized by Evgeny Feigin, Moscow; Ghislain Fourier, Aachen; Martina Lanini, Rome

6 October – 12 October 2019

Abstract. Modern representation theory has numerous applications in many mathematical areas, such as algebraic geometry, combinatorics, convex geometry, mathematical physics, and probability. Many of the objects and problems of interest show up in a family. Degeneration techniques allow one to study the properties of the whole family instead of concentrating on a single member. This idea has many incarnations in modern mathematics, including Newton-Okounkov bodies, tropical geometry, PBW degenerations, and Hessenberg varieties. During the mini-workshop Degeneration Techniques in Representation Theory various sides of the existing applications of degeneration techniques were discussed and several new possible directions were reported.

Mathematics Subject Classification (2010): 17B10, 17B35, 20G05, 14M15, 14M25, 52B20, 05E05, 05E10.

Introduction by the Organizers

The mini-workshop Degeneration Techniques in Representation Theory, organised by Evgeny Feigin (Moscow), Ghislain Fourier (Aachen) and Martina Lanini (Rome), took place on October 6 – October 12, 2019. It was attended by 17 participants from Italy, Germany, Russia, the USA, Canada, Japan, and Mexico. The mini-workshop consisted of three mini-courses (three lectures each), several one-hour talks and several short presentations by young researchers. Special time slots were reserved for free discussions and problem sessions to facilitate interactions between the participants. All the participants were given the opportunity to present their research. This was especially important for the young participants, since they could get feedback from more mature mathematicians.
We focused on two main goals: to discuss the results obtained by the participants of the mini-workshop within the last year, and to lay the foundation for directions of future
research. The mini-courses were given by Chris Manon (USA), Syu Kato (Japan) and Julianna Tymoczko (USA).

The talks of Chris Manon were devoted to various aspects of Khovanskii bases, with applications in commutative algebra, algebraic geometry and the theory of Newton-Okounkov bodies. The lectures of Chris Manon were very important for the participants of the workshop, since the language of valuations and of SAGBI, Gröbner and Khovanskii bases provides a natural link between various degenerations of flag varieties coming from PBW-type filtrations, tropical geometry and quiver Grassmannians. This was illustrated by the talks of Valentina Kiritchenko (Russia), Peter Littelmann (Germany), Naoki Fujita (Japan), Lara Bossinger (Mexico) and Igor Makhlin (Russia), treating various aspects of the degenerations of flag varieties and Bott-Samelson varieties. In particular, several types of toric degenerations were discussed and the combinatorics of convex polytopes was touched upon.

The mini-course given by Julianna Tymoczko was devoted to Hessenberg varieties. These are projective algebraic varieties naturally defined as subvarieties of the flag varieties of simple finite-dimensional complex Lie groups. The geometry, representation theory and combinatorics of Hessenberg varieties were discussed, including the Goresky-Kottwitz-MacPherson approach to the computation of equivariant cohomology groups. Degeneration techniques play a crucial role in the story, since Hessenberg varieties depend on an operator and it is absolutely necessary to consider these varieties in families. In particular, the famous Peterson varieties can be studied as limits of regular semisimple Hessenberg varieties. The lectures of Julianna Tymoczko were complemented by the talk of Martha Precup (USA), who described some particular results in the theory of Hessenberg varieties and their connection to the famous Stanley-Stembridge Conjecture.
The lectures given by Syu Kato (Japan) were devoted to the theory of semi-infinite flag varieties. These are infinite-dimensional ind-schemes, which are crucial for
the study of various aspects of the moduli spaces of maps to the flag varieties of simple finite-dimensional Lie groups. In particular, they carry information about degenerations of the maps into the so-called quasi-maps. In his talks Syu Kato gave an overview of the modern state of the art in the understanding of the algebro-geometric, representation-theoretic and combinatorial properties of semi-infinite flag varieties. He also stated several very recent results and described open directions. The talk of Ilya Dumanski can be regarded as a companion to the Kato lectures. More precisely, Ilya Dumanski considered the semi-infinite version of the Veronese curves and, more generally, the Veronese embeddings of the flag varieties. The relation to the theory of affine Demazure modules was described and several conjectures were stated.

One more central topic of the mini-workshop was the theory of quiver Grassmannians. By definition, these varieties depend on a representation of a quiver and thus show up in a family. Degenerating a representation, one gets a family of projective algebraic varieties. Degenerations of representations of bipartite type A quivers were the focus of Jenna Rajchgot's talk, who explained how to make use of these degenerations to determine the K-theory of the relevant quiver loci. Since classical flag varieties of type A can be realized as quiver Grassmannians for the equi-oriented type A quiver, one gets a very natural example of such a degeneration procedure. Various PBW-type degenerations and Schubert degenerations can be obtained in this way. The talks of Xin Fang (Germany), Markus Reineke (Germany) and Johannes Flake (Germany) were devoted to various aspects of quiver Grassmannians. In particular, a sum-of-squares formula for the dimension of the cohomology algebra of PBW degenerate flag varieties was presented, and a conjecture about the cohomology algebra was formulated.
Also, certain numerical results on the
properties of the maximal flat degeneration were presented.

Summarizing, the workshop was very successful: not only were we able to discuss various new results obtained by several groups working in different countries, but we also paved the way for future research, formulating open problems and discussing new directions.

Acknowledgement: The MFO and the workshop organizers would like to thank the National Science Foundation for supporting the participation of junior researchers in the workshop by the grant DMS-1641185, "US Junior Oberwolfach Fellows".

## Table of Contents

Xin Fang: Degenerations of flag varieties from PBW filtration . . . 2875
Syu Kato: On the definition of semi-infinite flag manifolds and applications . . . 2877
Julianna Tymoczko: An introduction to Hessenberg varieties . . . 2880
Christopher Manon: Khovanskii bases in three settings . . . 2883
Markus Reineke: Some open problems on degenerate flag varieties . . . 2887
Jenna Rajchgot (joint with Ryan Kinser, Allen Knutson): Types A and D quiver representation varieties . . . 2889
Valentina Kiritchenko: Convex geometric push-pull operators and
FFLV polytopes . . . 2893
Martha Precup (joint with Megumi Harada): Hessenberg Varieties and the Stanley–Stembridge Conjecture . . . 2896
Lara Bossinger: Positive initial ideals and the FFLV polytope . . . 2899
Ilya Dumanski: Global Demazure modules and semi-infinite Veronese curves . . . 2900
Naoki Fujita (joint with Hironori Oya): Newton-Okounkov bodies of Schubert varieties and tropicalized cluster mutations . . . 2902
Igor Makhlin: Non-abelian PBW degenerations . . . 2904
Peter Littelmann (joint with Rocco Chirivì, Xin Fang): Flag varieties, Valuations and Standard Monomial Theory . . . 2905

## Abstracts

Degenerations of flag varieties from PBW filtration
Xin Fang

In this extended abstract I present a short summary of results around FFLV (Feigin-Fourier-Littelmann-Vinberg) bases for Lie algebras of type A and degenerations of flag varieties arising from these bases. It is by no means complete.

Let g = sl_n(C) be the Lie algebra of traceless n × n matrices and
g = n ⊕ h ⊕ n− be a triangular decomposition. For a positive root β ∈ Δ+, fix 0 ≠ f_β ∈ n− to be a root vector of weight −β. By assigning deg(f_β) = 1 we obtain the classical PBW (Poincaré-Birkhoff-Witt) filtration on the universal enveloping algebra U(n−), whose associated graded algebra is isomorphic to U(n^{−,a}), where n^{−,a} is the abelianization of n− obtained by reducing all brackets to zero. For a dominant integral weight λ ∈ P+, the corresponding irreducible representation V(λ) of g is U(n−)-cyclic. Fixing a highest weight vector v_λ ∈ V(λ), the PBW filtration on U(n−) induces a filtration on V(λ). Passing to the associated graded module gives a U(n^{−,a})-module V^a(λ). Let v^a_λ denote the class of v_λ therein. Ten years ago, Feigin, Fourier and Littelmann found a basis of V^a(λ) together with a nice parametrisation:

Theorem 1.
(1) For λ ∈ P+, there exists an explicit lattice polytope FFLV(λ) in R^{Δ+}_{≥0} such that {f^a · v^a_λ | a ∈ FFLV(λ)_Z} forms a C-basis of V^a(λ), where FFLV(λ)_Z := FFLV(λ) ∩ Z^{Δ+} and, for a tuple a = (a_β)_{β∈Δ+} ∈ FFLV(λ)_Z, f^a := ∏_{β∈Δ+} f_β^{a_β} ∈ U(n^{−,a}).
(2) For λ, μ ∈ P+, FFLV(λ + μ)_Z = FFLV(λ)_Z + FFLV(μ)_Z as a Minkowski sum of sets.

Such polytopes can be viewed as the marked chain polytopes associated to the root poset of g; their marked order counterparts are the famous Gelfand-Tsetlin polytopes. There exists a piecewise-linear, lattice-preserving transfer map between them; details can be found in loc. cit. Since
n^{−,a} acts nilpotently on V^a(λ), the abelianized group N^{−,a} := exp(n^{−,a}) acts on V^a(λ). Feigin defined the degenerate flag variety F^a_n := N^{−,a} · [v^a_ρ] ⊆ P(V^a(ρ)), where ρ is the sum of the fundamental weights. Recall that the classical type A flag variety F_n admits various realizations:

(1) highest weight orbit: exp(n−) · [v_ρ] ⊆ P(V(ρ));
(2) linear subspaces: {(V_1, ..., V_{n−1}) | V_i ⊆ V_{i+1}} ⊆ ∏_{k=1}^{n−1} Gr_k(C^n);
(3) vanishing locus of the Plücker relations R^k_{I,J} in the projectivization of the sum of the fundamental representations.

A C-basis of the homogeneous coordinate ring C[F_n] is encoded by the semi-standard Young tableaux. In [9, 10], Feigin showed that similar descriptions exist for F^a_n:

(2a) linear subspaces: {(V_1, ..., V_{n−1}) | pr_{i+1}(V_i) ⊆ V_{i+1}} ⊆ ∏_{k=1}^{n−1} Gr_k(C^n), where, for a fixed basis e_1, ..., e_n of C^n, pr_k : C^n → C^n is the linear projection along e_k;
(3a) F^a_n is the vanishing locus of the degenerate Plücker relations R^{k;a}_{I,J}.

A C-basis of C[F^a_n] is encoded by the semi-standard PBW tableaux arising from the FFLV basis. This implies that F^a_n is a flat degeneration of F_n and is reduced. A striking result establishes an isomorphism of projective varieties between F^a_n and a Schubert variety X_w(λ̃), where w ∈ S_{2n} and λ̃ is a weight of sl_{2n}(C). Later, a representation-theoretical proof of this result was given by showing that the abelianized representation V^a(λ) is isomorphic to a Demazure module for the Lie algebra of doubled rank. The description (2a) is the starting point of using the
quiver Grassmannian approach. Such a description is further generalized in [2, 3], where we classify the linear maps f := (f_1, ..., f_{n−2}) ∈ End(C^n)^{n−2} such that the scheme F^f_n := {(V_1, ..., V_{n−1}) | f_i(V_i) ⊆ V_{i+1}} ⊆ ∏_{k=1}^{n−1} Gr_k(C^n) enjoys nice geometric properties (normal, irreducible, being a flat degeneration, etc.). There exists a subset of End(C^n)^{n−2} called the PBW locus. For f = (f_1, ..., f_{n−2}) coming from the PBW locus, F^f_n can be realized as a highest weight orbit of some partial degenerations of n^-; it can be scheme-theoretically cut out by degenerations of the Plücker relations, and a basis of C[F^f_n] is encoded in the semi-standard PBW tableaux. These varieties are isomorphic to Schubert varieties in some partial flag varieties. A toric degeneration of F_n to the toric variety associated to the FFLV polytope was first constructed in an algebraic setting, and later from a geometric point of view using Newton-Okounkov bodies. From a tropical geometric point of view, such a toric degeneration corresponds to a maximal prime cone in the tropical flag variety with respect to the Plücker embedding. Such a cone has been explicitly described by giving all its non-redundant facets. Relations between this cone and quantum groups are discovered in a forthcoming preprint. Many of the results above hold for sp_{2n}(C), but the entire picture is far from complete.

References
F. Ardila, T. Bliem and D. Salazar, Gelfand-Tsetlin polytopes and Feigin-Fourier-Littelmann-Vinberg polytopes as marked poset polytopes. J. Combin. Theory Ser. A 118 (2011), no. 8, 2454–2462.
G. Cerulli Irelli, X. Fang, E. Feigin, G. Fourier
and M. Reineke, Linear degenerations of flag varieties. Math. Z. 287 (2017), no. 1-2, 615–654.
G. Cerulli Irelli, X. Fang, E. Feigin, G. Fourier and M. Reineke, Linear degenerations of flag varieties: partial flags, defining equations, and group actions, to appear in Math. Z.
G. Cerulli Irelli, E. Feigin and M. Reineke, Quiver Grassmannians and degenerate flag varieties. Algebra and Number Theory 6 (2012), no. 1, 165–194.

Mini-Workshop: Degeneration Techniques in Representation Theory 2877

G. Cerulli Irelli and M. Lanini, Degenerate flag varieties of type A and C are Schubert varieties. Internat. Math. Res. Notices, Volume 2015, Issue 15, 6353–6374.
G. Cerulli Irelli, M. Lanini and P. Littelmann, Degenerate flag varieties and Schubert varieties: a characteristic free approach. Pac. J. Math. 284 (2016), no. 2, 283–308.
X. Fang, E. Feigin, G. Fourier and I. Makhlin, Weighted PBW degenerations and tropical flag varieties. Communications in Contemporary Mathematics, Vol. 21, No. 01, 1850016 (2019).
X. Fang, G. Fourier and M. Reineke, From quantum groups to tropical flag varieties, preprint in preparation.
E. Feigin, G_a^M degeneration of flag varieties. Selecta Math. (N.S.) 18 (2012), no. 3, 513–537.
E. Feigin, Degenerate flag varieties and the median Genocchi numbers. Math. Res. Lett. 18 (2011), no. 6, 1163–1178.
E. Feigin, G. Fourier and P. Littelmann, PBW filtration and bases for irreducible modules in type A_n. Transform. Groups 16 (2011), no. 1, 71–89.
E. Feigin, G. Fourier and P. Littelmann, Favourable modules: filtrations, polytopes, Newton-Okounkov bodies and flat degenerations. Transformation Groups, June 2017, Volume 22, Issue 2, 321–352.
V. Kiritchenko, Newton-Okounkov polytopes of flag varieties. Transformation Groups, June 2017, Volume 22, Issue 2, 387–402.

On the definition of semi-infinite flag manifolds and applications
Syu Kato

Geometry of
flag manifolds reflects the representation-theoretic patterns of a (simply connected) semi-simple algebraic group G and its Lie algebra g. This is already apparent in the Borel-Weil theorem, which states that the set of nef line bundles on the full flag variety B of G is in bijection with the set of isomorphism classes of irreducible rational representations of G. The Beilinson-Bernstein localization theorem and the Bezrukavnikov-Mirković-Rumynin derived localization theorem amplified the Borel-Weil theorem by incorporating all g-modules. These discoveries encourage people to develop a parallel description for affine Lie algebras and p-adic groups based on the geometry of affine flag manifolds. In such a development, it becomes apparent that there are three affine flag manifolds relevant to the representation theory of affine Lie algebras. One of them, usually referred to as the semi-infinite flag manifold, is defined set-theoretically as
X_{∞/2} := G(C((z))) / H(C[[z]]) N(C((z))),
where B ⊂ G is a Borel subgroup, N ⊂ G is its unipotent radical, and H ⊂ G is a maximal torus of B. The geometry and combinatorics of X_{∞/2} are expected to reflect the representation theory of G in positive characteristic and the representation theory of the affine Lie algebra of g at the critical level [1, 2]. However, it turned out to be impossible to equip X_{∞/2} with a separated scheme structure. Taking this into account, the geometric study of a modified version of the semi-infinite flag manifold,
Q_G := G(C[z^{±1}]) / H(C) N(C[z^{±1}]),
was initiated by Finkelberg-Mirković and Feigin-Finkelberg-Kuznetsov-Mirković. One important observation afforded there is that Q_G is the union of the quasi-map spaces Q_G(β), each of which is a compactification of the space of maps from P^1 to
B of degree β, where β is an element of the non-negative span of the positive coroots of G, identified with the effective classes in H_2(B, Z). Since Q_G(β) admits a resolution of singularities by a variant of the space of stable maps, this opened a possibility to connect the theory of semi-infinite flag manifolds with the theory of quantum cohomology and quantum K-groups of B. In the same works, another version,
Q^rat_G := G(C((z))) / H(C) N(C((z))),
which we call the formal version of the semi-infinite flag manifold, is also considered. It is clear that Q_G ⊂ Q^rat_G by set-theoretic considerations, and one can show that this inclusion must be Zariski dense. However, the technical gap between them is rather wide, as Q_G has countable dimension while Q^rat_G has uncountable dimension. In particular, Q_G exists as a scheme, but this scheme structure is not well suited for an intensive algebro-geometric study except for G = SL(2, C). In this series of talks, we first exhibited how to capture Q^rat_G and Q_G(β) more concretely, in the sense that one can actually characterize their homogeneous coordinate rings. More precisely, we employ the theory of extremal weight modules of the quantum loop algebra associated to G to produce an ind-scheme (of infinite type) that is reduced, normal, and is the best approximation of Q^rat_G by an ind-scheme. In particular, we exhibited that it is reasonable to define the ind-piece Q_G of Q^rat_G as
Q_G = mProj ⊕_{λ∈P^+} W(λ)^∨,
where mProj is the multigraded Proj, P^+ is the set of dominant weights of G, W(λ) is the global Weyl module of the current algebra g[z] = g ⊗_C C[z] of extremal weight λ, and ∨ denotes the restricted dual. In this
picture, we recover Q_G(β) as a particular case of the Richardson varieties of Q^rat_G. This makes it possible to reasonably understand the coordinate rings of Q^rat_G and Q_G(β) from a representation-theoretic perspective, and also exhibits the higher cohomology vanishing of their natural nef line bundles. The proofs of these results require the Frobenius splitting of their positive characteristic analogues, whose proof is quite untraditional and uses the Frobenius splitting of the thick flag manifolds, which in turn employs an argument originally due to Olivier Mathieu.

The second topic was a definition of the equivariant K-group of Q^rat_G. Unfortunately, we do not know whether the scheme Q_G is coherent or not. In particular, a naive definition of the equivariant K-group of Q^rat_G using a class of finitely generated O_{Q^rat_G}-modules might break down heavily. To avoid this difficulty, we employ the viewpoint that the ring ⊕_{λ∈P^+} W(λ)^∨ is far from Noetherian, but is "graded Artin". This viewpoint enables us to define the equivariant K-group of Q^rat_G as a "convergent" functional on P modulo the constraints/relations coming from disguises of finite generation/negligible modules. Once we have a definition of K_{H×C^×}(Q_G) or K_{H×C^×}(Q^rat_G), we can prove a Pieri-Chevalley formula in this setting by means of an analysis of the internal structure of the W(λ)'s. We can also interpret such coefficients in terms of Richardson varieties of Q^rat_G.

The third topic was the connection of K_{H×C^×}(Q_G) with the equivariant quantum K-group qK_H(B) of B in the sense of Givental and Lee. In fact, Braverman connects Q_G(β) with the quantum cohomology of B through the J-functions,
that was further polished into the K-theoretic J-function calculation by Braverman-Finkelberg [6, 7]. There, the main geometric step was to (essentially) guarantee that Q_G(β) is normal and has rational singularities. Thanks to the reconstruction theorem, knowledge of the J-function is enough to recover the ring structure of the quantum cohomology/K-group in a sense. However, roughly speaking, this is the kind of statement that a commutative local Artin ring is specified as the annihilator ring of a polynomial, and it is desirable to make things more explicit. This is achieved by specifying an isomorphism
qK_H(B) ≅ K_H(Q_G)
as based rings. The proof of this isomorphism itself only requires the Borel-Weil-Bott theorem for Q_G, but the proof of the preservation of the bases is more delicate: it requires extending the above mentioned results of Braverman-Finkelberg to some particular Schubert-like subvarieties of Q_G(β).

References
G. Lusztig, Hecke algebras and Jantzen's generic decomposition patterns. Adv. Math. 37 (1980), 121–164.
B. Feigin and E. Frenkel, Affine Kac-Moody algebras and semi-infinite flag manifolds. Comm. Math. Phys. 128 (1990), 161–189.
M. Finkelberg and I. Mirković, Semi-infinite flags. I. Case of global curve P^1. In: Differential topology, infinite-dimensional Lie algebras, and applications, volume 194 of Amer. Math. Soc. Transl. Ser. 2, pages 81–112. Amer. Math. Soc., Providence, RI, 1999.
B. Feigin, M. Finkelberg, A. Kuznetsov, and I. Mirković, Semi-infinite flags. II. Local and global intersection cohomology of quasimaps' spaces. In: Differential topology, infinite-dimensional Lie algebras, and applications, volume 194 of Amer. Math. Soc. Transl. Ser. 2, pages 113–148. Amer. Math. Soc., Providence, RI, 1999.
A. Braverman, Instanton counting via affine Lie algebras. I. Equivariant J-functions of (affine) flag manifolds and Whittaker vectors. In: Algebraic Structures and Moduli Spaces: CRM
Workshop, July 14-20, 2003, Montréal, Canada, volume 38 of CRM Proc. Lecture Notes, pages 113–132. Amer. Math. Soc., Providence, RI, 2004.
A. Braverman and M. Finkelberg, Semi-infinite Schubert varieties and quantum K-theory of flag manifolds. J. Amer. Math. Soc. 27 (2014), no. 4, 1147–1168.
A. Braverman and M. Finkelberg, Twisted zastava and q-Whittaker functions. J. London Math. Soc. 96 (2017), no. 2, 309–325.
S. Kumar and K. Schwede, Richardson varieties have Kawamata log terminal singularities. IMRN no. 3 (2014), 842–864.
S. Kato, Frobenius splitting of thick flag manifolds of Kac-Moody algebras. IMRN, to appear.
S. Kato, S. Naito, and D. Sagaki, Equivariant K-theory of semi-infinite flag manifolds and Pieri-Chevalley formula. arXiv:1702.02408.
S. Kato, Loop structure on equivariant K-theory of semi-infinite flag manifolds. arXiv:1805.01718.
S. Kato, Frobenius splitting of Schubert varieties of semi-infinite flag manifolds. arXiv:1810.07106.

An introduction to Hessenberg varieties
Julianna Tymoczko

Hessenberg varieties form a large family of subvarieties of the flag variety that arise naturally in many areas of mathematics, including quantum cohomology, numerical analysis, algebraic geometry, representation theory, and combinatorics.¹ Hessenberg varieties can be defined in general Lie type, but we mainly consider type A_n. A flag is a set of nested subspaces V_1 ⊆ V_2 ⊆ · · · ⊆ V_{n−1} ⊆ C^n with each V_i an i-dimensional C-vector space. The collection of flags is the flag variety GL_n(C)/B, where B consists of upper-triangular invertible matrices; the coset gB generates the flag V_• whose i-dimensional subspace V_i is the span of the first i columns of g. The first parameter used to define Hessenberg varieties is a linear operator X : C^n → C^n. (In fact, only the generalized Jordan type of X is needed.) The second parameter has
several equivalent descriptions, including:
• A function h : {1, 2, ..., n} → {1, 2, ..., n} with h(i) ≥ h(i−1) for all i (nondecreasing) and h(i) ≥ i for all i (upper-triangular).
• A subspace H of gl_n with [H, b] ⊆ H and H ⊇ b.
• A subset M_H of roots so that if α ∈ M_H, β ∈ Φ^+, and α + β ∈ Φ, then α + β ∈ M_H (nondecreasing), and M_H ⊇ Φ^+ (upper-triangular).
Hessenberg varieties were first defined by De Mari-Shayman and then generalized by De Mari-Procesi-Shayman [6, 7].

Definition 1. Given X and h as above, the Hessenberg variety Hess(X, h) is
Hess(X, h) = {flags V_• : X V_i ⊆ V_{h(i)} for all i} = {flags gB : g^{-1} X g ∈ H}.

Geometry of Hessenberg varieties. Hessenberg varieties have some properties analogous to the Schubert cell decomposition of the full flag variety.
• They are paved by affines. This is a condition like a CW-decomposition except that closures of cells need not be a union of other cells.
• If the matrix X is chosen well, the affine pieces are intersections with Schubert cells, called Hessenberg Schubert cells. This holds in type A and in some other cases.
• The Hessenberg Schubert cells are indexed by Young tableaux that combinatorially record the dimension of the cell. The result partly extends to other types; see also the case H = b.

¹The author was partly supported by NSF-DMS-1800773.

We give open questions, including: find a closed formula for the dimension or number of components; which components are singular and what
kinds of singularities do they have; characterize the pieces of cells in each component; identify which permutation flags are in the closure of a given Hessenberg Schubert cell.

Equivariant cohomology of Hessenberg varieties. If X is regular semisimple (namely, X has n distinct eigenvalues), we have special tools to compute the equivariant cohomology of Hess(X, h). The main tool is often called GKM theory after Goresky-Kottwitz-MacPherson, though many people contributed to its creation [2, 5, 12]. After describing GKM theory and generalizations, we specialize to the equivariant cohomology of regular semisimple Hessenberg varieties. Let H be a Hessenberg space and let (ij) be the transposition switching i and j. Let G_H = (V, E_H) be the graph whose vertices are the permutations S_n, with an edge w ↔ w(ij) if the entries (i, j) and (j, i) are both free in H. Each edge w ↔ (ij)w is labeled with the polynomial t_i − t_j. Note that left-multiplication gives edge labels while right-multiplication determines whether an edge is in the graph.

Figure 1. Graphs for H^*_T(Hess(X, h)) on the vertex set {e, (12), (23), (13), (123), (132)}, for h = (1,2,3), (2,2,3), (2,3,3), and (3,3,3). Label slope 1 edges by t_1 − t_2, slope −1 edges by t_2 − t_3, and vertical edges by t_1 − t_3.

Theorem 1 (Tymoczko). Let G_H be the edge-labeled graph above. The equivariant cohomology of the regular semisimple Hessenberg variety Hess(X, H) is:
H^*_T(Hess(X, H)) ≅ { p ∈ C[t_1, ...,
t_n]^{n!} : if w ↔ (ij)w is an edge in E_H then (t_i − t_j) | (p_w − p_{(ij)w}) }.

We describe methods and questions about bases for H^*_T(Hess(X, h)), like:
• how to interpret "upper-triangular bases" for equivariant cohomology,
• a formula for the upper-triangular (Schubert) basis of H^*_T(G/B) [1, Appendix D],
• when there is a unique homogeneous upper-triangular basis [16, 19],
• Question: what can be said about other kinds of bases, e.g. symmetrized?
• Question: what is a formula for a basis for Hess(X, h)?

Representations and Hessenberg varieties. There are two different S_n-actions on the equivariant cohomology of GL_n(C)/B, arising from left-multiplication and right-multiplication by permutations on the vertices of the graph G_H. These give two different representations on equivariant cohomology. Only one restricts to Hessenberg varieties: the left-multiplication action. A simple transposition (i, i+1) acts on an arbitrary p ∈ H^*_T(Hess(X, h)) by
((i, i+1) · p)_w = (i, i+1)(p_{(i,i+1)w}),
where (i, i+1) acts on polynomials by exchanging t_i and t_{i+1}. Extend to the action of an arbitrary permutation by composing simple transpositions. The left action and right action of S_n on H^*_T(GL_n(C)/B) induce different representations on ordinary cohomology: one is trivial while the other contains the sign representation. The right action is called the Springer representation and is associated to Hess(X, b). Macdonald describes the Springer representation as:
(1) Poincaré polynomial of Hess(N, b) = ∑_{partitions λ} (rank of irrep. λ) K̃_{λ μ(N)}(q),
where N is nilpotent, μ(N)
is its Jordan type, and K̃_{λ μ(N)}(q) is a particular normalization of the Kostka-Foulkes polynomials [13, III, Sect. 7, 7.11 and Ex. 6]. The left action, which descends to Hessenberg varieties, is the monodromy action. A version of Equation (1) applies, replacing b with H and replacing the rank of λ with the multiplicity of λ in the S_n-representation on H^*_T(Hess(X, H)). This representation on H^*_T(Hess(X, H)) is particularly important because it coincides with a combinatorial representation at the heart of the so-called Stanley-Stembridge conjecture. Brosnan-Chow and Guay-Paquet [4, 10] recently proved this coincidence, after a conjecture of Shareshian-Wachs. It suggests a geometric approach to the Stanley-Stembridge conjecture through analysis of the representation on H^*_T(Hess(X, H)). Another talk in this workshop will give more details.

References
H. H. Andersen, J. C. Jantzen, and W. Soergel. Representations of quantum groups at a pth root of unity and of semisimple groups in characteristic p: independence of p. Astérisque, (220):321, 1994.
M. F. Atiyah and R. Bott. The moment map and equivariant cohomology. Topology, 23(1):1–28, 1984.
Sara C. Billey. Kostant polynomials and the cohomology ring for G/B. Duke Math. J., 96(1):205–224, 1999.
Patrick Brosnan and Timothy Y. Chow. Unit interval orders and the dot action on the cohomology of regular semisimple Hessenberg varieties. Adv. Math., 329:955–1001, 2018.
Theodore Chang and Tor Skjelbred. The topological Schur lemma and related results. Ann. of Math. (2), 100:307–321, 1974.
F. De Mari, C. Procesi, and M. A. Shayman. Hessenberg varieties. Trans. Amer. Math. Soc., 332(2):529–534, 1992.
Filippo De Mari and Mark A. Shayman. Generalized Eulerian numbers and the topology of the Hessenberg variety of a matrix. Acta Appl. Math.
, 12(3):213–235, 1988.
Lucas Fresse. Betti numbers of Springer fibers in type A. J. Algebra, 322(7):2566–2579, 2009.
Mark Goresky, Robert Kottwitz, and Robert MacPherson. Equivariant cohomology, Koszul duality, and the localization theorem. Invent. Math., 131(1):25–83, 1998.
Mathieu Guay-Paquet. A modular relation for the chromatic symmetric functions of (3+1)-free posets. arXiv:1306.2400.
Megumi Harada and Julianna Tymoczko. Poset pinball, GKM-compatible subspaces, and Hessenberg varieties. J. Math. Soc. Japan, 69(3):945–994, 2017.
Frances Clare Kirwan. Cohomology of quotients in symplectic and algebraic geometry, volume 31 of Mathematical Notes. Princeton University Press, Princeton, NJ, 1984.
I. G. Macdonald. Symmetric functions and Hall polynomials. Oxford Mathematical Monographs. The Clarendon Press, Oxford University Press, New York, second edition, 1995. With contributions by A. Zelevinsky, Oxford Science Publications.
Robert MacPherson and Julianna Tymoczko. Generalized Springer-Hessenberg representations. In preparation.
Martha Precup. Affine pavings of Hessenberg varieties for semisimple groups. Selecta Math. (N.S.), 19(4):903–922, 2013.
Julianna S. Tymoczko. An introduction to equivariant cohomology and homology, following Goresky, Kottwitz, and MacPherson. In Snowbird lectures in algebraic geometry, volume 388 of Contemp. Math., pages 169–188. Amer. Math. Soc., Providence, RI, 2005.
Julianna S. Tymoczko. Linear conditions imposed on flag varieties. Amer. J. Math., 128(6):1587–1604, 2006.
Julianna S. Tymoczko. Permutation actions on equivariant cohomology of flag varieties. In Toric topology, volume 460 of Contemp. Math., pages 365–384. Amer. Math. Soc., Providence, RI, 2008.
Julianna S. Tymoczko. Permutation representations on Schubert varieties. Amer. J. Math., 130(5):1171–1194, 2008.
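As a quick combinatorial illustration of the second parameter in the Hessenberg abstract above, the functions h : {1, ..., n} → {1, ..., n} that are nondecreasing and satisfy h(i) ≥ i can be enumerated by brute force; their number is the Catalan number C_n. A minimal sketch (the function name is ours, not from the talk):

```python
from itertools import product

def hessenberg_functions(n):
    """All h: {1..n} -> {1..n} with h nondecreasing and h(i) >= i,
    encoded as tuples (h(1), ..., h(n))."""
    valid = []
    for h in product(range(1, n + 1), repeat=n):
        nondecreasing = all(h[i] <= h[i + 1] for i in range(n - 1))
        upper_triangular = all(h[i] >= i + 1 for i in range(n))  # h(i) >= i
        if nondecreasing and upper_triangular:
            valid.append(h)
    return valid

# For n = 3 the five Hessenberg functions are
# (1,2,3), (1,3,3), (2,2,3), (2,3,3), (3,3,3);
# four of them label the graphs in Figure 1 of the abstract above.
print([len(hessenberg_functions(n)) for n in (1, 2, 3, 4, 5)])  # [1, 2, 5, 14, 42]
```

The counts are the Catalan numbers because such an h is equivalent to a Dyck path: the staircase condition h(i) ≥ i is exactly the constraint that the path stays weakly above the diagonal.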
Khovanskii bases in three settings
Christopher Manon

In this series of three lectures we discuss computational aspects of valuations, the existence of a degeneration to a variety of complexity one for any irreducible, reduced projective variety, and a
classification theorem for flat toric families of finite type. These three topics share Khovanskii bases as a common feature.

Khovanskii bases and computations in algebras (joint with Kiumars Kaveh). The theory of Khovanskii bases generalizes that of SAGBI bases, which is itself an analogue of the theory of Gröbner bases for subalgebras of polynomial rings. Let k[x] = k[x_1, ..., x_n] be a polynomial ring over an algebraically closed field k, and let ≺ be a monomial order. Following a standard treatment of Gröbner theory, we have the initial form in_≺(f) of a polynomial f = ∑_{i=1}^ℓ c_i x^{α_i}, and the initial ideal in_≺(I) = ⟨{ x^α | c x^α = in_≺(f), f ∈ I }⟩ of an ideal I ⊂ k[x]. In what follows we assume that I is homogeneous with respect to some positive grading on k[x] (say x_i has degree d_i ∈ Z_{>0}). Gröbner bases and the Gröbner fan are defined as usual. A Gröbner basis allows for the resolution of the ideal membership problem, and Buchberger's algorithm allows us to procedurally expand any generating set of I to a Gröbner basis. Robbiano and Sweedler defined a SAGBI basis B ⊂ A of a subalgebra A ⊂ k[x] to be a set whose initial forms generate the initial algebra in_≺(A). If a finite SAGBI basis B ⊂ A exists, the subduction algorithm represents any p ∈ A as a polynomial in the elements of B. Moreover, any generating set of A can be expanded to a SAGBI basis; these are the SAGBI analogues of the division algorithm and Buchberger's algorithm, respectively. Notably, in_≺(A) is an affine semigroup algebra, so Proj(in_≺(A)) is a toric variety. Unfortunately there
are subalgebras and monomial orders with no finite SAGBI basis. To remedy this situation, we generalize to the setting of a valuation v : A \ {0} → Z^d, where Z^d is equipped with a group order ≺. The associated graded algebra gr_v(A) plays the role of in_≺(A). We define a Khovanskii basis to be a set B ⊂ A whose image B̄ ⊂ gr_v(A) is a generating set. It is shown that the subduction algorithm exists in this setting (actually for quasivaluations), and that any generating set can be expanded to a finite Khovanskii basis, provided one exists. Moreover, a theorem of Anderson shows that a toric degeneration also exists in this setting, provided v is a full rank valuation. The structure of the set of quasivaluations with finite Khovanskii bases comes from Gröbner theory. Let I ⊂ k[x] be the ideal which vanishes on a generating set B = {b_1, ..., b_n} ⊂ A. We let Γ = (Z^d, ≺), and Σ_Γ(I) ⊂ Γ^n be the Gröbner complex of I. Furthermore, we take K_Γ(I) ⊆ Σ_Γ(I) to be the set of points of the form w = (v(b_1), ..., v(b_n)), where v : A → Γ is some quasivaluation. Finally, we let Trop_Γ(I) be the Γ-tropical variety of I; this is the set of w ∈ Γ^n where in_w(I) contains no monomials (the tropical variety). We have the following inclusions of complexes: Trop_Γ(I) ⊆ K_Γ(I) ⊆ Σ_Γ(I). It is shown that K_Γ(I) can be identified with the set of quasivaluations with Khovanskii basis B. Likewise, the points w ∈ Trop_Γ(I) with in_w(I) a prime ideal correspond precisely to the valuations
on A with Khovanskii basis B. It is also shown that the initial ideals found in Σ_Γ(I) are the same as those found by classical Gröbner theory (i.e. Z^d = Z). The following is a consequence.

Theorem 1 (Kaveh-M). A positively graded domain A has a full rank valuation v : A \ {0} → Z^d with finite Khovanskii basis B ⊂ A if and only if the tropical variety Trop(I) contains a full-dimensional open cell C with in_C(I) a prime ideal.

Existence of finite Khovanskii bases (joint with Kiumars Kaveh and Takuya Murata). By Theorem 1, finding a full rank valuation on an algebra A is equivalent to finding a full-dimensional prime cone C ⊂ Trop(I). Moreover, Anderson's theorem implies that these conditions are both equivalent to the existence of a homogeneous flat degeneration Spec(A) → Spec(k[S]), where S ⊂ Z^d is a finitely generated semigroup. In this context, a toric degeneration X → X_0 is a flat family π : X → A^1(k), where X is equipped with a G_m(k)-action intertwined with the standard action on A^1(k) by π, with π^{-1}(0) = X_0 a toric variety and π^{-1}(c) = X for any c ≠ 0. There are positively graded algebras with no toric degenerations. Following [9, Section 3], the section ring R_D = ⊕_{n≥0} H^0(C, O(nD)) of a divisor D on a smooth curve C carries a homogeneous rank 2 valuation with finite Khovanskii basis if and only if nD ∼ mP for some n, m > 0 and P ∈ C. Ilten and Wröbel have also constructed examples out of non-normal rational curves. In response,
we let the special fiber of the degeneration belong to a larger class of varieties. The complexity of a variety X equipped with an effective action of an algebraic torus T is dim(X) − rank(T). In this way, complexity-1 varieties are a natural relaxation of toric varieties. Altmann and Hausen have constructed a combinatorial theory of varieties of arbitrary complexity; in particular, complexity-1 varieties are roughly captured by polyhedral data and the geometry of curves. The following theorem says that the coordinate ring of a projective variety can be degenerated to the coordinate ring of a complexity-1 variety.

Theorem 2 (Kaveh-M-Murata). Let A be a positively graded k-domain of dimension d. Then there is a homogeneous valuation v : A \ {0} → Z^{d−1} of rank d − 1 with a finite Khovanskii basis.

Roughly speaking, the two main ingredients of Theorem 2 are a technical result on the finite generation of Rees algebras of symbolic powers of height-1 prime ideals, and Bertini's theorem.

Khovanskii bases for toric families (joint with Kiumars Kaveh). Fix a valuation v : A → Z^d with finite Khovanskii basis, let σ ⊂ Trop(I) be the corresponding prime cone, and let N be the lattice generated by the integer points in σ ∩ Z^n. A construction shows that there is a flat T_N-family π : X → Y_σ, where Y_σ is the affine toric variety associated to σ ⊂ N_Q. Just as a toric degeneration over A^1(k) corresponds to the rank 1 valuation given by a point in a prime cone in Trop(I), the family X corresponds to a valuation w : A → O_σ, where O_σ is the semialgebra of N-integral piecewise-linear functions on the cone σ. If two such
prime cones σ_1 and σ_2 share a facet σ_1 ∩ σ_2 = τ, one can then consider a corresponding valuation into O_Σ, where Σ is the fan defined by σ_1, σ_2, τ; this is the natural setting for considering mutations between the Newton-Okounkov bodies associated to the prime cones σ_1, σ_2. With these constructions as motivation, it is natural to consider valuations w : A → O_N, where O_N is the semifield of piecewise-linear functions on the lattice N. A valuation w : A → O_N defines a rank 1 valuation for each ρ ∈ N, given by evaluation: w_ρ(f) = w(f)[ρ]. We say B ⊂ A is a Khovanskii basis of w if B is a Khovanskii basis of w_ρ for each ρ ∈ N. The following result is a generalization of the equivalence between toric degenerations and valuations with finite Khovanskii bases in the classical setting.

Theorem 3 (Kaveh-M). For every valuation w : A → O_N with finite Khovanskii basis, there is a finite complete fan Σ and a corresponding flat affine T_N-family π : X(v) → Y_Σ of finite type with reduced, irreducible fibers and general fiber Spec(A). Moreover, every such family arises this way.

Theorem 3 is not stated as a 1-1 correspondence between flat toric families and valuations because there is an indeterminacy in the choice of the fan Σ. If we specialize to the case where A is a polynomial ring and the Khovanskii basis is a set of linear forms, Theorem 3 recovers Klyachko's classification of toric vector bundles. In particular, one can take the Khovanskii basis to be the representable matroid constructed by Di Rocco, Jabbusch, and
Smith. Theorem 3 and its corollaries suggest a new way to construct toric vector bundles, and other toric families. Let k[T_N] be the coordinate ring of the torus T_N and w_N : k[T_N] → O_N be the canonical valuation which sends a Laurent polynomial p(t) = ∑_{i=1}^ℓ c_i t^{α_i} to the support function w_N(p) = min{α_i | c_i ≠ 0} of its Newton polytope. This valuation extends immediately to the field of rational functions k(T_N). Now, any affine variety X equipped with a k(T_N)-point i : k[X] → k(T_N) has a valuation i^* w_N : k[X] → O_N which potentially defines a toric family with general fiber X. In this way, we produce toric vector bundles by solving linear equations in the field k(T_N) and evaluating the result with w_N. One can use similar techniques to show that the support functions of the m-faces of any n-simplex are an O_N-point on the tropical Grassmannian of m-planes in n-space.

References
K. Altmann and J. Hausen, Polyhedral divisors and algebraic torus actions. Math. Ann. 334(3) (2006), 557–607.
D. Anderson, Okounkov bodies and toric degenerations. Math. Ann. 356(3) (2013), 1183–1202.
D. A. Cox, J. Little and D. O'Shea, Ideals, Varieties, and Algorithms. Undergraduate Texts in Mathematics, Springer, Cham, fourth edition (2015).
S. Di Rocco, K. Jabbusch and G. Smith, Toric Vector Bundles and Parliaments of Polytopes. Trans. Amer. Math. Soc. 370(11) (2018), 7715–7741.
N. Ilten and M. Wröbel, Khovanskii-finite valuations, rational curves, and torus actions. arXiv:1807.08780 [math.AG].
A. A. Klyachko, Equivariant bundles over toric varieties. Izv. Akad. Nauk SSSR Ser. Mat. 53(5) (1989), 1001–1039.
K. Kaveh and C. Manon, Khovanskii bases, higher
rank valuations, and tropical geometry. SIAM J. Appl. Algebra Geom., 3(2) (2019), 292–336.
[8] K. Kaveh and C. Manon, Toric bundles, valuations, and tropical geometry over the semifield of piecewise linear functions. arXiv:1907.00543 [math.AG].
[9] K. Kaveh, C. Manon, and T. Murata, On degenerations of projective varieties to complexity-one T-varieties. arXiv:1708.02698 [math.AG].
[10] L. Robbiano and M. Sweedler, Subalgebra bases. In Commutative Algebra (Salvador, 1988), volume 1430 of Lecture Notes in Math., 61–87. Springer, Berlin, 1990.
[11] B. Sturmfels, Gröbner bases and convex polytopes, volume 8 of University Lecture Series. American Mathematical Society, Providence, RI, 1996.

Mini-Workshop: Degeneration Techniques in Representation Theory 2887

Some open problems on degenerate flag varieties
Markus Reineke

We review some of the main results of [1, 2, 3] on so-called linear degenerations of SL_{n+1}-flag varieties. We formulate several open problems concerning their geometry, and speculate on potential convolution-type constructions and their representation-theoretic properties.

Let V be a complex vector space of dimension n+1 with a fixed basis v_1, ..., v_{n+1}. For a subset I ⊂ {1, ..., n+1}, we denote by pr_I ∈ End(V) the projection operator defined by pr_I(v_i) = 0 for i ∈ I and pr_I(v_i) = v_i for i ∉ I. We define the degenerate flag variety

Fl^a(V) = {(U_1, ..., U_n) ∈ ∏_{i=1}^n Gr_i(V) | pr_{i+1}(U_i) ⊂ U_{i+1} for i = 1, ..., n−1}.

It is an irreducible normal variety of dimension n(n+1)/2 which is a flat degeneration of the variety Fl(V) of complete flags in V. It is acted
upon by a unipotent group G_a with finitely many orbits. The maximal torus T of GL(V) scaling the given basis of V admits a one-parameter subgroup whose fixed points are the tuples of coordinate subspaces indexed by

{I_* = (I_1, ..., I_n) | I_i ⊂ {1, ..., n+1}, |I_i| = i, I_i \ {i+1} ⊂ I_{i+1}},

and such that the attractors of the fixed points are precisely the G_a-orbits.

Problem 1: Describe the closure relation between G_a-orbits in their parametrization via tuples of sets I_*. What are the (minimal) singularities in the closures of orbits? How can one describe the intersection cohomology complexes on orbit closures (see Problem 3 below)?

The Euler characteristic of Fl^a(V) can be computed as the number of cells, which equals the (n+1)-st Genocchi number. One compact formula for this is

χ(Fl^a(V)) = ∑_{f_*} ( ∏_i (f_i + 1) · 2^{−r(f)} )²,

where the sum ranges over all Motzkin paths of length n+1, that is, tuples f_* = (0 = f_0, f_1, ..., f_n, f_{n+1} = 0) of non-negative integers such that f_{i+1} − f_i ∈ {−1, 0, 1} for all i, and r(f) denotes the number of rises of f_*, that is, indices i such that f_{i+1} = f_i + 1. This sum-of-squares type formula suggests the following:

Problem 2: Define a convolution-type algebra structure on H^{BM}_*(Fl^a(V)), making it into a semisimple algebra whose irreducible representations are naturally parametrized by Motzkin paths.

The variety Fl^a(V) is a special case of a principal quiver Grassmannian, that is, a variety parametrizing
subrepresentations U of a representation P ⊕ I of a Dynkin quiver Q, where P is a projective representation, I is an injective representation, and the class [U] of U in the Grothendieck group of Q equals [P]. Namely, the variety Fl^a(V) arises when Q is a linearly oriented quiver of type A_n, P = CQ and I = (CQ)*. For every principal quiver Grassmannian, the group Aut_Q(P ⊕ I) acts on Gr_{[P]}(P ⊕ I) with finitely many orbits. Assuming that all these facts continue to hold over finite fields, we can consider spaces of functions invariant with respect to this group action:

Problem 3: Define a Hall-algebra type multiplication on

A = ⊕_{[P],[I],d} Q^{Aut_Q(P⊕I)}(Gr_d(P ⊕ I)),

the spaces of Aut_Q(P ⊕ I)-invariant functions on the Gr_d(P ⊕ I), making it into a graded associative algebra. Let U_q(b_+) be the quantized enveloping algebra of the Borel subalgebra of sl_{n+1}(C); then the multiplication should induce an isomorphism of graded vector spaces U_q(b_+) ⊗ U_q(b_+) → A. For every orbit O in some Gr_d(P ⊕ I), the function associating to a point U in O the Poincaré polynomial of the stalk over U of the intersection cohomology sheaf on the orbit closure of O gives an element f_O of A; the collection of all f_O should provide A with a canonical basis. How is this basis related to variants of Lusztig's canonical basis in quantized enveloping algebras?

The maximal flat linear degeneration of the flag variety Fl(V) is defined by

Fl^{mf}(V) = {(U_1, ..., U_n) ∈ ∏_{i=1}^n Gr_i(V) | pr_{i,i+1}(U_i) ⊂ U_{i+1} for i = 1, ..., n−1}.

It is
a locally complete intersection variety which is equidimensional, and it is a flat degeneration both of Fl(V) and of Fl^a(V). Its irreducible components are naturally parametrized by non-crossing arc diagrams: an arc diagram is a subset A of {(i,j) | 1 ≤ i < j ≤ n}; it is called non-crossing if there is no pair (i,j), (k,l) ∈ A such that i ≤ k < j ≤ l. Non-crossing arc diagrams are counted by the n-th Catalan number. Given a non-crossing arc diagram A, define

r(A)_{i,j} = i − |{arcs starting in {1, ..., i} and ending in {i+1, ..., j}}|.

Then the closure C(A) of the set of all (U_1, ..., U_n) ∈ Fl^{mf}(V) such that

rk(pr_{[i,j]} : U_i → U_j) = r(A)_{i,j} for all i < j

is an irreducible component of Fl^{mf}(V), and all components arise in this way.

Problem 4: Are the components C(A) normal? What is their singular locus? Do they admit natural closed embeddings into Schubert varieties? When are two such components isomorphic?

Like Fl^a(V), the variety Fl^{mf}(V) can also be realized as a quiver Grassmannian, namely as Gr_{[CQ]}(M) for M = rad(CQ) ⊕ CQ/rad(CQ) ⊕ (CQ)*, so the automorphism group Aut_Q(M) acts naturally on Fl^{mf}(V).

Problem 5: Are there only finitely many orbits for this action? If yes, how can they be parametrized? What is the closure relation?

Both Fl^a(V) and Fl^{mf}(V) are members of a flat family