How To Determine The Tube Length For A Telescope | Telescopic Tube

In order to determine the tube length for a telescope, there are several factors to consider. The type of telescope and the intended use are the two most important. Telescopes come in a variety of types, including refractors, reflectors, and catadioptric designs, and each type calls for a different tube length for optimal performance. The intended use of the telescope also needs to be taken into account.

The most common type of telescope is the refractor. A refractor's tube length is governed by its focal ratio, which is calculated by dividing the focal length by the aperture. For example, a telescope with a focal length of 1000mm and an aperture of 200mm has a focal ratio of f/5. Because a refractor's objective sits at one end of the tube and the focuser at the other, the tube length is roughly equal to the focal length, so refractors with a focal ratio of f/6 or longer need correspondingly long tubes.

Reflectors, such as Newtonian and Dobsonian telescopes, typically require a longer tube than compact folded designs. The tube length is determined by the focal length together with the desired field of view, that is, the amount of sky visible when looking through the telescope. For example, a Newtonian telescope with a focal length of 1000mm and a field of view of 1° would require a tube length of at least 1300mm.

Catadioptric telescopes, such as Schmidt-Cassegrain and Maksutov-Cassegrain designs, fold the light path and are therefore quite compact, requiring a much shorter tube than refractors and reflectors. These telescopes are usually built with a fixed focal length and a focal ratio of around f/10, which can be housed in a tube of roughly 500mm.

The intended use of the telescope should also be taken into account when determining the tube length. A telescope used for astrophotography will generally require a longer tube than one used for visual observation, because astrophotography benefits from a longer focal length to capture finer detail in the image. Generally, a telescope used for astrophotography should have a tube length of at least 1500mm.

Once the type of telescope and intended use have been determined, the tube length can be estimated. For a refractor, the focal ratio and aperture give the tube length; for a reflector, the focal length and field of view are used; a catadioptric telescope's fixed focal ratio of around f/10 corresponds to a tube of roughly 500mm. For astrophotography, a tube length of at least 1500mm is recommended.

In conclusion, the tube length for a telescope is determined by the type of telescope and the intended use. Refractors require a tube length set by the focal ratio, reflectors one set by the field of view and focal length, and catadioptric telescopes one set by their compact, folded optical design. For astrophotography, a tube length of at least 1500mm is recommended.
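To make the arithmetic above concrete, here is a minimal sketch (Python, not part of the original article) that computes the focal ratio from a focal length and aperture and gives a rough tube-length estimate. The per-type multipliers simply mirror the figures quoted above (tube ≈ focal length for a refractor, the 1000mm → ~1300mm Newtonian example, ~500mm for an f/10 catadioptric) and are illustrative assumptions, not optical design rules.

```python
# Illustrative sketch: rough tube-length estimates based on the figures quoted above.
# The multipliers are assumptions taken from the article, not optical design formulas.

def focal_ratio(focal_length_mm: float, aperture_mm: float) -> float:
    """Focal ratio (f/number) = focal length / aperture."""
    return focal_length_mm / aperture_mm

def rough_tube_length(telescope_type: str, focal_length_mm: float) -> float:
    """Very rough tube-length estimate per telescope type (illustrative only)."""
    if telescope_type == "refractor":
        return focal_length_mm          # tube roughly equal to the focal length
    if telescope_type == "newtonian":
        return 1.3 * focal_length_mm    # mirrors the 1000 mm -> ~1300 mm example above
    if telescope_type == "catadioptric":
        return 500.0                    # folded optics: ~500 mm as quoted above
    raise ValueError(f"unknown telescope type: {telescope_type}")

print(focal_ratio(1000, 200))                  # 5.0, i.e. f/5 as in the example
print(rough_tube_length("newtonian", 1000))    # ~1300 mm
```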
Telescope Basics 2 (of 6): Learn to calculate magnification for a telescope/understand focal ratios. Hosted by David Fuller of "Eyes on the Sky," this video discusses the basics of telescope magnification and focal ratio. Each concept is covered, guiding the viewer through how to calculate the magnification of a telescope and eyepiece combination, and how to determine the focal ratio of a given telescope. An excellent primer for anyone wanting to understand more about telescopes. With the tube at its minimum height, the adapter is 1.25". In this configuration the distance from the secondary mirror to the focal plane in a 15" O.D. tube would have been 9.0625".
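As a companion to the video's topic, the standard formula is: magnification = telescope focal length ÷ eyepiece focal length. A minimal sketch (Python; the example values are made up for illustration):

```python
# Magnification = telescope focal length / eyepiece focal length (both in mm).
def magnification(telescope_fl_mm: float, eyepiece_fl_mm: float) -> float:
    return telescope_fl_mm / eyepiece_fl_mm

# Example values (illustrative only): a 1000 mm focal-length telescope with a 10 mm eyepiece.
print(magnification(1000, 10))   # 100x
print(magnification(1000, 25))   # 40x with a longer (lower-power) eyepiece
```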
{"url":"https://telescopictube.com/how-to-determine-the-tube-length-for-a-telescop/","timestamp":"2024-11-08T11:42:38Z","content_type":"text/html","content_length":"87876","record_id":"<urn:uuid:b13b9b68-ce80-40ce-8715-3fe4cd0ddc85>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00882.warc.gz"}
What is Compound Interest?

Compound interest is interest calculated on the principal amount invested, which is then added to the principal amount and compounded again. It can be earned daily, weekly, monthly or yearly. Generally, the more times an amount is compounded, the more money a person can make. As long as a person leaves an interest-earning account alone, by not removing money from it, he begins making more money on the investment (given a stable interest rate) because the money he earns is added back to the principal amount. It's a simple fact that more money earning interest makes more money. Each time interest is compounded, the money earned gets added to the total.

A similar result can be seen by someone raising rabbits. If two bunnies produced a litter, and the person kept all those bunnies, then he might end up with eight rabbits. The original bunnies would keep on breeding, as would the new litter, and more and more rabbits would be produced. Compound interest won't be quite that dramatic, unless the person is investing huge sums of money. The important parallel is that the first pair of bunnies (the original investment) and their offspring (interest) now combine together to produce yet more rabbits, and as combined, they will produce a great deal more than if they were sold off and separated.

Most investment firms, banks, and the like will state how often interest is compounded in an account. In some cases, the investment doesn't compound, but earns what is called simple interest. This means that the investor only makes money on the amount he initially invested, and the profits are not reinvested to make more money.

Individuals can figure out exactly how much an investment will be worth in a few years with a scientific calculator. They also need to know the initial investment amount (principal, or p), the rate of interest (r), the number of years they plan to allow the investment to sit (years, or y), and the number of times per year the investment will compound (t). Investors should remember that only a portion of the interest is earned in each period, so the annual rate has to be divided by the number of times interest gets compounded each year (t). The formula is as follows:

Total value = p(1 + r/t)^(t × y)

Putting this to work, in dollar amounts, someone might invest $10,000 US Dollars (USD) in a savings account that earns 5% interest per year and is compounded monthly. If the person leaves that money alone for five years, he could figure out exactly how much money he'd make in that time period, and the value of the account at the end of five years. The equation would look like this:

10,000 × (1 + 0.05/12)^(12 × 5) = $12,833.59

If the investor only earned simple interest, at even 5.5% per year, he wouldn't make that much money:

10,000 × (1 + 0.055 × 5) = $12,750.00

One reason to understand compound interest is because some accounts that earn simple interest offer a higher yearly interest rate. If the investment is long term, however, the investor may make more money with a lower interest rate that compounds. On the other hand, if an investor knows that he'll be removing the money after a year or two, a higher interest rate that is not compounded may be a better investment than an account that compounds the interest but has a lower rate. Investors also shouldn't be daunted by these formulas; anyone who has access to the Internet can find hundreds of sites that offer interest calculators, and most of them are very easy to use.
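A minimal sketch of the calculation described above (Python; the variable names p, r, t, y follow the article's notation):

```python
# Compound vs. simple interest, following the formula above:
#   total = p * (1 + r/t) ** (t * y)

def compound_total(p: float, r: float, t: int, y: int) -> float:
    """p = principal, r = annual rate, t = compounding periods per year, y = years."""
    return p * (1 + r / t) ** (t * y)

def simple_total(p: float, r: float, y: int) -> float:
    """Simple interest: only the original principal earns interest."""
    return p * (1 + r * y)

print(round(compound_total(10_000, 0.05, 12, 5), 2))   # 12833.59, the compounded example
print(round(simple_total(10_000, 0.055, 5), 2))        # 12750.0, the simple-interest example
```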
{"url":"https://www.smartcapitalmind.com/what-is-compound-interest.htm","timestamp":"2024-11-08T12:38:36Z","content_type":"text/html","content_length":"111644","record_id":"<urn:uuid:282c4967-4577-48c9-8c85-da938e41313a>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00169.warc.gz"}
Lesson 23 Polynomial Identities (Part 1)

• Let's learn about polynomial identities.

23.1: Let's Find Some Differences

1. Calculate the following differences:
   1. \(30^2-29^2\)
   2. \(41^2-40^2\)
   3. \(18^2-17^2\)
2. What do you notice about these calculations?

23.2: A Closer Look at Differences

1. Clare thinks the difference between the squares of two consecutive integers will always be the sum of the two integers. Is she right? Explain or show your reasoning. Pause here for a class discussion.
2. Andre thinks the difference between the squares of two consecutive even integers will always be 4 times the sum of the two integers. Is he right? Explain or show your reasoning.

Noah says that the difference of two cubes is always divisible by the difference of the two numbers. Do you agree with Noah?

23.3: That Expression is How Big?

Apply the distributive property to rewrite the following expressions without parentheses, combining like terms where possible. What do you notice?

1. \((x - 1)(x + 1)\)
2. \((x - 1)(x^2+x+1)\)
3. \((x - 1)(x^3+x^2+x+1)\)
4. \((x - 1)(x^4+x^3+x^2+x+1)\)
5. \((x - 1)(x^{20} + x^{19} + \dots + x + 1)\)

In earlier grades we learned how to do things like apply the distributive property and combine like terms to rewrite expressions in different ways. For example, \((2x+1)(x-3) = 2x^2-5x-3\). The new algebraic expression on the right comes from writing the original on the left in a different way. More precisely, the expression on the left has the same value for all possible inputs \(x\) as the expression on the right, making them equivalent expressions. This is an example of an identity. Two examples of identities seen in earlier grades are:

\(\displaystyle a^2-b^2=(a+b)(a-b)\)

\(\displaystyle (a+b)^2=a^2+2ab+b^2\)

For all possible values of \(a\) and \(b\), the left and right sides of these equations are equal. In fact, the first of these identities can be extended to show that for any positive integer value of \(n\) the expression

\(\displaystyle (x - 1)(x^{n-1} + x^{n-2} + \dots + x^2 + x + 1)\)

is equivalent to

\(\displaystyle x^{n}-1\)

• identity: An equation which is true for all values of the variables in it.
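For readers who want to check the pattern computationally, here is a small sketch (assuming the SymPy library is available) that expands \((x-1)(x^{n-1}+\dots+x+1)\) for several values of \(n\) and confirms the result equals \(x^n-1\):

```python
# Sketch (assumes SymPy): verify (x - 1)(x^{n-1} + ... + x + 1) = x^n - 1 for small n.
import sympy as sp

x = sp.symbols('x')
for n in range(2, 8):
    lhs = sp.expand((x - 1) * sum(x**k for k in range(n)))  # sum is 1 + x + ... + x^(n-1)
    assert sp.simplify(lhs - (x**n - 1)) == 0                # identity holds
    print(f"n={n}: expansion is {lhs}")
```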
{"url":"https://curriculum.illustrativemathematics.org/HS/students/3/2/23/index.html","timestamp":"2024-11-03T05:55:39Z","content_type":"text/html","content_length":"75716","record_id":"<urn:uuid:80f86788-cf55-40ac-ac74-c30c6f44128e>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00695.warc.gz"}
LNAI 13796

Agostino Dovier · Angelo Montanari · Andrea Orlandini (Eds.)

AIxIA 2022 – Advances in Artificial Intelligence
XXIst International Conference of the Italian Association for Artificial Intelligence, AIxIA 2022, Udine, Italy, November 28 – December 2, 2022, Proceedings

Lecture Notes in Computer Science – Lecture Notes in Artificial Intelligence

Founding Editor: Jörg Siekmann
Series Editors: Randy Goebel, University of Alberta, Edmonton, Canada; Wolfgang Wahlster, DFKI, Berlin, Germany; Zhi-Hua Zhou, Nanjing University, Nanjing, China

The series Lecture Notes in Artificial Intelligence (LNAI) was established in 1988 as a topical subseries of LNCS devoted to artificial intelligence. The series publishes state-of-the-art research results at a high level. As with the LNCS mother series, the mission of the series is to serve the international R & D community by providing an invaluable service, mainly focused on the publication of conference and workshop proceedings and postproceedings.

Editors: Agostino Dovier, University of Udine, Udine, Italy; Angelo Montanari, University of Udine, Udine, Italy; Andrea Orlandini, National Research Council (CNR-ISTC), Rome, Italy

ISSN 0302-9743, ISSN 1611-3349 (electronic) – Lecture Notes in Artificial Intelligence
ISBN 978-3-031-27180-9, ISBN 978-3-031-27181-6 (eBook)
https://doi.org/10.1007/978-3-031-27181-6
LNCS Sublibrary: SL7 – Artificial Intelligence

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. Chapters “Approximate Inference in Probabilistic Answer Set Programming for Statistical Probabilities” and “MAP Inference in Probabilistic Answer Set Programs” are licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/). For further details see license information in the chapters.

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made.
The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland This volume contains the proceeding of the 21st International Conference of the Italian Association for Artificial Intelligence, referred to for short as AIxIA. AIxIA is very active in organizing scientific initiatives as well as events for the dissemination of Artificial Intelligence in industry, society, and schools. Among these activities, a scientific international conference has been organized every two years since 1991 and then yearly since 2015. In the last two years, due to the Covid-19 pandemic, the conference was organized in remote mode in Torino and Milano (LNCS 13196 and 12414). Previously, it was organized as a standard conference in Rende (2019: LNCS 11946), Trento (2018: LNCS 11298), Bari (2017: LNCS 10640), Genova (2016: LNCS 10037), Ferrara (2015: LNCS 9336), Torino (2013: LNCS 8249), Palermo (2011, LNCS 6934), Reggio Emilia (2009: LNCS 5883), Roma (2007: LNCS 4733), Milano (2005: LNCS 3673), Pisa (2003: LNCS 2929), Bari (2001: LNCS 2175), Bologna (1999: LNCS 1792), Roma (1997: LNCS 1321), Firenze (1995: LNCS 992), Torino (1993: LNCS 728), Palermo (1991: LNCS 549), and Trento (1989). The recent positive evolution of the pandemic disease allowed us to take the risk of organizing an attended conference. This seems to have been much appreciated by the community as more than 350 people attended the meeting. As for the numbers, 54 research papers were submitted to the conference and evaluated by at least three reviewers; moreover, 29 discussion papers were submitted and 16 were selected for presentation at the conference. 227 authors were involved, 155 from Italy, 13 from France, 12 from USA, 8 from India, 7 from UK, and 32 from other countries. Among the regular papers, 33 papers were selected for publication in these proceedings. The conference program included two prestigious keynote speakers, namely: Subbarao Kambhampati (Arizona State University): Symbols as a Lingua Franca for Supporting Human-AI Interaction For Explainable and Advisable AI Systems; Georg Gottlob (University of Oxford): My adventures with Datalog: Walking the thin line between theory and practice (with a paper also included in the proceedings). In addition, it offered three tutorials on hot research topics: Ferdinando Fioretto (Syracuse University): Endto-end constrained optimization learning; Antonio Lieto (University of Turin): Cognitive Design for Artificial Minds; Angelo Oddi, Riccardo Rasconi (CNR-ISTC, Rome), and Marco Baioletti (University of Perugia): Quantum computing and planning. AIxIA 2022 also covered many aspects of theoretical and applied AI through 17 colocated workshops devoted to specific topics and bringing together the corresponding AI communities. The workshops chairs were Andrea Formisano and Alberto Finzi. 
Thus, in parallel to the main program, the conference features the following workshops, for a total of 175 accepted regular papers, plus a number of invited talks: – 6th Workshop on Advances in Argumentation in Artificial Intelligence; – 11th Workshop on Machine Learning and Data Mining; – 4th Workshop on Artificial Intelligence and fOrmal VERification, Logic, Automata, and sYnthesis; – 1st Workshop on Artificial Intelligence for Cultural Heritage; – 1st Workshop on Artificial Intelligence and Creativity; – 3rd Italian Workshop on Artificial Intelligence for an Ageing Society; – R.i.C.e.R.c.A: RCRA Incontri E Confronti; – 10th Italian Workshop on Planning and Scheduling; – 9th Italian Workshop on Artificial Intelligence and Robotics; – 1st Workshop on Artificial Intelligence for Healthcare; – 6th Workshop on Natural Language for Artificial Intelligence; – 3rd Workshop on Explainable Artificial Intelligence; – 1st Workshop on Artificial Intelligence for Human Machine Interaction; – 1st Workshop on Artificial Intelligence for Public Administration; – 2nd Italian Workshop on Artificial Intelligence and Applications for Business and Industries; – 1st Workshop on Bias, Ethical AI, Explainability and the role of Logic and Logic Programming; – 1st Workshop on Strategies, Prediction, Interaction, and Reasoning in Italy. Finally, a doctoral consortium with 20 presentations from PhD students was organized on the first day of the conference. The doctoral consortium chairs were Gabriella Cortellessa and Luca Di Gaspero. The organization benefited from “Platinum” sponsorships from EUSTEMA, Danieli Automation, Generali, Intesa Sanpaolo, and TechEdge, “Gold” sponsorships from OverIT, Previnet, and u-blox, and “Bronze” sponsorships from BeanTech, SMC, and Confindustria Udine. The conference was kindly supported by the Artificial Intelligence Journal, and received the patronage of the European Commission, the Friuli Venezia Giulia Region, and the Municipality of Udine. A special session devoted to industry and AI was organized by Giuseppe Serra and Fabio Mercorio. Last but not least, we would like to thank the organizing committee, in particular Andrea Brunello and Nicola Saccomanno, who did a huge amount of high-quality work. Moreover, we thank the web master Nicola Gigante, our colleagues and friends Dario Della Monica and Gabriele Puppis, and all the PhD students for their help in the practical management of the conference. Finally, we thank the board of directors of the AIxIA for their constant support, the Rector of the University of Udine for the opportunity to organize the conference in the new building of the scientific library, and the technical staff of University of Udine (in particular, Renato Spoletti, Stefano Bonomi, and Ester Orlandi) for their precious work. 
December 2022 Agostino Dovier Angelo Montanari Andrea Orlandini General Chair Angelo Montanari University of Udine, Italy Program Committee Chairs Agostino Dovier Andrea Orlandini University of Udine, Italy National Research Council (CNR-ISTC), Italy Program Committee Davide Bacciu Marco Baioletti Matteo Baldoni Stefania Bandini Adriano Barra Sebastiano Battiato Stefano Bistarelli Stefano Borgo Francesco Calimeri Alberto Casagrande Antonio Chella Alessandro Cimatti Gabriella Cortellessa Stefania Costantini Alessandro Dal Palù Dario Della Monica Stefano Ferilli Alberto Finzi Fabio Fioravanti Andrea Formisano University of Pisa, Italy University of Perugia, Italy University of Turin, Italy Complex Systems & AI Research Center,Italy University of Salento, Italy University of Catania, Italy University of Perugia, Italy National Research Council (CNR-ISTC), Italy University of Calabria, Italy University of Trieste, Italy University of Palermo, Italy Fondazione Bruno Kessler, Italy National Research Council (CNR-ISTC), Italy University of Aquila, Italy University of Parma, Italy University of Udine, Italy University of Bari, Italy University of Naples “Federico II”, Italy University of Chieti-Pescara, Italy University of Udine, Italy Salvatore Gaglio Chiara Ghidini Gianluigi Greco Luca Iocchi Antonio Lieto Francesca A. Lisi Michele Loreti Fabio Mercorio Angelo Oddi Andrea Omicini Luigi Palopoli Filippo Palumbo Fabio Patrizi Luigi Portinale Gian Luca Pozzato Luca Pulina Alessandro Raffetà Riccardo Rasconi Francesco Ricca Fabrizio Riguzzi Marco Roveri Salvatore Ruggieri Enrico Scala Giovanni Semeraro Luciano Serafini Gianluca Torta Mauro Vallati Eloisa Vargiu University of Palermo, Italy Fondazione Bruno Kessler, Italy University of Calabria, Italy University of Rome “Sapienza”, Italy University of Turin, Italy University of Bari, Italy University of Camerino, Italy University of Milan Bicocca, Italy National Research Council (CNR-ISTC), Italy University of Bologna “Alma Mater Studiorum”, Italy University of Trento, Italy National Research Council (CNR-ISTI), Italy University of Rome “Sapienza”, Italy University of Piemonte Orientale, Italy University of Turin, Italy University of Sassari, Italy University of Venezia “Ca’ Foscari”, Italy National Research Council (CNR-ISTC), Italy University of Calabria, Italy University of Ferrara, Italy University of Trento, Italy University of Pisa, Italy University of Brescia, Italy University of Bari, Italy Fondazione Bruno Kessler, Italy University of Turin, Italy University of Huddersfield, UK CETaqua Water Technology Center, Spain Additional Reviewers Carlo Adornetto Damiano Azzolini Daniele Baccega Patrizio Bellan Gloria Beraldo Luigi Bonassi Fabio Buttussi Pierluigi Cassotti Federico Cerutti Riccardo De Benedictis Alessandro De Paola Francesco Fabiano Francesco Faloci Antonino Fiannaca Federico Fogolari Francesca Fracasso Francesca Gasparini Francesco Guarnera Dario Guidotti Eleonora Iotti Andrea Iovine Maria Mannone Marta Marchiori Manerba Claudio Masolo Ivan Mercanti Laura Pandolfo Marco Polignano Andrea Pugnana Alessandro Quarta Chiara Renso Francesco Santini Laura State Carlo Taticchi Alessandro Umbrico Alberto Valese Hybrid Approaches The PSyKE Technology for Trustworthy Artificial Intelligence . . . . . . . . . . . . . . . Roberta Calegari and Federico Sabbatini A Declarative Approach to Contrast Pattern Mining . . . . . . . . . . . . . . . . . . . . . . . . 
Francesca Alessandra Lisi and Gioacchino Sterlicchio Graphs and Networks Approximate Inference in Probabilistic Answer Set Programming for Statistical Probabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Damiano Azzolini, Elena Bellodi, and Fabrizio Riguzzi Decision Trees with a Modal Flavor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Dario Della Monica, Giovanni Pagliarini, Guido Sciavicco, and Ionel Eduard Stan Assisted Process Knowledge Graph Building Using Pre-trained Language Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Patrizio Bellan, Mauro Dragoni, and Chiara Ghidini Neural Networks Reduction via Lumping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Dalila Ressi, Riccardo Romanello, Carla Piazza, and Sabina Rossi Knowledge Enhanced Neural Networks for Relational Domains . . . . . . . . . . . . . . Alessandro Daniele and Luciano Serafini Logic Tensor Networks for Top-N Recommendation . . . . . . . . . . . . . . . . . . . . . . . . 110 Tommaso Carraro, Alessandro Daniele, Fabio Aiolli, and Luciano Serafini Multiagent Systems A Review of the Muddy Children Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127 Yusuf Izmirlioglu, Loc Pham, Tran Cao Son, and Enrico Pontelli Multi-agent Cooperative Argumentation in Arg2P . . . . . . . . . . . . . . . . . . . . . . . . . . 140 Giuseppe Pisano, Roberta Calegari, and Andrea Omicini Ethics by Design for Intelligent and Sustainable Adaptive Systems . . . . . . . . . . . 154 Luca Squadrone, Danilo Croce, and Roberto Basili Automated Planning and Scheduling Verification of Numeric Planning Problems Through Domain Dynamic Consistency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171 Enrico Scala, Thomas L. McCluskey, and Mauro Vallati Comparing Multi-Agent Path Finding Algorithms in a Real Industrial Scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184 Enrico Saccon, Luigi Palopoli, and Marco Roveri Logic-Based Ethical Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198 Umberto Grandi, Emiliano Lorini, Timothy Parker, and Rachid Alami A Hybrid Recommender System with Implicit Feedbacks in Fashion Retail . . . . 212 Ilaria Cestari, Luigi Portinale, and Pier Luigi Riva Incremental Timeline-Based Planning for Efficient Plan Execution and Adaptation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225 Riccardo De Benedictis, Gloria Beraldo, Amedeo Cesta, and Gabriella Cortellessa Knowledge Acquisition and Completion for Long-Term Human-Robot Interactions Using Knowledge Graph Embedding . . . . . . . . . . . . . . . . . . . . . . . . . . 241 Ermanno Bartoli, Francesco Argenziano, Vincenzo Suriani, and Daniele Nardi Construct, Merge, Solve and Adapt Applied to a Bus Driver Scheduling Problem with Complex Break Constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254 Roberto Maria Rosati, Lucas Kletzander, Christian Blum, Nysret Musliu, and Andrea Schaerf Topic Modelling and Frame Identification for Political Arguments . . . . . . . . . . . . 268 Shohreh Haddadan, Elena Cabrio, Axel J. 
Soto, and Serena Villata Substitute Plastic Film with Kraft Paper in Automatic Pallet Wrapping: An AI Pipeline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282 Eleonora Iotti, Alessandro Dal Palù, Gianluca Contesso, and Francesco Bertinelli AI Applications Transformer Based Motion In-Betweening . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299 Pavithra Sridhar, V. Aananth, Madhav Aggarwal, and R. Leela Velusamy A Logic-Based Tool for Dynamic Generation and Classification of Musical Content . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313 Antonio Lieto, Gian Luca Pozzato, Alberto Valese, and Mattia Zito Why Can Neural Networks Recognize Us by Our Finger Movements? . . . . . . . . 327 Elena Mariolina Galdi, Marco Alberti, Alessandro D’Ausilio, and Alice Tomassini Miscellany Labelled Sequent Calculi for Conditional Logics: Conditional Excluded Middle and Conditional Modus Ponens Finally Together . . . . . . . . . . . . . . . . . . . . 345 Nicola Olivetti, Nikola Panic, and Gian Luca Pozzato Deep Learning for ECoG Brain-Computer Interface: End-to-End vs. Hand-Crafted Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358 ´ Maciej Sliwowski, Matthieu Martin, Antoine Souloumiac, Pierre Blanchart, and Tetiana Aksenova Quantum Circuit Compilation for the Graph Coloring Problem . . . . . . . . . . . . . . . 374 Angelo Oddi, Riccardo Rasconi, Marco Baioletti, Vieri Giuliano Santucci, and Hamish Beck Toward a Heterogeneous Multi-robot Framework for Priority-Based Sanitization of Railway Stations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387 Riccardo Caccavale, Mirko Ermini, Eugenio Fedeli, Alberto Finzi, Vincenzo Lippiello, and Fabrizio Tavano Simulated Annealing for the Home Healthcare Routing and Scheduling Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402 Sara Ceschia, Luca Di Gaspero, and Andrea Schaerf MAP Inference in Probabilistic Answer Set Programs . . . . . . . . . . . . . . . . . . . . . . 413 Damiano Azzolini, Elena Bellodi, and Fabrizio Riguzzi Verifying a Stochastic Model for the Spread of a SARS-CoV-2-Like Infection: Opportunities and Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427 Marco Roveri, Franc Ivankovic, Luigi Palopoli, and Daniele Fontanelli Natural Language Processing DelBERTo: A Deep Lightweight Transformer for Sentiment Analysis . . . . . . . . . 443 Luca Molinaro, Rosalia Tatano, Enrico Busto, Attilio Fiandrotti, Valerio Basile, and Viviana Patti A BERT-Based Scoring System for Workplace Safety Courses in Italian . . . . . . . 457 Nicola Arici, Alfonso E. Gerevini, Luca Putelli, Ivan Serina, and Luca Sigalini Embedding Contextual Information in Seq2seq Models for Grounded Semantic Role Labeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 472 Claudiu Daniel Hromei, Lorenzo Cristofori, Danilo Croce, and Roberto Basili Keynote talk Adventures with Datalog: Walking the Thin Line Between Theory and Practice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 489 Georg Gottlob Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . 501 Hybrid Approaches The PSyKE Technology for Trustworthy Artificial Intelligence Roberta Calegari1 and Federico Sabbatini2(B) Alma AI – Alma Mater Research Institute for Human-Centered Artificial Intelligence, Alma Mater Studiorum—Universit` a di Bologna, Bologna, Italy [emailprotected] 2 Department of Pure and Applied Sciences (DiSPeA), University of Urbino, Via S. Chiara, 27, 61029 Urbino, Italy [emailprotected] Abstract. Transparency is one of the “Ethical Principles in the Context of AI Systems” as described in the Ethics Guidelines for Trustworthy Artificial Intelligence (TAI). It is closely linked to four other principles – respect for human autonomy, prevention of harm, traceability and explainability – and involves numerous ways in which opaqueness can have undesirable impacts, such as discrimination, inequality, segregation, marginalisation, and manipulation. The opaqueness of many AI tools and the inability to understand the underpinning black boxes contradicts these principles as well as prevents people from fully trusting them. In this paper we discuss the PSyKE technology, a platform providing general-purpose support to symbolic knowledge extraction from different sorts of black-box predictors via many extraction algorithms. The extracted knowledge results are easily injectable into existing AI assets making them meet the transparency TAI requirement. Keywords: Trustworthy Artificial Intelligence · Transparency Explainability · Symbolic knowledge extraction · PSyKE The innovative potential of Artificial Intelligence (AI) is clear, but AI tools can reflect, amplify, and even create untrustworthy behaviours, beliefs, decisions or results [15]. As we use AI systems to formalise, scale, and accelerate processes, we have the opportunity, as well as the duty, to revise and enhance the existing processes, avoiding perpetuating existing patterns of untrustworthiness, by detecting, diagnosing, and repairing them. To trust these systems, domain experts and stakeholders need to trust the decisions made by them. Europe’s This work has been partially supported by the EU ICT-48 2020 project TAILOR (No. 952215) and by the European Union’s Horizon 2020 research and innovation programme under G.A. no. 101017142 (StairwAI project). c The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 A. Dovier et al. (Eds.): AIxIA 2022, LNAI 13796, pp. 3–16, 2023. https://doi.org/10.1007/978-3-031-27181-6_1 R. Calegari and F. Sabbatini strategy aims to create an AI Ecosystem of Excellence and Trust where ethical and legal principles are pursued in all AI systems. Transparency is one of the “Ethical Principles in the Context of AI Systems” as described in the Ethics Guidelines for Trustworthy Artificial Intelligence (EGTAI) [9] and in the first AI regulation (the “AI Act”) [8]. It is closely linked to four other principles (respect for human autonomy, prevention of harm, traceability and explainability) and involves numerous ways in which opaqueness can have undesirable impacts, such as discrimination, inequality, exclusion, segregation, marginalisation, exploitation, and manipulation. However, the translation of ethical principles and EGTAI into practical requirements are needed to boost high quality AI innovation in Europe. Concrete methods to ensure that AI systems adhere to the transparency requirement can be borrowed from the explainability domain, since providing explanations concurs to achieve transparency. 
Different strategies can be exploited to meet transparency and explainability [11]. For instance, it is possible to obtain explainable data-driven solutions only by using interpretable algorithms [16]—such as decision lists, decision trees and sparse integer linear models, and algorithms based on discrete optimisation. However, this kind of technique often has repercussions on the final predictive performance, since most effective algorithms – like artificial neural networks – are not taken into account. Deriving post-hoc explanations [14] is an alternative strategy aimed at reverse-engineering the black-box (BB) inner behaviour to make it explicit. This is a way of combining the performance of prediction-effective (even if opaque) machine learning models with human-interpretable output predictions. Symbolic knowledge extraction (SKE) represents one of the most promising techniques to derive post-hoc explanations from sub-symbolic BB models and interpret the notion of explainability under the transparency perspective, i.e. proposing a transparent model adhering to the not transparent predictor. Its main idea is to build a symbolic – and thus interpretable – model that mimics the behaviour of the original BB, intended as the capability to provide outputs that are as close as possible w.r.t. those of the underlying BB queried on the same inputs. Symbols may consist of comprehensible knowledge—e.g., lists or trees of rules that can be exploited to either derive predictions or to better understand the BB behaviour and, as a further step, as knowledge on which to perform any kind of logical reasoning. Currently, SKE techniques have been already applied in a wide variety of areas, ranging from medical diagnosis [10] to finance [1] and astrophysics [22]. Despite the wide adoption of SKE and the existence of different techniques for extracting symbolic knowledge out of a BB, a unified and generalpurpose software technology supporting such methods and their comparison is currently lacking. In other words, the burden of implementing SKE algorithms, or selecting the best one from the state of the art, is currently on AI stakeholders alone, who are likely to realise custom solutions for a specific application need. Other than slowing down the adoption of SKE as an effective method for reaching transparency, such a lack of viable technologies is somewhat anachronistic in the data-driven AI era, where a plethora of libraries and frameworks The PSyKE Technology for TAI are flourishing, targeting all major programming paradigms and platforms, and making state-of-the-art machine learning (ML) algorithms easily accessible to the general public—cf. SciKit-Learn1 for Python. Accordingly, in this paper we present a general-purpose Platform for Symbolic Knowledge Extraction – PSyKE – as a way to practicalise the TAI requirement – transparency in particular – from high-level principles to concrete methods. Moreover, one of the PSyKE goals is filling the gap between the current state of the art of SKE and the available technology as well as providing a concrete toolkit for testing, evaluating and reaching transparency in AI applications. It provides a controlled experimentation environment for transparency via SKE methods enabling the creation of different simulations/experiments for the specific application at hand. The framework comes as a toolkit in which experiments on transparency can be built and run, comparing different solutions, and selecting the best option. 
More precisely, PSyKE is conceived as an open library where different sorts of knowledge extraction algorithms can be realised, exploited, or compared. PSyKE supports rule extraction from both classifiers and regressors, and makes the extraction procedure as transparent as possible w.r.t. the underlying BB, depending on the particular extraction procedure at hand. The extraction of first-order logic clauses is also supported, with the twofold advantage of providing human- and machine-interpretable rules as output. These can then be used as either an explanation for the original BB or as a starting point for further symbolic computations and reasonings. The PSyKE framework PSyKE2 [18,19] is a platform providing general-purpose support to symbolic knowledge extraction from different sorts of black-box predictors via many extraction algorithms. 2.1 Functionalities and Main Components PSyKE comes as a software library providing general-purpose support to the extraction of logic rules out of BB predictors by letting users choose the most adequate SKE method for the task and data at hand. A unified API covering virtually all extraction algorithms targeting supervised learning tasks is exposed by the framework and experiments can also be run via a GUI. Currently, PSyKE grants access to state-of-the-art SKE algorithms providing the implementations of several interoperable, interchangeable, and comparable extraction SKE methods [2,6,7,13,17,20]. PSyKE is conceived as an open-ended project, exploitable to design and implement new extraction procedures behind a unique API. Essentially, PSyKE is designed around the notion of extractor, whose overall design is depicted in Fig. 1. Within the scope of PSyKE, an extractor is any algorithm accepting a machine learning predictor as input (classifier or regressor), and producing a theory of logic rules as output. 1 2 https://scikit-learn.org/stable. https://apice.unibo.it/xwiki/bin/view/PSyKE/. R. Calegari and F. Sabbatini Fig. 1. PSyKE design PSyKE extractors require additional information to complete the extraction task. Such information consists of the data set used to train the predictor and its schema. Data sets are required to allow the extraction procedure to inspect the BB behaviour – and therefore build the corresponding output rules – whereas schemas are required to allow (i) the extraction procedure to take decisions based on feature types, and (ii) the extracted knowledge to be more interpretable by referring to the feature names. Accordingly, extractors expect also the data set and its schema metadata as input. Figure 1 shows also the discretiser and scaler components. The former aims at providing some facilities for discretising (binarising) data sets including continuous (categorical) data. This is a procedure often needed for data sets involving these kinds of attributes to be given as input to extractors only accepting discrete or binary input features. 2.2 Architecture and API As depicted in Fig. 2, a key role in the design of PSyKE is played by the Extractor interface, defining the general contract of any knowledge-extraction procedure. Each Extractor encapsulates a single machine learning Predictor and a particular Discretisation strategy. Given a set of inputs, an extractor is capable of extracting a Theory of logic Rules out of a DataFrame, containing the examples the Predictor has been trained upon. PSyKE assumes underlying libraries to be available on the runtime adopted for implementation, from which AI facilities can be inherited. 
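As an illustration of the kind of preprocessing such a discretiser performs, a continuous feature can be binned into a small number of intervals before being handed to an extractor that only accepts discrete inputs. The sketch below is generic pandas/scikit-learn code (the libraries the paper says PSyKE builds on), not PSyKE's own discretiser component, and the feature values are made up:

```python
# Generic illustration of discretising a continuous feature (not PSyKE's own API).
import pandas as pd
from sklearn.preprocessing import KBinsDiscretizer

df = pd.DataFrame({"petal_length": [1.4, 4.7, 5.9, 1.3, 4.5, 6.1]})  # illustrative values
disc = KBinsDiscretizer(n_bins=3, encode="ordinal", strategy="uniform")
df["petal_length_bin"] = disc.fit_transform(df[["petal_length"]]).ravel().astype(int)
print(df)  # each continuous value mapped to one of three discrete bins
```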
These include: a machine learning library, exposing ad-hoc types aimed at representing data sets, data schemas, or predictors, and a symbolic AI library, exposing ad-hoc types for representing and manipulating logic theories, clauses, and rules. PSyKE inherits high-level abstractions from these libraries. These include the following components: The PSyKE Technology for TAI Fig. 2. PSyKE’s Extractor interface DataFrame — a container of tabular data, where rows commonly denote instances, and columns denote their features, while bulk operations are available to manipulate the table as a whole, as well as any row/column of its; Predictor — a computational entity which can be trained (a.k.a. fitted) against a DataFrame and used to draw predictions of type R; Classifier — a particular case of predictor where R represents a type having a finite amount of admissible values; Regressor — a particular case of predictor where R represents a type having a potentially infinite (possibly continuous) amount of admissible values; Rule — a semantic, intelligible representation of the function mapping Predictor’s inputs into the corresponding outputs, for a portion of the input space; Theory — an ordered collection of rules. For example, PSyKE borrows ML-related abstractions – such as DataFrame, Predictor, or Classifier – from either Pandas or Scikit-Learn Python libraries. Similarly, it borrows high-level symbolic-AI-related abstractions – such as Theory or Rule – from 2P-Kt3 [5]. PSyKE constructs its notion of Extractor upon these inherited concepts— thus designing an Extractor as any method capable of extracting logic Rules out of some trained Predictor. PSyKE extractors are bound to the particular underpinning black-box Predictor, as well as to the Discretisation strategy exploited for the input space. Extractors also expose a method for extracting an explainable Theory from the Predictor – namely, extract – and a method to draw predictions by using the extracted rules—namely, predict. Any attempt to use the extracted rules to draw explainable predictions triggers extraction first—i.e., the prediction procedure implies extraction. Both extraction and prediction rely on a DataFrame that must be provided by the user upon invocation. Extractors, in the general case, may also be used to perform rule induction from data, without any intermediate predictor. 3 R. Calegari and F. Sabbatini It is worth noting that Predictors are parametric types. The meta-parameter R represents the type of predictions the predictor may produce. The rules possibly extracted by such predictors – as well as the predictions extracted – may differ significantly depending on the particular data and on the selected predictors. For instance, when rules are extracted from mono-dimensional regressors, R may be the type of floating point numbers, whereas, for multi-class classifiers, R may consist of the set of types (like integer, string, ...). Depending on the nature of R, the extracted rules possibly differ significantly. However, the proposed API makes it possible to switch between different extraction algorithms and predictors with no changes in the PSyKE architecture. Output rules produced by PSyKE’s extractors may be more tailored on human-interpretability or agent-/machine-interoperability [21]. In the former case, a Prolog theory of logic clauses is provided as output. In the latter case, the knowledge is extracted as an OWL ontology containing SWRL rules. 
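To give a feel for the contract described above, here is a minimal, hypothetical sketch of an Extractor-style interface in Python. It mirrors the roles discussed in the text (a wrapped Predictor, a DataFrame of training examples, an extract method returning a Theory of Rules, and a predict method that triggers extraction first), but it is not the actual PSyKE code and all names are illustrative:

```python
# Hypothetical sketch of an Extractor-style contract (illustrative, not PSyKE's real code).
from abc import ABC, abstractmethod
from typing import Any, List
import pandas as pd

Rule = str            # stand-in for a single logic rule
Theory = List[Rule]   # stand-in for an ordered collection of rules

class Extractor(ABC):
    """Wraps a trained black-box predictor and extracts a theory of logic rules."""

    def __init__(self, predictor: Any):
        self.predictor = predictor      # any fitted classifier or regressor
        self.theory: Theory = []

    @abstractmethod
    def extract(self, data: pd.DataFrame) -> Theory:
        """Inspect the predictor's behaviour on `data` and return logic rules."""

    def predict(self, data: pd.DataFrame):
        """Draw predictions from the extracted rules, extracting first if needed."""
        if not self.theory:
            self.theory = self.extract(data)
        ...  # rule application omitted in this sketch
```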
In this section some examples showing PSyKE working in different scenarios are reported—i.e. the Iris data set4 as a classification task and the Combined Cycle Power Plant5 (CCPP) data set as a regression case study. 3.1 Classification: The Iris Data Set In the following we report the outcome of PSyKE when applying different SKE techniques to the Iris data set. All the results are resumed in Fig. 3 and Table 1. Column “Predictor” represents the ML step of the process. Column “Extractor” represents the output of PSyKE. Different extraction procedures – namely, Iter, GridEx, and Cart – are applied to some selected BB classifiers. These predictors are a k-nearest neighbor with k = 5 (5-NN), a decision tree (DT) and a multilayer perceptron (MLP). A numerical assessment of the aforementioned predictors and extractors is reported in Table 1 in terms of number of extracted rules and predictive performance w.r.t. data and BB predictions. The predictive performance is expressed through both classification accuracy and F1 score metrics. Values are averaged upon 25 executions, each one with different random train/test splits, but the same test set percentage and same parameters for predictors and extractors. Table 1 also reports the underpinning BB predictor accuracy and the fidelity and accuracy of the extraction procedure. It is worth noting that different SKE techniques can be easily compared and the best option for the scenario at hand can be selected thanks to the controlled experimentation environment provided by PSyKE. https://archive.ics.uci.edu/ml/datasets/iris. https://archive.ics.uci.edu/ml/datasets/combined+cycle+power+plant. The PSyKE Technology for TAI Fig. 3. Comparison between Iris data set input space partitionings performed by the algorithms implemented in PSyKE. Only the two most relevant features are reported— i.e., petal width and length. Regression: The Combined Cycle Power Plant Data Set In this example, PSyKE is exploited to extract rules out of different BB regressors trained upon the CCPP data set. The data set contains 9568 instances, each one composed of 4 real-valued input attributes. Diverse regressors are trained on the CCPP data set: a 3-NN, a DT and a linear regressor (LR). Same as the previous example, PSyKE is used to extract logic rules out of the selected BB models exploring some of the SKE methods it supports—namely, Iter, GridEx, GridREx and Cart. Metrics for measuring the fidelity of the extractor w.r.t. the underlying BB predictions as well as the predictive accuracy w.r.t. the data are the mean absolute error (MAE) and R2 score. The same metrics are used to assess the predictive performance of the BBs R. Calegari and F. Sabbatini Table 1. Comparison between predictive performance and fidelity measurements applied to the Iris data set. The best extractors are highlighted. Predictor Type Accuracy F1 Algorithm Rules Accuracy F1 score score (data) (BB) (data) (BB) 5-NN 0.96 Iter GridEx Cart 0.91 0.94 0.92 0.93 0.91 0.96 0.94 0.93 0.92 0.93 0.96 0.93 Iter GridEx Cart 0.96 0.94 0.89 0.94 0.96 0.96 0.94 0.93 0.89 0.94 0.96 0.93 MLP 0.99 Iter GridEx Cart 0.80 0.94 0.95 0.79 0.78 0.96 0.94 0.93 0.95 0.76 0.96 0.93 and as for the Iris case study the extracted knowledge readability is expressed as number of rules. The results of PSyKE applied to the CCPP data set are summarised in Fig. 4 and Table 2. Each one of the extraction procedures suitable for regression tasks is applied to all the aforementioned BB regressors. 
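Fidelity and accuracy figures of the kind reported in Tables 1 and 2 can be computed with standard tooling: the same extractor predictions are scored once against the ground-truth labels (performance w.r.t. the data) and once against the black-box predictions (fidelity w.r.t. the BB). The sketch below uses generic scikit-learn metrics and illustrative argument names; it is not PSyKE-specific code:

```python
# Illustrative fidelity/accuracy computation (generic scikit-learn, not PSyKE-specific).
from sklearn.metrics import accuracy_score, f1_score, mean_absolute_error, r2_score

def classification_scores(y_true, y_bb, y_extractor):
    return {
        "accuracy_vs_data": accuracy_score(y_true, y_extractor),
        "accuracy_vs_bb":   accuracy_score(y_bb, y_extractor),   # fidelity to the black box
        "f1_vs_data":       f1_score(y_true, y_extractor, average="macro"),
    }

def regression_scores(y_true, y_bb, y_extractor):
    return {
        "mae_vs_data": mean_absolute_error(y_true, y_extractor),
        "mae_vs_bb":   mean_absolute_error(y_bb, y_extractor),   # fidelity to the black box
        "r2_vs_data":  r2_score(y_true, y_extractor),
    }
```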
Figure 4 shows that all the extractors are able to capture the behaviour of the output values w.r.t. the input variables. Table 2 reports the predictive performance of predictors and extractors. Values are averaged upon 25 executions, each one with different train/test splits, but with the same parameters for both predictors and extractors. Results show that in the case at hand all predictors have comparable performance in terms of MAE and R2 score. Conversely, it is possible to notice that Cart, GridEx and GridREx always appear more explainable than Iter in terms of the number of extracted rules. From the Table it may be easily noticed also that GridEx and Cart generally present analogous performance. This fact depends on the nature of the corresponding output rules. Indeed, they both produce rules having constant output values, introducing an undesired discretisation of the predicted variable. Both of them are able to outperform Iter also in terms of predictive performance (smaller MAE and larger R2 score). On the other hand, GridREx outperforms all the other algorithms, achieving higher fidelity and readability. This depends on the regressive nature of its outputs, enabling the creation of more concise output rules performing more accurate predictions. Indeed, GridREx rules have as postconditions linear combinations of the input variables. The nature of the different predictors and extractors used in this case study may be easily noticed in Fig. 4. The boundaries identified by the 3-NN clearly follow a proximity pattern. Conversely, the DT performs variable slicing along The PSyKE Technology for TAI Fig. 4. Comparison between CCPP data set output predictions provided by the algorithms implemented in PSyKE. Only the two most relevant features are reported—i.e., ambient temperature and exhaust each input dimension and the LR produces a gradual output value decrement for growing input values. As for the extractors, for Cart the same considerations made for the DT hold. The hypercubic nature of Iter and GridEx is detectable by observing the rectangular boundaries provided by them. Finally, GridREx provides local linear regressive laws for hypercubic regions, merging the advantages of both DTs and LRs. R. Calegari and F. Sabbatini Table 2. Comparison between predictive performance and fidelity measurements applied to the CCPP data set. The number of extracted rules is also reported. The best extractors are highlighted. Type MAE R2 Algorithm Rules MAE R2 score score (data) (BB) (data) (BB) 3-NN 3.09 Iter 22 GridEx 5 GridREx 5 Cart 6 4.19 5.02 3.25 4.45 3.78 4.63 2.52 3.90 0.94 0.87 0.94 0.89 0.96 0.88 0.96 0.91 Iter 14 GridEx 5 GridREx 5 Cart 6 4.27 5.02 3.24 4.46 4.32 5.10 3.38 4.50 0.93 0.87 0.94 0.89 0.92 0.86 0.93 0.88 Iter 43 GridEx 5 GridREx 1 Cart 6 4.42 5.15 3.59 4.97 2.74 3.80 0.00 3.49 0.93 0.86 0.93 0.87 1.00 0.92 1.00 0.93 Once again it is worth noting how PSyKE technology enables different SKE techniques to be compared. Such a comparison provide also a measure in terms of explainability and transparency that can be achieved out of the BB predictor. 3.3 PSyKE GUI Figure 5 shows an example of PSyKE GUI screenshot in order to highlight how the toolkit also enables achieving fast and easy interactions with users. The GUI is simple and user-friendly, divided into 4 panels. The top panel is dedicated to the task selection (classification vs. regression) and to data set selection/preprocessing. Users can choose between several pre-defined data sets, as well as load a custom file. 
Furthermore, they can choose to discretise/scale the features and, on the right, it is possible to select among all the available features (i) the one to be used as output; (ii) those to be used as inputs; and (iii) those to be neglected. On the same panel it is possible to select two input features to be plotted together with the output feature. Plots appear in the right-most central panel of the GUI. The first one represents the data set instances, the second depicts the decision boundaries of the trained BB predictor and the third does the same for the selected extractor. Plots are shown after the proper button pressing, but each plot depends on the previous operations performed by the users. The predictor plot requires a BB predictor to be previously chosen and trained. This can be done by acting on the left-most central panel of the interface. Several models are available, each one with corresponding text boxes The PSyKE Technology for TAI Fig. 5. PSyKE GUI to allow users to customise the required hyper-parameters. Users can also choose the train-test splitting percentage. Each parameter has a default value, so user inputs are optional. Analogously, the bottom-most panel is dedicated to the selection, training and tuning of knowledge extractors. Training an extractor enables the visualisation of the third plot. The knowledge extracted with PSyKE extractors is displayed below the plots, in Prolog syntax. Finally, information about the chosen data set (e.g., number of features, classes and instances), predictor (e.g., parameters and predictive performance) and extractor (e.g., parameters, predictive performance and fidelity measurements) are shown next to the corresponding selection commands (after their selection). The example reported in Fig. 5 shows the application of PSyKE to the Iris data set. The data set has been loaded without discretisation and feature pruning, then a 5-NN has been trained on 80% of the data set. The Cart extractor has finally been chosen, with maximum depth and maximum leaf amount equal to 3. Only input features about petal width and length have been selected to be plotted. In conclusion, the framework provides the possibility to build different experiments in a controlled environment, enabling easy exploitation of the technology and offering the possibility to compare the results in a simple way. The PSyKE technology may impact many research areas. It provides a wellgrounded technological basis and a software engineering practice for implementing/experimenting with the transparency and explainability dimensions in AI R. Calegari and F. Sabbatini applications. It provides an extensible framework for collecting the SKE methods and approaches proposed in the literature, creating a controlled environment for testing, evaluating and comparing transparency. PSyKE has an important role from the point of view of software engineering, providing a methodology that can be exploited for grounding all the TAI dimensions—i.e., the design and the implementation of a controlled experimentation environment that can act also as a sandbox for simulating the trustworthiness of an AI system. Accordingly, the framework provides a concrete example of the feasibility of building a practical toolkit for AI stakeholders to test the dimensions of TAI. Moreover, PSyKE has a role to play in the field of XAI [12]. 
Integrating symbolic and sub-symbolic AI – i.e., using them in synergy, as an ensemble – is a strategical research direction [4], and PSyKE offers a sound technological foundation for this purpose. Finally, the distributed systems community has the need for interoperable and general-purpose logic-based technologies that can be easy injectable into already existing systems [3]. There, PSyKE provides a technological layer easy injectable in distributed systems supporting agents’ reasoning via the production of logical knowledge that can be exploited by agents. Given all the potential of the described framework, there is room for several future research directions. PSyKE already enables the investigation of relevant research questions involving symbolic manipulation or automated reasoning, thanks to its modularity and interoperability. Under such a perspective, PSyKE enables exploring how to: (i) blend SKE with other AI techniques, and (ii) exploit SKE to build flexible intelligent systems. Along these lines, future research directions will take into account the integration in the framework of a larger suite of methods for dealing with the most variety of datasets and predictors. Some preliminary experiments showed that the SKE algorithms can be exploited also for rule induction starting from data. This line is particularly interesting for all the cases in which a BB predictor is not available. Moreover, new SKE techniques are under development exploiting the combination of SKE with explainable clustering techniques increasing both performance and fidelity. Finally, the framework is a preliminary example of how TAI dimensions can be tested and evaluated, and an interesting research line is to extend the environment in order to achieve a certification of the level of transparency – or more in general trustworthiness – for given AI applications. The challenge here is to find a way for defining effective metrics for the certification of TAI dimensions. In this paper we discuss the PSyKE technology, a platform providing generalpurpose support to symbolic knowledge extraction from different sorts of blackbox predictors via many extraction algorithms, to be easily injectable into existing AI assets making them meet the transparency TAI requirement. The framework provides a controlled experimentation environment in which transparency and explainability can be tested, assessed and compared. The PSyKE Technology for TAI Even if still in a preliminary stage, it provides a software engineering practice for grounding all the TAI dimensions, translating them from high-level principles to practical requirements. References 1. Baesens, B., Setiono, R., De Lille, V., Viaene, S., Vanthienen, J.: Building creditrisk evaluation expert systems using neural network rule extraction and decision tables. In: Storey, V.C., Sarkar, S., DeGross, J.I. (eds.) ICIS 2001 Proceedings, pp. 159–168. Association for Information Systems (2001). http://aisel.aisnet.org/ icis2001/20 2. Breiman, L., Friedman, J., Stone, C.J., Olshen, R.A.: Classification and Regression Trees. CRC Press, Boca Raton (1984) 3. Calegari, R., Ciatto, G., Mascardi, V., Omicini, A.: Logic-based technologies for multi-agent systems: a systematic literature review. Auton. Agents MultiAgent Syst. 35(1), 1:1–1:67 (2021). https://doi.org/10.1007/s10458-020-09478-3, http://link.springer.com/10.1007/s10458-020-09478-3. collection Current Trends in Research on Software Agents and Agent-Based Software Development 4. 
Calegari, R., Ciatto, G., Omicini, A.: On the integration of symbolic and subsymbolic techniques for XAI: a survey. Intell. Artif. 14(1), 7–32 (2020). https:// doi.org/10.3233/IA-190036 5. Ciatto, G., Calegari, R., Omicini, A.: 2P-Kt: a logic-based ecosystem for symbolic AI. SoftwareX 16(100817), 1–7 (2021). https://doi.org/ 10.1016/j.softx.2021. 100817, https://www.sciencedirect.com/science/article/pii/S2352711021001126 6. Craven, M.W., Shavlik, J.W.: Using sampling and queries to extract rules from trained neural networks. In: Machine Learning Proceedings 1994, pp. 37–45. Elsevier (1994). https://doi.org/10.1016/B978-1-55860-335-6.50013-1 7. Craven, M.W., Shavlik, J.W.: Extracting tree-structured representations of trained networks. In: Touretzky, D.S., Mozer, M.C., Hasselmo, M.E. (eds.) Advances in Neural Information Processing Systems 8. Proceedings of the 1995 Conference, pp. 24–30. The MIT Press, June 1996. http://papers.nips.cc/paper/1152-extractingtree-structured-representations-of-trained-networks.pdf 8. European Commission: AI Act - Proposal for a regulation of the european parliament and the council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain union legislative acts (2021). https://eur-lex.europa.eu/ legal-content/EN/TXT/?uri=CELEX:52021PC0206 9. European Commission, Directorate-General for Communications Networks, C., Technology: Ethics guidelines for trustworthy AI. Publications Office (2019). https://doi.org/10.2759/346720 10. Franco, L., Subirats, J.L., Molina, I., Alba, E., Jerez, J.M.: Early breast cancer prognosis prediction and rule extraction using a new constructive neural network algorithm. In: Sandoval, F., Prieto, A., Cabestany, J., Gra˜ na, M. (eds.) IWANN 2007. LNCS, vol. 4507, pp. 1004–1011. Springer, Heidelberg (2007). https://doi. org/10.1007/978-3-540-73007-1 121 11. Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., Pedreschi, D.: A survey of methods for explaining black box models. ACM Comput. Surv. 51(5), 1–42 (2018). https://doi.org/10.1145/ 3236009 12. Gunning, D., Aha, D.: DARPA’s explainable artificial intelligence (XAI) program. AI Mag. 40(2), 44–58 (2019) R. Calegari and F. Sabbatini 13. Huysmans, J., Baesens, B., Vanthienen, J.: ITER: an algorithm for predictive regression rule extraction. In: Tjoa, A.M., Trujillo, J. (eds.) DaWaK 2006. LNCS, vol. 4081, pp. 270–279. Springer, Heidelberg (2006). https://doi.org/10.1007/ 11823728 26 14. Kenny, E.M., Ford, C., Quinn, M., Keane, M.T.: Explaining black-box classifiers using post-hoc explanations-by-example: the effect of explanations and error-rates in XAI user studies. Artif. Intell. 294, 103459 (2021). https://doi.org/10.1016/j. artint.2021.103459 15. M¨ okander, J., Morley, J., Taddeo, M., Floridi, L.: Ethics-based auditing of automated decision-making systems: nature, scope, and limitations. Sci. Eng. Ethics 27(4), 1–30 (2021) 16. Rudin, C.: Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell. 1(5), 206–215 (2019). https://doi.org/10.1038/s42256-019-0048-x 17. Sabbatini, F., Calegari, R.: Symbolic knowledge extraction from opaque machine learning predictors: GridREx & PEDRO. In: Kern-Isberner, G., Lakemeyer, G., Meyer, T. (eds.) Proceedings of the 19th International Conference on Principles of Knowledge Representation and Reasoning, July 31–5 August 2022, KR 2022, Haifa, Israel (2022). https://proceedings.kr.org/2022/57/ 18. 
Sabbatini, F., Ciatto, G., Calegari, R., Omicini, A.: On the design of PSyKE: a platform for symbolic knowledge extraction. In: Calegari, R., Ciatto, G., Denti, E., Omicini, A., Sartor, G. (eds.) WOA 2021–22nd Workshop From Objects to Agents. CEUR Workshop Proceedings, vol. 2963, pp. 29–48. Sun SITE Central Europe, RWTH Aachen University (Oct 2021), 22nd Workshop From Objects to Agents (WOA 2021), Bologna, Italy, 1–3 September 2021. Proceedings (2021) 19. Sabbatini, F., Ciatto, G., Calegari, R., Omicini, A.: Symbolic knowledge extraction from opaque ML predictors in PSyKE: Platform design & experiments. Intell. Artif. 16(1), 27–48 (2022). https:// doi.org/10.3233/IA-210120 20. Sabbatini, F., Ciatto, G., Omicini, A.: GridEx: an algorithm for knowledge extraction from black-box regressors. In: Calvaresi, D., Najjar, A., Winikoff, M., Fr¨ amling, K. (eds.) EXTRAAMAS 2021. LNCS (LNAI), vol. 12688, pp. 18–38. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-82017-6 2 21. Sabbatini, F., Ciatto, G., Omicini, A.: Semantic web-based interoperability for intelligent agents with PSyKE. In: Calvaresi, D., Najjar, A., Winikoff, M., Fr¨ amling, K. (eds.) Proceedings of the 4th International Workshop on Explainable and Transparent AI and Multi-Agent Systems. EXTRAAMAS 2022. LNCS, vol. 13283, chap. 8, pp. 124–142. Springer, Cham (2022). https://doi.org/10.1007/ 978-3-031-15565-9 8 22. Sabbatini, F., Grimani, C.: Symbolic knowledge extraction from opaque predictors applied to cosmic-ray data gathered with LISA pathfinder. Aeronaut. Aerosp. Open Access J. 6(3), 90–95 (2022). https://doi.org/10.15406/aaoaj.2022.06.00145 A Declarative Approach to Contrast Pattern Mining Francesca Alessandra Lisi1(B) 1 and Gioacchino Sterlicchio2 Dipartimento di Informatica and CILA, University of Bari “Aldo Moro”, Bari, Italy [emailprotected] 2 Department of Mechanics, Mathematics and Management, Polytechnic University of Bari, Bari, Italy Abstract. This paper proposes a declarative approach to the problem of contrast pattern mining. The approach is based on encodings of the data and the problem with Answer Set Programming (ASP), and evaluated in a novel AI application in the field of Digital Forensics. Keywords: Contrast Pattern Mining Digital Forensics · Answer Set Programming · Pattern mining [12] is a class of data mining tasks that consist of extracting interesting structured patterns from a dataset. These tasks encompass itemset mining, sequence mining and graph mining. The interestingness measure of a pattern is, in most of the algorithms, the number of its occurrences in the dataset. Given a threshold k, interesting patterns are those that occur at least in k data instances. In this case, the task is known as frequent pattern mining for which many algorithms have been proposed. An interesting extension of the frequent pattern mining task is the one that aims at the discovery of so-called contrast patterns. Whereas frequent patterns are statistically significant regularities in a set of transactions, contrast patterns denote statistically significant differences between two or more disjoint sets of transactions [6]. Recently there has been an increasing interest in declarative approaches to pattern mining, thus giving rise to a novel stream of research known under the name of Declarative Pattern Mining (DPM). So far, DPM addressed tasks such as frequent itemset mining [10,13], and sequence mining [7,17]. 
Different declarative frameworks have been explored: SAT [13], Constraint Programming [5,10], and Answer Set Programming (ASP) [7,11]. In this paper we propose a declarative approach for contrast pattern mining which leverages the expressive and inferential power of ASP. To the best of our knowledge, this interesting class of pattern mining problems has not been addressed yet in DPM. c The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 A. Dovier et al. (Eds.): AIxIA 2022, LNAI 13796, pp. 17–30, 2023. https://doi.org/10.1007/978-3-031-27181-6_2 F. A. Lisi and G. Sterlicchio Declarative approaches are generally desirable in application domains where the requirements of transparency, verifiability and explainability of the AI techniques employed are of paramount importance. One of these cases is the field of Digital Forensics (DF), a branch of criminalistics that deals with the identification, acquisition, preservation, analysis and presentation of the information content of computer systems, or in general of digital devices, by means of specialized software, and according to specific regulations. A declarative approach to DF was first explored by Costantini et al. [2,3], and subsequently adopted by the COST Action “Digital forensics: evidence analysis via intelligent systems and practices” (DigForASP)1 . The aim of DigForASP is to promote formal and verifiable AI methods and techniques in the analysis of evidence [4]. In this paper, we report the preliminary results obtained by applying the proposed ASP-encoded contrast pattern mining algorithm to a dataset of phone records made available within DigForASP. The paper is organized as follows. In Sect. 2 we provide the necessary preliminaries on contrast pattern mining and ASP. In Sect. 3 we introduce the proposed ASP encoding for contrast pattern mining. In Sect. 4 we describe the application of this encoding to the analysis of phone records, and report the results of some experiments. In Sect. 5 we conclude with final remarks. Contrast Pattern Mining in Brief We assume the set I = {1, ..., m} of m items, and the set T = {1, ..., n} of n transactions. Intuitively, a transaction t ∈ T is a subset of items from I, which is typically associated with a transaction identifier (TID). A transactional database D ∈ {0, 1}n×m can be seen as a binary matrix, in which each row Dt represent the transaction t consisting of the items {i ∈ I|Dt,i = 1}, where Dt,i denote the value on the i -th column and t-th row of D. The subsets of I are called itemsets or patterns. In pattern mining we are interested in finding patterns that satisfy constraints relative to a set of transactions. In particular, given the pattern P ⊆ I, and a set of transactions T, the subset of T covered by P is cover(P, T ) = {t ∈ T |∀i ∈ P : Dt,i = 1}. Then the absolute support of P in T is defined as: supp(P, T ) = |cover(P, T )| (1) and quantifies the number of transactions in T containing the pattern P. Frequent pattern mining algorithms are used to discover statistically significant regularities in a set of transactions whereas the contrast pattern mining task is about detecting statistically significant differences (contrast) between two or more disjoint sets of transactions [6]. To this aim, we assume also a finite set L of class labels which are used by the function L(t) ∈ L to label each transaction t. In our setting, the label α ∈ L partitions T in two samples: 1 A Declarative Approach to Contrast Pattern Mining 1. 
T(α) = {t ∈ T | L(t) = α}, i.e., the transactions labeled with α; 2. its complement T̄(α) = T \ T(α). The contrast of a pattern P with respect to α is quantified by the so-called absolute support difference, which is defined as:

diff(P, α) = supp(P, T(α)) − supp(P, T̄(α))    (2)

The problem of contrast pattern mining concerns the enumeration of all frequent patterns whose absolute support difference exceeds a user-defined minimum threshold minDiff. More specifically, given:
– the transaction database D over the set of transactions T;
– the maximum pattern length threshold maxLength;
– the minimum absolute support threshold minSupp ≥ 0;
– the minimum absolute support difference threshold minDiff ≥ 0;
– the label α ∈ L;
the problem of contrast pattern mining is to find all patterns (P, diff(P, α)) such that: 1. |P| ≤ maxLength; 2. supp(P, T(α)) ≥ minSupp; 3. diff(P, α) ≥ minDiff.
To understand the meaning of contrast patterns, it is important to comment further on formula (2). Given a class α, a pattern P is a contrast pattern for that class if its support differs from the support of the same pattern for the complementary class. If the difference in support is equal to 0, it means that P is present in the same way in the two classes; therefore this pattern does not allow one to find the differences between the classes. Conversely, the more the difference in support moves away from 0, the more P is to be understood as a pattern that allows one to distinguish the two classes under comparison. Therefore, P is a representative pattern for the class α but not for the complementary class.
2.2 Answer Set Programming in a Nutshell
In the following we give a brief overview of the syntax and semantics of disjunctive logic programs in ASP. The reader can refer to, e.g., [1] for a more extensive introduction to ASP. Let U be a fixed countable set of (domain) elements, also called constants, upon which a total order ≺ is defined. An atom α is an expression p(t1, . . . , tn), where p is a predicate of arity n ≥ 0 and each ti is either a variable or an element from U (i.e., the resulting language is function-free). An atom is ground if it is free of variables. We denote the set of all ground atoms over U by BU. A (disjunctive) rule r is of the form

a1 ∨ . . . ∨ an ← b1, . . . , bk, not bk+1, . . . , not bm

with n ≥ 0, m ≥ k ≥ 0, n + m > 0, where a1, . . . , an, b1, . . . , bm are atoms or count expressions of the form #count{l : l1, . . . , li} ⊙ u, where l is an atom and each lj is a literal (i.e., an atom which can be negated or not), 1 ≤ j ≤ i, ⊙ ∈ {≤, <, =, >, ≥}, and u ∈ N. Moreover, "not" denotes default negation. The head of r is the set head(r) = {a1, . . . , an} and the body of r is body(r) = {b1, . . . , bk, not bk+1, . . . , not bm}. Furthermore, we distinguish between body+(r) = {b1, . . . , bk} and body−(r) = {bk+1, . . . , bm}. A rule r is normal if n ≤ 1 and a constraint if n = 0. A rule r is safe if each variable in r occurs in body+(r). A rule r is ground if no variable occurs in r. A fact is a ground rule with body(r) = ∅ and |head(r)| = 1. An (input) database is a set of facts. A program is a finite set of rules. For a program Π and an input database D, we often write Π(D) instead of D ∪ Π. If each rule in a program is normal (resp. ground), we call the program normal (resp. ground). Given a program Π, let UΠ be the set of all constants appearing in Π.
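Before moving to the ASP encoding, the definitions above can be made operational with a minimal Python sketch. The toy labelled transaction set is invented for illustration only; cover and supp follow Eq. (1), diff follows formula (2), and is_contrast checks the three constraints of the mining problem.

# Toy labelled transactions (invented data); label[t] plays the role of L(t)
D     = {1: {"a", "b"}, 2: {"a"}, 3: {"a", "b"}, 4: {"b"}}
label = {1: "pos", 2: "pos", 3: "neg", 4: "neg"}

def cover(P, tids):
    # transactions among tids containing every item of the pattern P
    return [t for t in tids if P <= D[t]]

def supp(P, tids):
    # absolute support, Eq. (1)
    return len(cover(P, tids))

def diff(P, alpha):
    # absolute support difference, formula (2)
    T_alpha = [t for t in D if label[t] == alpha]
    T_compl = [t for t in D if label[t] != alpha]
    return supp(P, T_alpha) - supp(P, T_compl)

def is_contrast(P, alpha, maxLength=3, minSupp=1, minDiff=1):
    T_alpha = [t for t in D if label[t] == alpha]
    return (len(P) <= maxLength and
            supp(P, T_alpha) >= minSupp and
            diff(P, alpha) >= minDiff)

print(diff({"a"}, "pos"), is_contrast({"a"}, "pos"))   # prints: 1 True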
Gr(Π) is the set of rules rσ obtained by applying, to each rule r ∈ Π, all possible substitutions σ from the variables in r to elements of UΠ. For count expressions, {l : l1, . . . , ln} denotes the set of all ground instantiations of l, governed through l1, . . . , ln. An interpretation I ⊆ BU satisfies a ground rule r iff head(r) ∩ I ≠ ∅ whenever body+(r) ⊆ I, body−(r) ∩ I = ∅, and, for each contained count expression, N ⊙ u holds, where N = |{l | l1, . . . , ln}|, u ∈ N and ⊙ ∈ {≤, <, =, >, ≥}. A ground program Π is satisfied by I if I satisfies each r ∈ Π. A non-ground rule r (resp., a program Π) is satisfied by an interpretation I iff I satisfies all groundings of r (resp., Gr(Π)). A subset-minimal set I ⊆ BU satisfying the Gelfond-Lifschitz reduct Π^I = {head(r) ← body+(r) | I ∩ body−(r) = ∅, r ∈ Gr(Π)} is called an answer set of Π. We denote the set of answer sets for a program Π by AS(Π). The tools used in this work are part of the Potassco collection [9]. The main tool of the collection is the clingo ASP solver [8].
Mining Contrast Patterns with ASP
Within the declarative framework of ASP, the transaction database D is represented by means of facts of the following two kinds: class(t, c) and db(t, f(v)). Here, t is the TID while c represents the class, f represents a feature and v its value. In particular, we introduce the fact db(t, f(v)) if and only if Dt,i = 1. So, there is a db-fact for each feature. In DPM, patterns are represented as answer sets. More precisely, a single pattern is associated with each answer set and, in our approach, is represented by means of the in_pattern/1 and absolute_diff/1 predicates. The latter expresses the difference in support of the pattern between the class under consideration and the complementary class. Each pattern conveys information that allows one to characterize the considered class.

 1  #const minSupp = 2.
 2  #const maxLength = 3.
 3  #const minDiff = 1.
 4  #const class = positive.

 6  % link facts to objects used in the encoding
 7  item(I) :- db(_, I).
 8  transaction(T) :- db(T, _).

10  % problem encoding (frequent itemset mining)
11  { in_pattern(I) } :- item(I).
12  in_support(T) :- { conflict_at(T, I) : item(I) } 0, transaction(T), class(T, class).
13  out_support(T) :- { conflict_out(T, I) : item(I) } 0, transaction(T), not class(T, class).
14  conflict_at(T, I) :- not db(T, I), in_pattern(I), transaction(T), class(T, class).
15  conflict_out(T, I) :- not db(T, I), in_pattern(I), transaction(T), not class(T, class).

17  % definition of absolute support difference (Dong et al.)
18  absolute_diff(D) :- N = #count{ T : in_support(T) }, M = #count{ T : out_support(T) }, D = |N - M|.

20  % length constraint
21  :- maxLength+1 { in_pattern(I) }.
22  :- { in_pattern(I) } 0.

24  % frequency constraint
25  :- { in_support(T) } minSupp-2.

27  % absolute growth-rate constraint
28  :- absolute_diff(D), D < minDiff.

30  % print directives for an answer set
31  #show in_pattern/1.
32  #show absolute_diff/1.

Listing 1.1. Full ASP encoding for contrast pattern mining.
The ASP encoding for the contrast pattern mining problem introduced in Sect. 2.1 is reported in Listing 1.1. The values for minSupp, minDiff and maxLength are encoded as symbolic constants. In Lines 1–4, the chosen constants are for demonstration purposes only.
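For readers who want to run the encoding programmatically, the following is a minimal sketch using the clingo Python API (the solver mentioned above). The file name contrast.lp and the toy fact base are illustrative assumptions of this sketch; each enumerated answer set corresponds to one solution (P, diff(P, α)).

import clingo

# Invented toy fact base: three transactions, two of class "positive"
facts = """
class(1, positive). class(2, positive). class(3, negative).
db(1, f1(a)). db(1, f2(b)).
db(2, f1(a)). db(2, f2(c)).
db(3, f1(a)). db(3, f2(b)).
"""

encoding = open("contrast.lp").read()   # the encoding of Listing 1.1

ctl = clingo.Control(["0"])             # "0": enumerate all answer sets
ctl.add("base", [], encoding + facts)
ctl.ground([("base", [])])

with ctl.solve(yield_=True) as handle:
    for model in handle:                # one answer set per contrast pattern
        print(model.symbols(shown=True))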
The predicate in_pattern/1 (Line 11) is true for an item i if and only if i is included in the pattern P, and it encodes the most important part of a solution (P, diff(P, α)). The predicate in_support/1 (Line 12) is true for a transaction t of class α if and only if t supports the pattern P. The intuition is that t has to support each i ∈ P, in the sense that t must include i. Additionally, we use the auxiliary predicates item/1 (Line 7, true for each item in D), transaction/1 (Line 8, true for each transaction in D) and conflict_at/2 (Line 14), which is true for (t, i) if and only if t does not support i, that is, we have the conflict Dt,i = 0 and i ∈ P, thus violating the premises. In particular, the predicates in_support/1 and conflict_at/2 encode the construction of patterns for the class α. Conversely, the predicates out_support/1 (Line 13) and conflict_out/2 (Line 15) are used to generate patterns for the complementary class. Finally, the definition of the absolute support difference is encoded at Line 18. After the pattern generation step, the encoding applies three constraints corresponding to the thresholds maxLength, minSupp, and minDiff. The first constraint is expressed by Lines 21–22 and rules out patterns having 0 items or more than maxLength items. The second constraint is expressed at Line 25: patterns supported by at most minSupp-2 instances are not allowed as an answer. The third constraint, encoded at Line 28, discards patterns with absolute support difference lower than minDiff from the answer set. The two #show commands on Lines 31–32 allow, for each answer set, the display of the atoms that compose a solution (P, diff(P, α)) to the problem in hand. The encoding and further material can be found online.
An Application in Digital Forensics
Digital Forensics (DF) is a branch of criminalistics that deals with the identification, acquisition, preservation, analysis and presentation of the information content of computer systems, or in general of digital devices, by means of specialized software, and according to specific regulations. In particular, the phase of Evidence Analysis involves examining and aggregating evidence about possible crimes and crime perpetrators collected from various electronic devices in order to reconstruct events, event sequences and scenarios related to a crime. Results from this phase are then made available to law enforcement, investigators, intelligence agencies, public prosecutors, lawyers and judges. During the investigation of a crime, it is common to analyze the communications of a particular suspect. Since nowadays mobile phones are owned by virtually everyone, it can be useful for investigators to analyze the calls or messages exchanged. The telephone records are a set of data relating to the external communications of the devices. In other words, they contain all the traces of communications (calls, SMS, and all the data traffic) concerning a specific user over a certain period of time. Note that phone records do not trace sensitive data such as the audio of calls sent or received. In fact, phone records only provide a trace of the communication that has taken place, not its content. The phone records can be requested by the Judicial Authority if deemed useful in order to carry out investigations involving the individual owner of the phone. Correctly analyzing the telephone records is essential to obtain useful hints. Depending on the analysis, different kinds of information can be extracted.
The records are typically analyzed for comparing the geographical positions with 3 A Declarative Approach to Contrast Pattern Mining respect to the declarations, and for reconstructing the network of contacts of a single user in order to trace which conversations (s)he has had with whom, where and when. In this Section we report the preliminary results obtained by applying our ASP encoding for contrast pattern mining to a dataset of phone records. 4.1 The DigForASP Dataset For our experiments we have considered a dataset that consists of the telephone records of four users from a real-world investigative case. The dataset has been made available by Prof. David Billard (University of Applied Sciences in Geneva) under NDA to DigForASP members for academic experimentation. Each file in the dataset has the following schema: – Type: what kind of operation the user has performed (e.g., incoming/outgoing call or SMS); – Caller : who makes the call or sends an SMS; – Callee: who receives the call or SMS; – Street: where the operation has taken place; – Time: when the operation has taken place (ISO format4 HH: MM: SS); – Duration: how long the operation has been (ISO format HH: MM: SS); – Date: when the operation has taken place (format: day, month, year). The type of the operation is one of the following cases: “config”, “gprs”, “redirect”, “out sms(SUB TYPE)”, “in sms(SUB TYPE)”, “out call(SUB TYPE)”, “in call(SUB TYPE)”. Sub-types are: “simple”, “ack”, “foreign”. The dataset has undergone the mandatory anonymization process for reasons of privacy and confidentiality. Therefore it does not contain data that allows tracing back to the real people involved in the investigative case. For instance, there is no phone number for the caller/callee but only a fictitious name. The names and the sizes (# rows) of the four files in the dataset are the following: Eudokia Makrembolitissa (8,783), Karen Cook McNally (20,894), Laila Lalami (12,689), and Lucy Delaney (8,480). 4.2 Preprocessing and ASP Encoding of the Dataset The DigForASP dataset in its original format cannot be considered as a set of transactions in ASP syntax. It needs to undergo a transformation into the format described in Sect. 3. In short, each row of the dataset is encoded as a collection of facts through the class and db predicates. The transformation has been done by means of a Python script. The classes refer to the operation type, namely: “in sms”, “out sms”, “in call”, “out call”, “config”, “redirect”, “gprs”. The features are: caller, callee, street a, street b, time, weekday and duration. The weekday feature does not appear in the original dataset. It has been added with the following values: (0 = Monday, ..., 6 = Sunday). The duration feature has undergone a transformation 4 Format to describe dates and times: https://en.wikipedia.org/wiki/ISO 8601. F. A. Lisi and G. Sterlicchio in order to obtain a value expressed in seconds. The time feature has been discretized into four time slots: “morning” (from 06:00:00 to 11:59:59), “afternoon” (from 12:00:00 to 17:59:59), “evening” (from 18:00:00 to 23:59:59), and “night” (from 00:00:00 to 05:59:59). Depending on the analyst’s needs, it is possible to consider (and encode) only the transactions related to specific days, months or years so as to subsequently carry out a more granular analysis. The transactions are sorted by date and time, as shown in Table 1. Table 1. ASP encoding of some transactions from Karen’s phone recordings from the morning of 07/09/2040 to the night of 08/09/2040. 
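To make the preprocessing step concrete, here is a hedged Python sketch of how one row of a phone-record file could be turned into the class/2 and db/2 facts described above. The column names, the file name and the date format ("%d/%m/%Y") are assumptions for illustration, and only a single street feature is handled, so this is not the authors' actual script.

import csv
from datetime import datetime

def time_slot(hhmmss):
    # discretisation into the four time slots described above
    h = int(hhmmss.split(":")[0])
    if 6 <= h < 12:
        return "morning"
    if 12 <= h < 18:
        return "afternoon"
    if 18 <= h < 24:
        return "evening"
    return "night"

def seconds(hhmmss):
    h, m, s = (int(x) for x in hhmmss.split(":"))
    return 3600 * h + 60 * m + s

def row_to_facts(tid, row):
    cls = row["Type"].split("(")[0].strip()                          # e.g. "out_call"
    weekday = datetime.strptime(row["Date"], "%d/%m/%Y").weekday()   # 0 = Monday
    features = [("caller", row["Caller"]), ("callee", row["Callee"]),
                ("street_a", row["Street"]), ("time", time_slot(row["Time"])),
                ("weekday", weekday), ("duration", seconds(row["Duration"]))]
    return [f"class({tid},{cls})."] + [f"db({tid},{f}({v}))." for f, v in features]

with open("karen_cook_mcnally.csv") as src:                          # assumed file name
    for tid, row in enumerate(csv.DictReader(src), start=1):
        print(" ".join(row_to_facts(tid, row)))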
For the experiments here presented we have run the ASP encoding reported in Listing 1.1 over the largest file from the DigForASP dataset, namely Karen’s phone records, made up of more than 20,000 rows. As regards the ASP solver, we have used the version 5.4.0 of clingo, with default solving parameters. The hardware and software platform used was a laptop computer with Windows 10 (with Ubuntu 20.04.4 subsystem), AMD Ryzen 5 3500U @ 2.10 GHz, 8 GB RAM without using the multi-threading mode of clingo. Multi-threading reduces the mean runtime but introduces variance due to the random allocation of tasks. Such variance is inconvenient for interpreting results with repeated executions. Exploratory Tests. During an investigation it is useful to understand what kind of information the extracted patterns can offer, in order to guide and support law enforcement in deciding the next steps to take during the investigation. In Listing 1.2, as an illustrative example of the potential usefulness of contrast pattern mining in the DF field, we report the results obtained on Karen’s phone records for the class “out call”. Here, we have set the minimum support threshold A Declarative Approach to Contrast Pattern Mining to 10% and the maximum pattern length to 3. Overall, the nine contrast patterns returned by the algorithm provide a rich information about the habits of Karen as regards outgoing calls in contrast to other types of communication. Notably, they tell us that outgoing calls of Karen are mainly done in the morning (Line 8) or in the afternoon (Line 6). In particular, the answer at Line 4 highlights that outgoing calls are made mainly on Fridays. 1 2 3 in_pattern ( caller ( karen_cook_mcnally ) ) absolute_diff (430) in_pattern ( time ( evening ) ) absolute_diff (24) in_pattern ( caller ( karen_cook_mcnally ) ) in_pattern ( time ( evening ) ) absolute_diff (129) in_pattern ( weekday (4) ) absolute_diff (14) in_pattern ( weekday (4) ) in_pattern ( caller ( karen_cook_mcnally ) ) absolute_diff (126) in_pattern ( time ( afternoon ) ) absolute_diff (34) in_pattern ( caller ( karen_cook_mcnally ) ) in_pattern ( time ( afternoon ) ) absolute_diff (202) in_pattern ( time ( morning ) ) absolute_diff (37) in_pattern ( time ( morning ) ) in_pattern ( caller ( karen_cook_mcnally ) ) absolute_diff (103) Listing 1.2. Contrast patterns for the “out call” class. Scalability Tests. With scalability tests, the goal is to assess the performance of the ASP encoding on datasets of increasing size. Once again, we have considered the file of Karen’s phone records, from we have extracted 100, 1000 and 10,000 rows for the three groups of experiments. In each group, the experiments have been conducted by varying the class for the contrast and the minimum support threshold while keeping the maximum patterns length fixed to 3. The first group of experiments considers the subset consisting of 100 rows. Observing Table 2, the class with the greatest contrast patterns concerns the “out call” operation. With this order of magnitude, the extraction times of the patterns are less than one second for all classes. In general, the memory used for this operation is at most 25 MB. The second group of experiments considers a subset consisting of 1,000 rows. From Table 3, we observe that the class with the greatest number of contrast patterns is again “out call”. 
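The shown atoms returned for each answer set can also be post-processed for inspection. The small sketch below, a convenience step of our own rather than part of the encoding, parses lines formatted as in Listing 1.2 into (pattern, difference) pairs and ranks them by absolute support difference.

# Example answer-set lines copied from Listing 1.2 (subset, for illustration)
answers = [
    "in_pattern(time(morning)) absolute_diff(37)",
    "in_pattern(time(morning)) in_pattern(caller(karen_cook_mcnally)) absolute_diff(103)",
    "in_pattern(caller(karen_cook_mcnally)) absolute_diff(430)",
]

def parse(line):
    tokens = line.split()
    items = [t[len("in_pattern("):-1] for t in tokens if t.startswith("in_pattern(")]
    diff = int(next(t for t in tokens
                    if t.startswith("absolute_diff("))[len("absolute_diff("):-1])
    return frozenset(items), diff

for pattern, d in sorted(map(parse, answers), key=lambda pd: -pd[1]):
    print(d, sorted(pattern))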
It is worthwhile to note that, with an increase in the order of magnitude from hundreds to thousands, the execution time fluctuates in a range between 5 and 10 s with a minimum percentage variation equal to 400% (Fig. 1 B). The memory consumed in this case is much higher than the previous batch of experiments since it jumps to a minimum of more than 300 MB, and a maximum that is around 460 MB (Fig. 1 C). The third group of experiments considers a subset consisting of 10,000 rows. Unlike the previous two groups, this group did not produce results because the amount of resources to be allocated to the RAM memory was so high F. A. Lisi and G. Sterlicchio Table 2. Number of patterns, execution time (seconds), solver time (seconds) and memory consumption (MB) for 100 rows from Karen’s phone records. in sms out sms Th. #Pat. Exec. t. Solv. t. Memory Th. #Pat. Exec. t. Solv. t. Memory 10% 20% 30% 40% 50% 10% 20% 30% 40% 50% 0.119 0.087 0.081 0.089 0.085 0.01 0.00 0.00 0.00 0.00 23.67 22.11 22.35 22.11 21.85 0.085 0.076 0.086 0.086 0.086 0.00 0.00 0.00 0.00 0.00 22.31 21.93 21.67 22.31 22.18 in call out call Th. #Pat. Exec. t. Solv. t. Memory Th. #Pat. Exec. t. Solv. t. Memory 10% 20% 30% 40% 50% 10% 20% 30% 40% 50% 0.137 0.118 0.120 0.086 0.084 0.03 0.01 0.01 0.00 0.00 24.22 24.01 24.01 21.98 21.44 0.136 0.122 0.128 0.121 0.117 0.03 0.02 0.01 0.01 0.01 24.23 24.23 24.23 24.23 24.44 (around 8GB) that the clingo process was killed by the operating system. Considering the pattern generation rule at Line 11 of Listing 1.1, the number of item atoms that must be combined to form the in pattern atoms is equal to 2010. Instead, in the case of 100 and 1,000 rows the number of items are respectively 180 and 670. Since the total number of combinations is defined by n n! (3) = Cn,k = k k!(n − k)! and the minimum pattern length k varies from 1 to 3 in our tests, the total number of combinations for the problem in hand is given by the sum of: – groupings of class 1: – groupings of class 2: – groupings of class 3: 2010! 1!(2010−1)! ; 2010! 2!(2010−2)! ; 2010! 3!(2010−3)! . It is clear that the computation required to solve the problem in hand is very heavy for a dataset size of tens of thousands rows or even more. Final Remarks DPM is a promising direction of research in AI. We do not expect DPM to be competitive with dedicated algorithms, but to take advantage of the versatility of declarative frameworks to propose pattern mining tools that could exploit background knowledge during the mining process to extract less but meaningful patterns. Such tools are particularly welcome in application domains where the requirement of transparency is particularly crucial. This motivation is at the A Declarative Approach to Contrast Pattern Mining Table 3. Number of patterns, execution time (sec), solver time (sec) and memory consumption (MB) for 1,000 rows from Karen’s phone records. in sms out sms Th. #Pat. Exec. t. Solv. t. Memory Th. #Pat. Exec. t Solv. t. Memory 10% 20% 30% 40% 50% 10% 20% 30% 40% 50% 5.929 5.178 4.939 4.843 4.980 0.15 0.00 0.00 0.00 0.00 427.18 345.7 345.7 345.7 345.7 4.979 4.761 4.715 4.795 4.733 0.00 0.00 0.00 0.00 0.00 336.36 325.87 336.36 336.36 323.02 in call out call Th. #Pat. Exec. t. Solv. t. Memory Th. #Pat. Exec. t. Solv. t. 
Memory 10% 20% 30% 40% 50% 10% 20% 30% 40% 50% 7.683 6.834 6.423 4.916 4.978 1.65 0.71 0.36 0.00 0.00 453.93 453.92 454.03 346.46 346.46 10.155 8.591 7.603 6.765 4.945 3.87 2.41 1.40 0.56 0.00 465.07 465.11 464.89 465.08 354.01 basis of a renewed interest of the AI community in declarative approaches. In particular, the expressive power of ASP makes the definition of algorithmic variants of the basic encoding pretty easy, mainly thanks to a clever use of constraints. Also, the availability of efficient ASP solvers encourage the use in applications characterized by combinatorial problems, such as the ones in pattern mining. Contrast Pattern Mining is an interesting class of pattern mining problems. It is somehow halfway between discrimination and characterization of a data set, due to the use of class labels to guide the search for regularities. Nevertheless, to the best of our knowledge, it has not been addressed so far in DPM research. Our declarative approach is therefore a novel contribution to pattern mining which paves the way to new exciting AI applications. In particular, due to the inherent transparency, it appears to be suitable for analysing evidence in the context of DF investigations. As a case study we have considered the analysis of a real-world dataset of anonymised phone recordings. The preliminary results are encouraging, although they highlight some weaknesses. In particular, the combinatorial explosion affects the scalability of the approach. However, when compared to sequential pattern mining on the same dataset [15,16], it is noteworthy that in contrast pattern mining the solver takes much less time. This is partially due to the fact that the labeling of transactions with classes make the search space smaller. For the future we plan to explore several directions of improvement of the work as regards efficiency and scalability. This implies different choices for the encoding, the solver, and the computing platform. Experiments could be, for instance, replicated with other ASP solvers, such as DLV2 [14], that revealed to be scalable on large datasets. Hybrid ASP-approaches to pattern mining such as [18] could be adopted. An empirical evaluation of the approach with a more F. A. Lisi and G. Sterlicchio Fig. 1. Comparison w.r.t. the number of patterns extracted (A), execution time (B) and memory consumption (C) for the “out call” class (Tables 2 and 3). A Declarative Approach to Contrast Pattern Mining performant hardware is also planned. Besides the improvement of the current work, we intend to consider other variants of the contrast pattern mining problem. In parallel to the methodological work, we would like to benefit from a tighter interaction with DF experts in order to get their feedback as regards the validity and the usefulness of our work from DF viewpoint, and their suggestions for new interesting directions of applied research in this field. Acknowledgments. This article is based upon work from COST Action 17124 “Digital forensics: evidence analysis via intelligent systems and practices (DigForASP)”, supported by COST (European Cooperation in Science and Technology). The work is also partially funded by the Universit` a degli Studi di Bari “Aldo Moro” under the 2017-2018 grant “Metodi di Intelligenza Artificiale per l’Informatica Forense”. References 1. Brewka, G., Eiter, T., Truszczynski, M.: Answer set programming at a glance. Commun. ACM 54(12), 92–103 (2011). http://doi.acm.org/10.1145/2043174.2043195 2. 
Costantini, S., De Gasperis, G., Olivieri, R.: How answer set programming can help in digital forensic investigation. In: Ancona, D., Maratea, M., Mascardi, V. (eds.) Proceedings of the 30th Italian Conference on Computational Logic, Genova, Italy, 1–3 July 2015. CEUR Workshop Proceedings, vol. 1459, pp. 53–65. CEUR-WS.org (2015). http://ceur-ws.org/Vol-1459/paper29.pdf 3. Costantini, S., De Gasperis, G., Olivieri, R.: Digital forensics and investigations meet artificial intelligence. Ann. Math. Artif. Intell. 86(1-3), 193–229 (2019). https://doi.org/10.1007/s10472-019-09632-y 4. Costantini, S., Lisi, F.A., Olivieri, R.: DigForASP: a european cooperation network for logic-based AI in digital forensics. In: Casagrande, A., Omodeo, E.G. (eds.) Proceedings of the 34th Italian Conference on Computational Logic, Trieste, Italy, 19–21 June 2019. CEUR Workshop Proceedings, vol. 2396, pp. 138–146. CEURWS.org (2019). http://ceur-ws.org/Vol-2396/paper34.pdf 5. De Raedt, L., Guns, T., Nijssen, S.: Constraint programming for data mining and machine learning. In: Twenty-Fourth AAAI Conference on Artificial Intelligence (2010) 6. Dong, G., Bailey, J.: Contrast Data Mining: Concepts, Algorithms, and Applications. CRC Press, Boca Raton (2012) 7. Gebser, M., Guyet, T., Quiniou, R., Romero, J., Schaub, T.: Knowledge-based sequence mining with ASP. In: IJCAI 2016–25th International Joint Conference on Artificial Intelligence, p. 8. AAAI (2016) 8. Gebser, M., Kaminski, R., Kaufmann, B., Schaub, T.: Clingo = ASP + control: preliminary report. arXiv preprint arXiv:1405.3694 (2014) 9. Gebser, M., Kaufmann, B., Kaminski, R., Ostrowski, M., Schaub, T., Schneider, M.: Potassco: the Potsdam answer set solving collection. AI Commun. 24(2), 107– 124 (2011) 10. Guns, T., Dries, A., Nijssen, S., Tack, G., De Raedt, L.: MiningZinc: a declarative framework for constraint-based mining. Artif. Intell. 244, 6–29 (2017) 11. Guyet, T., Moinard, Y., Quiniou, R., Schaub, T.: Efficiency analysis of ASP encodings for sequential pattern mining tasks. In: Pinaud, B., Guillet, F., Cremilleux, B., de Runz, C. (eds.) Advances in Knowledge Discovery and Management. SCI, vol. 732, pp. 41–81. Springer, Cham (2018). https://doi.org/10.1007/978-3-31965406-5 3 F. A. Lisi and G. Sterlicchio 12. Han, J., Cheng, H., Xin, D., Yan, X.: Frequent pattern mining: current status and future directions. Data Min. Knowl. Discov. 15(1), 55–86 (2007). https://doi.org/ 10.1007/s10618-006-0059-1 13. Jabbour, S., Sais, L., Salhi, Y.: Decomposition based SAT encodings for itemset mining problems. In: Cao, T., Lim, E.-P., Zhou, Z.-H., Ho, T.-B., Cheung, D., Motoda, H. (eds.) PAKDD 2015. LNCS (LNAI), vol. 9078, pp. 662–674. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-18032-8 52 14. Leone, N., et al.: Enhancing DLV for large-scale reasoning. In: Balduccini, M., Lierler, Y., Woltran, S. (eds.) Logic Programming and Nonmonotonic Reasoning - 15th International Conference, LPNMR 2019, Philadelphia, PA, USA, 3–7 June 2019, Proceedings. LNCS, vol. 11481, pp. 312–325. Springer, Cham (2019). https:// doi.org/10.1007/978-3-030-20528-7 23 15. Lisi, F.A., Sterlicchio, G.: Declarative pattern mining in digital forensics: preliminary results. In: Calegari, R., Ciatto, G., Omicini, A. (eds.) Proceedings of the 37th Italian Conference on Computational Logic, Bologna, Italy, June 29–1 July 2022. CEUR Workshop Proceedings, vol. 3204, pp. 232–246. CEUR-WS.org (2022). http://ceur-ws.org/Vol-3204/paper 23.pdf 16. 
Lisi, F.A., Sterlicchio, G.: Mining sequences in phone recordings with answer set programming. In: Bruno, P., Calimeri, F., Cauteruccio, F., Maratea, M., Terracina, G., Vallati, M. (eds.) HYDRA - RCRA 2022: 1st International Workshop on Hybrid Models for Coupling Deductive and Inductive Reasoning and 29th RCRA Workshop on Experimental Evaluation of Algorithms for Solving Problems with Combinatorial Explosion. CEUR Workshop Proceedings. CEUR-WS.org (2022) 17. Negrevergne, B., Guns, T.: Constraint-based sequence mining using constraint programming. In: Michel, L. (ed.) CPAIOR 2015. LNCS, vol. 9075, pp. 288–305. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-18008-3 20 18. Paramonov, S., Stepanova, D., Miettinen, P.: Hybrid ASP-based approach to pattern mining. Theory Pract. Log. Program. 19(4), 505–535 (2019). https://doi.org/ 10.1017/S1471068418000467 Graphs and Networks Approximate Inference in Probabilistic Answer Set Programming for Statistical Probabilities Damiano Azzolini1(B) , Elena Bellodi2 , and Fabrizio Riguzzi3 1 Dipartimento di Scienze dell’Ambiente e della Prevenzione, Universit` a di Ferrara, Ferrara, Italy [emailprotected] 2 Dipartimento di Ingegneria, Universit` a di Ferrara, Ferrara, Italy [emailprotected] Dipartimento di Matematica e Informatica, Universit` a di Ferrara, Ferrara, Italy [emailprotected] Abstract. “Type 1” statements were introduced by Halpern in 1990 with the goal to represent statistical information about a domain of interest. These are of the form “x% of the elements share the same property”. The recently proposed language PASTA (Probabilistic Answer set programming for STAtistical probabilities) extends Probabilistic Logic Programs under the Distribution Semantics and allows the definition of this type of statements. To perform exact inference, PASTA programs are converted into probabilistic answer set programs under the Credal Semantics. However, this algorithm is infeasible for scenarios when more than a few random variables are involved. Here, we propose several algorithms to perform both conditional and unconditional approximate inference in PASTA programs and test them on different benchmarks. The results show that approximate algorithms scale to hundreds of variables and thus can manage real world domains. Keywords: Probabilistic Answer Set Programming · Credal Semantics · Statistical statements · Approximate inference In [14] Halpern discusses the difference between “Type 1” (T1) and “Type 2” (T2) statements: the former describes a statistical property of the world of interest while the latter represents a degree of belief. “The probability that a random person smokes is 20%” is an example of “Type 1” statement while “John smokes with probability 30%”, where John is a particular individual, is an example of “Type 2” statement. Answer Set Programming (ASP) [7] is a powerful language that allows to easily encode complex domains. However, ASP does not allow uncertainty on the data. To handle this, we need to consider Probabilistic ASP (PASP) where the uncertainty is expressed through probabilistic facts, as done in Probabilistic c The Author(s) 2023 A. Dovier et al. (Eds.): AIxIA 2022, LNAI 13796, pp. 33–46, 2023. https://doi.org/10.1007/978-3-031-27181-6_3 D. Azzolini et al. Logic Programming [10]. We focus here on PASP under the Credal Semantics [9], where each query is associated with a probability interval defined by a lower and an upper bound. 
Recently, the authors of [3] introduced PASTA (“Probabilistic Answer set programming for STAtistical probabilities”), a new language (and software) where statistical statements are translated into PASP rules and inference is performed by converting the PASP program into an equivalent answer set program. However, performing exact inference is exponential in the number of probabilistic facts, and thus it is infeasible in the case of more than a few dozens of variables. In this paper, we propose four algorithms to perform approximate inference in PASTA programs: one for unconditional sampling and three for conditional sampling that adopt rejection sampling, Metropolis Hastings sampling, and Gibbs sampling. Empirical results show that our algorithms can handle programs with hundreds of variables. Moreover, we compare our algorithms with PASOCS [23], a solver able to perform approximate inference in PASP program under the Credal Semantics, showing that our algorithms reach a comparable accuracy in a lower execution time. The paper is structured as follows: Sect. 2 discusses some related works and Sect. 3 introduces background concepts. Section 4 describes our algorithms for approximate inference in PASTA programs that are tested in Sect. 5. Section 6 concludes the paper. Related Work PASTA [3] extends Probabilistic Logic Programming [20] under the Distribution Semantics [21] by allowing the definition of Statistical statements. Statistical statements, also referred to as “Probabilistic Conditionals”, are discussed in [16], where the authors give a semantics to T1 statements leveraging the maximum entropy principle. Under this interpretation, they consider the unique model that yields the maximum entropy. Differently from them, we consider all the models, thus obtaining a more general framework [3]. T1 statements are also studied in [15] and [24]: the former adopts the cross entropy principle to assign a semantics to T1 statements while the latter identifies only a specific model and a sharp probability value, rather than all the models and an interval for the probability, as we do. We adopt the credal semantics [9] for PASP, where the probability of a query is defined by a range. To the best of our knowledge, the only work which performs inference in PASP under the Credal Semantics is PASOCS [23]. They propose both an exact solver, which relies on the generation of all the possible combinations of facts, and an approximate one, based on sampling. We compare our approach with it in Sect. 5. Other solutions for inference in PASP consider different semantics that assign to a query a sharp probability value, such as [6,17,19,22]. Approximate Algorithms for Statistical Probabilities We assume that the reader is familiar with the basic concepts of Logic Programming. For a complete treatment of the field, see [18]. An Answer Set Programming (ASP) [7] rule has the form h1 ; ... ; hm :- b1, ... , bn. where each hi is an atom, each bi is a literal and :- is called the neck operator. The disjunction of the his is called the head while the conjunction of the bis is called the body of the rule. Particular configurations of the atoms/literals in the head/body identify specific types of rules: if the head is empty and the body is not, the rule is a constraint. Likewise, if the body is empty and the head is not, the rule is a fact, and the neck operator is usually omitted. We consider only rules where every variable also appears in a positive literal in the body. These rules are called safe. 
Finally, a rule is called ground if it does not contain variables. In addition to atoms and literals, we also consider aggregate atoms of the form γ1 ω1 #ζ{ε1; . . . ; εl} ω2 γ2, where γ1 and γ2 are constants or variables called guards, ω1 and ω2 are arithmetic comparison operators (such as >, ≥), #ζ is an aggregate function symbol (e.g., #count), and each element εj has the form t1, . . . , ti : F, with F a conjunction of literals and i > 0. Moreover, each variable in t1, . . . , ti also appears in F. We denote an answer set program with P and its Herbrand base, i.e., the set of atoms that can be constructed with all the symbols in it, as BP. An interpretation I ⊂ BP satisfies a ground rule when at least one of the hi is true in I whenever the body is true in I. A model is an interpretation that satisfies all the ground rules of a program P. The reduct [11] of a ground program Pg with respect to an interpretation I is a new program Pgr obtained from Pg by removing the rules in which a bi is false in I. Finally, an interpretation I is an answer set for P if it is a minimal model of Pgr. We consider minimality in terms of set inclusion and denote with AS(P) the set of all the answer sets of P. Probabilistic Answer Set Programming (PASP) [8] is to Answer Set Programming what Probabilistic Logic Programming [20] is to Logic Programming: it allows the definition of uncertain data through probabilistic facts. Following the ProbLog [10] syntax, these facts can be represented with Π::f, where f is a ground atom and Π is its probability. If we assign a truth value to every probabilistic fact (where ⊤ represents true and ⊥ represents false) we obtain a world, i.e., an answer set program. There are 2^n worlds for a probabilistic answer set program, where n is the number of ground probabilistic facts. Many Probabilistic Logic Programming languages rely on the distribution semantics [21], according to which the probability of a world w is computed with the formula

P(w) = ∏_{i | fi = ⊤} Πi · ∏_{i | fi = ⊥} (1 − Πi)

while the probability of a query q (a conjunction of ground literals) is computed with the formula

P(q) = ∑_{w ⊨ q} P(w)

when every world has a single answer set. For performing inference in PASP we consider the Credal Semantics [8], where every query q is associated with a probability range: the upper probability bound P̄(q) is given by the sum of the probabilities of the worlds w where there is at least one answer set of w where the query is present. Conversely, the lower probability bound P̲(q) is given by the sum of the probabilities of the worlds w where the query is present in all the answer sets of w, i.e.,

P̄(q) = ∑_{wi | ∃ m ∈ AS(wi), m ⊨ q} P(wi),    P̲(q) = ∑_{wi | |AS(wi)| > 0 ∧ ∀ m ∈ AS(wi), m ⊨ q} P(wi)

Note that the credal semantics requires that every world has at least one answer set. In the remaining part of the paper we consider only programs where this requirement is satisfied.
Example 1 (PASP Example). We consider 3 objects whose components are unknown and suppose that some of them may be made of iron with a given probability. An object made of iron may get rusty or not. We want to know the probability that a particular object is rusty. This can be modelled with:

0.2::iron(1).
0.9::iron(2).
0.6::iron(3).
rusty(X) ; not_rusty(X) :- iron(X).
:- #count{X : rusty(X), iron(X)} = RI, #count{X : iron(X)} = I, 10*RI < 6*I.

The constraint states that at least 60% of the objects made of iron are rusty. This program has 2^3 = 8 worlds. For example, the world where all the three probabilistic facts are true has 4 answer sets.
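A minimal sketch of the distribution-semantics computation just described, using only the Python standard library: it enumerates the 2^3 worlds of Example 1 and multiplies Πi or 1 − Πi depending on whether each probabilistic fact is selected.

from itertools import product

probs = {"iron(1)": 0.2, "iron(2)": 0.9, "iron(3)": 0.6}

def world_probability(selection):
    # P(w): product of Π for selected facts and (1 - Π) for unselected ones
    p = 1.0
    for fact, chosen in selection.items():
        p *= probs[fact] if chosen else 1 - probs[fact]
    return p

for bits in product([True, False], repeat=len(probs)):
    world = dict(zip(probs, bits))
    chosen = sorted(f for f, c in world.items() if c)
    print(chosen, round(world_probability(world), 4))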
If we consider the query q = rusty(1), this world only contributes to the upper probability, since the query is present only in 3 of the 4 answer sets. By considering all the worlds, we get P̲(q) = 0.092 and P̄(q) = 0.2, so the probability of the query lies in the range [0.092, 0.2]. If we want to compute the conditional probability of a query q given evidence e, P(q | e), we need to consider two different formulas for the lower and upper probability bounds [8]:

P̲(q | e) = P̲(q, e) / (P̲(q, e) + P̄(¬q, e)),    P̄(q | e) = P̄(q, e) / (P̄(q, e) + P̲(¬q, e))    (1)

Clearly, these are valid if the denominator is different from 0, otherwise the value is undefined. If we consider again Example 1 with query q = rusty(1) and evidence e = iron(2), we get P̲(q | e) = 0.08 and P̄(q | e) = 0.2. Following the syntax proposed in [3], a probabilistic conditional is a formula of the form (C | A)[Πl, Πu] stating that the fraction of As that are also Cs is between Πl and Πu. Both C and A are conjunctions of literals. To perform inference, a conditional is converted into three answer set rules: i) C ; not_C :- A, where not_C is a fresh atom representing the negation of C (cf. not_rusty in Example 1), ii) :- #count{X : C, A} = V0, #count{X : A} = V1, 10*V0 < 10*Πl*V1, and iii) :- #count{X : C, A} = V0, #count{X : A} = V1, 10*V0 > 10*Πu*V1, where X is a vector of elements containing all the variables in C and A. If Πl or Πu are respectively 0 or 1, rule ii) or iii) can be omitted. Moreover, if the probability values Πl and Πu have n decimal digits, the 10 in the multiplications above should be replaced with 10^n, because ASP cannot deal with floating point values. A PASTA program [3] is composed of a set of probabilistic facts, a set of ASP rules, and a set of probabilistic conditionals.
Example 2 (Probabilistic Conditional (PASTA program)). The following program

0.2::iron(1). 0.9::iron(2). 0.6::iron(3).
(rusty(X) | iron(X))[0.6,1].

is translated into the PASP program shown in Example 1. Rule iii) is omitted since Πu = 1. In [3] an exact inference algorithm was proposed to perform inference with probabilistic conditionals, which basically requires the enumeration of all the worlds. This is clearly infeasible when the number of variables is greater than 20–30. To overcome this issue, in the following section we present different algorithms that compute the probability interval in an approximate way based on sampling techniques.
Approximate Inference for PASTA Programs
To perform approximate inference in PASTA programs, we developed four algorithms: one for unconditional sampling (Algorithm 1) and three for conditional sampling that adopt rejection sampling (Algorithm 2), Metropolis Hastings sampling (Algorithm 3), and Gibbs sampling (Algorithm 4) [4,5]. Algorithm 1 describes the basic procedure to sample a query (without evidence) in a PASTA program. First, we keep a list of sampled worlds. Then, for a given number of times (the number of samples), we sample a world id with function SampleWorld by choosing a truth value for every probabilistic fact according to its probability. For every probabilistic fact, the process is the following: we sample a random value between 0 and 1, call it r. If r < Πi for a given probabilistic fact fi with associated probability Πi, fi is set to true, otherwise false. id is a binary string representing a world where, if the n-th digit is 0, the n-th probabilistic fact (in order of appearance in the program) is false, true otherwise.
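Before the pseudocode of Algorithm 1, the two ingredients just described can be sketched in a few lines of Python: sample_world draws a world id as a binary string according to the facts' probabilities, and the caching loop accumulates the lower/upper counters. The function check_lower_upper is a placeholder for the answer-set check (in practice performed with an ASP solver such as clingo), and the toy oracle at the end is an invented stand-in used only to make the sketch runnable.

import random

def sample_world(prob_facts):
    # binary id: "1" if the fact is sampled true, "0" otherwise
    return "".join("1" if random.random() < p else "0" for p in prob_facts.values())

def sample(query, n_samples, prob_facts, check_lower_upper):
    sampled = {}                       # cache: world id -> (lp0, up0)
    lp = up = 0
    for _ in range(n_samples):
        wid = sample_world(prob_facts)
        if wid not in sampled:
            sampled[wid] = check_lower_upper(wid, query)   # placeholder ASP call
        lp0, up0 = sampled[wid]
        lp, up = lp + lp0, up + up0
    return lp / n_samples, up / n_samples

probs = {"iron(1)": 0.2, "iron(2)": 0.9, "iron(3)": 0.6}
fake_oracle = lambda wid, q: (int(wid == "111"), int(wid[0] == "1"))  # toy stand-in
print(sample("rusty(1)", 10000, probs, fake_oracle))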
To clarify this, if we consider the program shown in Example 2, a possible world id could be 010, indicating that iron(1) is not selected, iron(2) is selected, and iron(3) is not selected. The probability of this world is (1 − 0.2) · 0.9 · (1 − 0.6) = 0.288. If we have already considered the currently sampled world, we look in the list of sampled worlds whether it contributes to the lower or upper counters (function GetContribution) and update the lower (lp) and upper (up) counters accordingly. In particular, GetContribution returns two values, one for the lower and one for the upper probability, each of which can be either 0 (the world id D. Azzolini et al. does not contribute to the probability) or 1 (the world id contributes to the probability). If, instead, the world had never been encountered before, we assign a probability value to the probabilistic facts in the program according to the truth value (probability Π for , 1 − Π for ⊥) that had been sampled (function SetFacts), we compute its contribution to the lower and upper probabilities (function CheckLowerUpper, with the same output as GetContribution), and store the results in the list of already encountered worlds (function InsertContribution). In this way, if we sample again the same world, there is no need to compute again its contribution to the two probability bounds. Once we have a number of samples equal to Samples, we simply return the number of samples computed for the lower and upper probability divided by Samples. Algorithm 1. Function Sample: computation of the unconditional probability from a PASTA program. 1: function Sample(Query, Samples, Program) 2: sampled ← {} 3: lp ← 0, up ← 0, n ← 0 4: while n ≤ Samples do 5: id ←SampleWorld(Program) 6: n←n+1 7: if id ∈ sampled then 8: up 0 , lp0 ← GetContribution(sampled, id) 9: up ← up + up 0 10: lp ← lp + lp 0 11: else 12: Program d ←SetFacts(Program, id) 13: lp 0 , up 0 ← CheckLowerUpper(Program d ) 14: lp ← lp + lp 0 15: up ← up + up 0 16: InsertContribution(sampled, id, lp 0 , up 0 ) 17: end if 18: end while lp up 19: return Samples , Samples 20: end function list of sampled worlds Samples is the number of samples a world was already sampled When we need to account also for the evidence, other algorithms should be applied, such as rejection sampling. It is described in Algorithm 2: as in Algorithm 1, we maintain a list with the already sampled worlds. Moreover, we need 4 variables to store the joint lower and upper counters of q and e (lpqe and upqe) and ¬q and e (lpnqe and upnqe), see Eq. 1. Then, with the same procedure as before, we sample a world. If we have already considered it, we retrieve its contribution from the sampled list. If not, we set the probabilistic facts according to the sampled choices, compute the contribution to the four values, update them accordingly, and store the results. lpqe 0 is 1 if both the evidence and the query are present in all the answer sets of the current world, 0 otherwise. upqe 0 is 1 if both the evidence and the query are present in at least one answer set of the current world, 0 otherwise. lpnqe 0 is 1 if the evidence is present and the query is absent in all the answer sets of the current world, 0 otherwise. upnqe 0 is 1 if the evidence is present and the query is absent in at least one answer set of the current world, 0 otherwise. As before, we return the ratio between the number of samples combined as in Eq. 1. Approximate Algorithms for Statistical Probabilities Algorithm 2 . 
Function RejectionSample: computation of the conditional probability from a PASTA program using Rejection sampling. 1: function RejectionSample(Query, Evidence, Samples, Program) 2: lpqe ← 0, upqe ← 0, lpnqe ← 0, upnqe ← 0, n ← 0, sampled ← {} 3: while n ≤ Samples do 4: id ←SampleWorld(Program) 5: n←n+1 6: if id ∈ sampled then 7: lpqe 0 , upqe 0 , lpnqe 0 , upnqe 0 ← GetContribution(sampled, id) 8: lpqe ← lpqe +lpqe0 , upqe ← upqe + upqe 0 9: lpnqe ← lpnqe + lpnqe 0 , upnqe ← upnqe + upnqe 0 10: else 11: Program d ← SetFacts(Program, id) 12: lpqe 0 , upqe 0 , lpnqe 0 , upnqe 0 ← CheckLowerUpper(Program d ) 13: lpqe ← lpqe +lpqe0 , upqe ← upqe + upqe 0 14: lpnqe ← lpnqe + lpnqe 0 , upnqe ← upnqe + upnqe 0 15: InsertContribution(sampled, id, lpqe 0 , upqe 0 , lpnqe 0 , upnqe 0 ) 16: end if 17: end while lpqe upqe 18: return lpqe + upnqe , upqe + lpnqe 19: end function In addition to rejection sampling, we developed two other algorithms that mimic Metropolis Hastings sampling (Algorithm 3) and Gibbs sampling (Algorithm 4). Algorithm 3 proceeds as follows. The overall structure is similar to Algorithm 2. However, after sampling a world, we count the number of probabilistic facts set to true (function CountTrueFacts). Then, with function CheckContribution we check whether the current world has already been considered. If so, we accept it with probability min(1, N0 /N1 ) (line 18), where N0 is the number of true probabilistic facts in the previous iteration and N1 is the number of true probabilistic facts in the current iteration. If the world was never considered before, we set the truth values of the probabilistic facts in the program (function SetFacts), compute its contribution with function CheckLowerUpper, save the values (function InsertContribution), and check whether the sample is accepted or not (line 27) with the same criteria just discussed. As for rejection sampling, we return the ratio between the number of samples combined as in Eq. 1. Finally, for Gibbs sampling (Algorithm 4), we first sample a world until e is true (function TrueEvidence), saving, as before, the already encountered worlds. Once we get a world that satisfies this requirement, we switch the truth values of Block random probabilistic facts (function SwitchBlockValues, line 19) and we check the contribution of this new world as in Algorithm 2. Also there, the return value is the one described by Eq. 1. We implemented the previously described algorithms in Python 3 and we integrated them into the PASTA1 solver [3]. We use clingo [12] to compute the answer 1 Source code and datasets available at https://github.com/damianoazzolini/pasta. D. Azzolini et al. Algorithm 3. Function MHSample: computation of the conditional probability from a PASTA program using Metropolis Hastings sampling. 
Algorithm 3. Function MHSample: computation of the conditional probability from a PASTA program using Metropolis Hastings sampling.
1: function MHSample(Query, Evidence, Samples, Program)
2:   sampled ← {}
3:   lpqe ← 0, upqe ← 0, lpnqe ← 0, upnqe ← 0, n ← 0, trueFacts0 ← 0
4:   while n ≤ Samples do
5:     id ← SampleWorld(Program)
6:     n ← n + 1
7:     trueFacts1 ← CountTrueFacts(id)
8:     lpqe′, upqe′, lpnqe′, upnqe′ ←
9:       CheckContribution(Program_d, trueFacts0, trueFacts1, id, sampled)
10:    lpqe ← lpqe + lpqe′, upqe ← upqe + upqe′
11:    lpnqe ← lpnqe + lpnqe′, upnqe ← upnqe + upnqe′
12:    trueFacts0 ← trueFacts1
13:  end while
14:  return lpqe/(lpqe + upnqe), upqe/(upqe + lpnqe)
15: end function
16: function CheckContribution(Program_d, N0, N1, id, sampled)
17:   if id ∈ sampled then
18:     if random < min(1, N0/N1) then        ▷ random is a random value ∈ [0, 1]
19:       return GetContribution(id, sampled)
20:     else
21:       return 0, 0, 0, 0
22:     end if
23:   else
24:     Program_d ← SetFacts(Program, id)
25:     lpqe′, upqe′, lpnqe′, upnqe′ ← CheckLowerUpper(Program_d)
26:     InsertContribution(sampled, id, lpqe′, upqe′, lpnqe′, upnqe′)
27:     if random < min(1, N0/N1) then
28:       return lpqe′, upqe′, lpnqe′, upnqe′
29:     else
30:       return 0, 0, 0, 0
31:     end if
32:   end if
33: end function

To assess the performance, we ran multiple experiments on a computer with an Intel Xeon E5-2630v3 running at 2.40 GHz with 16 GB of RAM. Execution times are computed with the bash command time; the reported values are taken from the real field. We consider two datasets with different configurations. The first one, iron, contains programs with the structure shown in Example 2. In this case, the size of an instance indicates the number of probabilistic facts. The second dataset, smoke, describes a network where some people are connected by a probabilistic friendship relation. In this case the size of an instance is the number of involved people. Some of the people in the network smoke. A conditional states that at least 40% of the people that have a friend that smokes are smokers. An example of an instance of size 5 is

0.5::friend(a,b).
0.5::friend(b,c).
0.5::friend(a,d).
0.5::friend(d,e).
0.5::friend(e,c).
smokes(b).
smokes(d).
(smokes(Y) | smokes(X), friend(X,Y))[0.4,1].

Algorithm 4. Function GibbsSample: computation of the conditional probability from a PASTA program using Gibbs sampling.
1: function GibbsSample(Query, Evidence, Samples, Block , Program) 2: sampledEvidence ← {}, sampledQuery ← {} 3: lpqe ← 0, upqe ← 0, lpnqe ← 0, upnqe ← 0, n ← 0 4: while n ≤ Samples do 5: ev ← false, n ← n + 1 6: while ev is false do 7: id ←SampleWorld(P rogram) 8: if id ∈ sampledEvidence then 9: ev ← sampledEvidence[id] 10: else 11: Program d ← SetFacts(Program, id) 12: if TrueEvidence(Program d ) then 13: ev ← true, sampledEvidence[id] ← true 14: else 15: sampledEvidence[id] ← false 16: end if 17: end if 18: end while 19: id s ←SwitchBlockValues(id, Block , Program, Evidence) 20: if id s ∈ sampled then 21: lpqe 0 , upqe 0 , lpnqe 0 , upnqe 0 ← GetContribution(sampled, id) 22: lpqe ← lpqe +lpqe0 , upqe ← upqe + upqe 0 23: lpnqe ← lpnqe + lpnqe 0 , upnqe ← upnqe + upnqe 0 24: else 25: Program d ← SetFacts(Program, id) 26: lpqe 0 , upqe 0 , lpnqe 0 , upnqe 0 ← CheckLowerUpper(Program d ) 27: lpqe ← lpqe +lpqe0 , upqe ← upqe + upqe 0 28: lpnqe ← lpnqe + lpnqe 0 , upnqe ← upnqe + upnqe 0 29: InsertContribution(sampled, id, lpqe 0 , upqe 0 , lpnqe 0 , upnqe 0 ) 30: end if 31: end while lpqe upqe 32: return lpqe + upnqe , upqe + lpnqe 33: end function The number of probabilistic facts follows a Barab´ asi-Albert preferential attachment model generated with the networkx [13] Python package. The initial number of nodes of the graph, n, is the size of the instance while the number of edges to connect a new node to an existing one, m, is 3. In a first set of experiments, we fixed the number of probabilistic facts, for iron, and the number of people, for smoke, to 10 and plotted the computed lower and upper probabilities and the execution time by increasing the number of samples. All the probabilistic facts have probability 0.5. The goal of these experiments is to check how many samples are needed to converge and how the execution time varies by increasing the number of samples, with a fixed program. For the iron dataset, the query q is rusty(1) and the evidence e is iron(2). Here, the exact values are P(q) = 0.009765625, P(q) = 0.5, P(q | e) = 0.001953125, and P(q | e) = 0.5. For the smoke dataset, the program has 21 connections (probabilistic facts): node 0 is connected to all the other nodes, node 2 with 4, 6, and 8, node 3 with 4, 5, and 7, node 4 with 5, 6, 7, and 9, and node 7 with 8 and 9. All the connections have probability 0.5. Nodes 2, 5, 6, 7, and 9 certainly smoke. The query q is smokes(8) and the evidence is smokes(4). The targets are P(q) = 0.158, P(q) = 0.75, P(q | e) = 0, and P(q | e) = 0.923. Results for all the four algorithms are shown in Figs. 1 (iron) and 2 (smoke). D. Azzolini et al. Fig. 1. Comparison of the sampling algorithms on the iron dataset. Straight lines are the results for PASTA while dashed lines for PASOCS. Fig. 2. Comparison of the sampling algorithms on the smoke dataset. Straight lines are the results for PASTA while dashed lines for PASOCS. In Fig. 2a the target line at 0.75 is for the upper unconditional probability. For Gibbs sampling, we set the number Block (i.e., number of probabilistic facts to resample), to 1. All the algorithms seem to stabilize after a few thousands of samples for both datasets. For iron, MH seems to slightly overestimate the upper probability. Gibbs and rejection sampling require a few seconds to take 106 samples, while Metropolis Hastings (MH) requires almost 100 s. 
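As a brief aside on the experimental setup described above, the smoke networks can be generated directly with networkx. A minimal sketch follows; the 0.5 probability on each friendship edge and m = 3 match the description above, while the fixed seed and the textual PASTA-style encoding are choices of ours.

import networkx as nx

def smoke_instance(n, m=3, seed=0):
    """Barabasi-Albert friendship graph encoded as probabilistic friend/2 facts."""
    g = nx.barabasi_albert_graph(n, m, seed=seed)
    return "\n".join(f"0.5::friend({u},{v})." for u, v in g.edges())

print(smoke_instance(10))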
However, for the smoke dataset, MH and Rejection sampling have comparable execution times (more than 100 s for 5 · 105 samples) while Gibbs is the slowest among the three. This may be due to a low probability of the evidence. We compared our results with PASOCS [23] (after translating by hand the probabilistic conditionals in PASP rules). We used the following settings: -n min n -n max -1 -ut -1 -p 300 -sb 1 -b 0 where n is the number of considered samples, n min is the minimum number of samples, n max is the maximum number of samples (-1 deactivates it), ut is the uncertainty threshold (-1 deactivates it), p is the percentile (since they estimate values with gaussians), sb is the number of samples to run at once during sampling, and b is the burnin value for Gibbs and Metropolis Hastings sampling (0 deactivates it). We do not select parallel Approximate Algorithms for Statistical Probabilities Fig. 3. Comparison of Gibbs sampling on the iron dataset. Fig. 4. Comparison of Gibbs sampling and MH on the smoke dataset. solving, since PASTA is not parallelized yet (this may be the subject of a future work). PASOCS adopts a different approach for conditional inference: at each iteration, instead of sampling a world, it updates the probabilities of the probabilistic facts and samples a world using these values. In Fig. 1b, the execution times of PASOCS for all the tested algorithms are comparable and seem to grow exponentially with the number of samples. The lines for rejection and unconditional sampling for PASTA overlap. This also happens for the lines for MH, Gibbs, and rejection sampling for PASOCS. PASOCS seems to be slower also on the smoke dataset (Fig. 2b), but the difference with PASTA is smaller. We also plotted how PASTA and PASOCS perform in terms of number of samples required to converge. In Fig. 3, we compare Gibbs sampling on the iron dataset. Here, PASTA seems to be more stable on both lower and upper probability. However, even with 5000 samples, both still underestimate the lower probability, even if the values are considerably small. In Fig. 4 we compare PASOCS and PASTA on Gibbs sampling and Metropolis Hastings sampling on the iron dataset. Also here, PASTA seems more stable, but both algorithms are not completely settled on the real probability after 5000 samples. Finally, Fig. 5 compares the unconditional sampling of PASTA and PASOCS on both datasets. Here, the results are similar: after approximately 3000 samples, the computed probability D. Azzolini et al. Fig. 5. Comparison of unconditional sampling on the iron and the smoke datasets. Fig. 6. Comparison between PASTA and PASOCS by increasing the number of probabilistic facts for the iron dataset. seems to be stabilized. In another experiment, we fixed the number of samples to 1000, increased the size of the instances for the iron dataset, and plot how the execution time varies with PASTA and PASOCS. The goal is to check how the execution time varies by increasing the number of samples. The query is rusty(1). Results are shown in Fig. 6. For PASOCS, we get a memory error starting from size 32. PASTA requires approximately 500 s to take 1000 samples on a program with the structure of Example 2 with 1500 probabilistic facts. Note again that, during sampling, we assume that every world has at least one answer set, since if we need to check this, all the worlds must be generated and clearly the inference will not scale. 
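For readers wondering what the CheckLowerUpper step boils down to, the following sketch uses the clingo Python API to enumerate the answer sets of one sampled world and derive its contribution to the lower bound (query true in every answer set) and to the upper bound (query true in at least one). This is a simplified stand-in written for illustration, not the PASTA implementation; the default query name is borrowed from the iron experiments.

import clingo

def check_lower_upper(world_program, query="rusty(1)"):
    """Return (lp', up') for a single sampled world by enumerating its answer sets."""
    ctl = clingo.Control(["0"])                 # "0": enumerate all answer sets
    ctl.add("base", [], world_program)
    ctl.ground([("base", [])])
    answer_sets = []
    ctl.solve(on_model=lambda m: answer_sets.append(
        {str(atom) for atom in m.symbols(shown=True)}))
    if not answer_sets:                         # the paper assumes every world has one
        return 0, 0
    lp = int(all(query in s for s in answer_sets))
    up = int(any(query in s for s in answer_sets))
    return lp, up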
In this paper, we propose four algorithms to perform approximate inference, both conditional and unconditional, in PASTA programs. We tested the execution time and the accuracy also against the PASOCS solver (after manually performing the conversion of probabilistic conditionals). Empirical results show that our algorithms reach a comparable accuracy in a lower execution time. As future work, we plan to better investigate the convergence of the algorithms and to develop approximate methods for abduction [1,2] in PASTA programs. Approximate Algorithms for Statistical Probabilities References 1. Azzolini, D., Bellodi, E., Ferilli, S., Riguzzi, F., Zese, R.: Abduction with probabilistic logic programming under the distribution semantics. Int. J. Approx. Reason. 142, 41–63 (2022). https://doi.org/10.1016/j.ijar.2021.11.003 2. Azzolini, D., Bellodi, E., Riguzzi, F.: Abduction in (probabilistic) answer set programming. In: Calegari, R., Ciatto, G., Omicini, A. (eds.) Proceedings of the 36th Italian Conference on Computational Logic. CEUR Workshop Proceedings, vol. 3204, pp. 90–103. Sun SITE Central Europe, Aachen, Germany (2022) 3. Azzolini, D., Bellodi, E., Riguzzi, F.: Statistical statements in probabilistic logic programming. In: Gottlob, G., Inclezan, D., Maratea, M. (eds.) Logic Programming and Nonmonotonic Reasoning (LPNMR 2022), LNCS, vol. 13416, pp. 43–55. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-15707-3 4 4. Azzolini, D., Riguzzi, F., Lamma, E.: An analysis of Gibbs sampling for probabilistic logic programs. In: Dodaro, C., et al. (eds.) Workshop on Probabilistic Logic Programming (PLP 2020). CEUR-WS, vol. 2678, pp. 1–13. Sun SITE Central Europe, Aachen, Germany (2020) 5. Azzolini, Damiano, Riguzzi, Fabrizio, Masotti, Franco, Lamma, Evelina: A comparison of MCMC sampling for probabilistic logic programming. In: Alviano, Mario, Greco, Gianluigi, Scarcello, Francesco (eds.) AI*IA 2019. LNCS (LNAI), vol. 11946, pp. 18–29. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-35166-3 2 6. Baral, C., Gelfond, M., Rushton, N.: Probabilistic reasoning with answer sets. Theor. Pract. Log. Prog. 9(1), 57–144 (2009). https://doi.org/10.1017/ S1471068408003645 7. Brewka, G., Eiter, T., Truszczy´ nski, M.: Answer set programming at a glance. Commun. ACM 54(12), 92–103 (2011). https://doi.org/10.1145/ 2043174.2043195 8. Cozman, F.G., Mau´ a, D.D.: On the semantics and complexity of probabilistic logic programs. J. Artif. Intell. Res. 60, 221–262 (2017). https://doi.org/10.1613/jair. 5482 9. Cozman, F.G., Mau´ a, D.D.: The joy of probabilistic answer set programming: Semantics, complexity, expressivity, inference. Int. J. Approx. Reason. 125, 218– 239 (2020). https://doi.org/10.1016/ j.ijar.2020.07.004 10. De Raedt, L., Kimmig, A., Toivonen, H.: ProbLog: a probabilistic Prolog and its application in link discovery. In: Veloso, M.M. (ed.) IJCAI 2007, vol. 7, pp. 2462– 2467. AAAI Press/IJCAI (2007) 11. Faber, W., Pfeifer, G., Leone, N.: Semantics and complexity of recursive aggregates in answer set programming. Artif. Intell. 175(1), 278–298 (2011). https://doi.org/ 10.1016/ j.artint.2010.04.002 12. Gebser, M., Kaminski, R., Kaufmann, B., Schaub, T.: Multi-shot ASP solving with clingo. Theory Pract. Logic Program. 19(1), 27–82 (2019). https://doi.org/ 10.1017/ S1471068418000054 13. Hagberg, A.A., Schult, D.A., Swart, P.J.: Exploring network structure, dynamics, and function using NetworkX. In: Varoquaux, G., Vaught, T., Millman, J. (eds.) 
Proceedings of the 7th Python in Science Conference, pp. 11–15. Pasadena, CA, USA (2008) 14. Halpern, J.Y.: An analysis of first-order logics of probability. Artif. Intell. 46(3), 311–350 (1990) 15. Jaeger, M.: Probabilistic reasoning in terminological logics. In: Doyle, J., Sandewall, E., Torasso, P. (eds.) 4th International Conference on Principles of Knowledge Representation and Reasoning, pp. 305–316. Morgan Kaufmann (1994). https:// doi.org/10.1016/B978-1-4832-1452-8.50124-X D. Azzolini et al. 16. Kern-Isberner, G., Thimm, M.: Novel semantical approaches to relational probabilistic conditionals. In: Proceedings of the Twelfth International Conference on Principles of Knowledge Representation and Reasoning, pp. 382–392. AAAI Press (2010) 17. Lee, J., Wang, Y.: A probabilistic extension of the stable model semantics. In: AAAI Spring Symposia (2015) 18. Lloyd, J.W.: Foundations of logic programming, 2nd edn. Springer, Heidelberg (1987). https://doi.org/10.1007/978-3-642-83189-8 19. Nickles, Matthias: A tool for probabilistic reasoning based on logic programming and first-order theories under stable model semantics. In: Michael, Loizos, Kakas, Antonis (eds.) JELIA 2016. LNCS (LNAI), vol. 10021, pp. 369–384. Springer, Cham (2016). https://doi.org/10.1007/ 978-3-319-48758-8 24 20. Riguzzi, F.: Foundations of Probabilistic Logic Programming: Languages, Semantics, Inference and Learning. River Publishers, Gistrup, Denmark (2018) 21. Sato, T.: A statistical learning method for logic programs with distribution semantics. In: Sterling, L. (ed.) ICLP 1995, pp. 715–729. MIT Press (1995). https://doi. org/10.7551/mitpress/4298.003.0069 22. Totis, P., Kimmig, A., De Raedt, L.: SMProbLog: stable model semantics in ProbLog and its applications in argumentation. arXiv preprint arXiv:2110.01990 (2021) 23. Tuckey, D., Russo, A., Broda, K.: PASOCS: a parallel approximate solver for probabilistic logic programs under the credal semantics. arXiv preprint arXiv:2105.10908 (2021) 24. Wilhelm, M., Kern-Isberner, G., Finthammer, M., Beierle, C.: Integrating typed model counting into first-order maximum entropy computations and the connection to Markov logic networks. In: Bart´ ak, R., Brawner, K.W. (eds.) Proceedings of the Thirty-Second International Florida Artificial Intelligence Research Society Conference, pp. 494–499. AAAI Press (2019) Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this chapter are included in the chapter’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. Decision Trees with a Modal Flavor Dario Della Monica1 , Giovanni Pagliarini2,3 , Guido Sciavicco3(B) , and Ionel Eduard Stan2,3 1 University of Udine, Udine, Italy [emailprotected] 2 University of Parma, Parma, Italy {giovanni.pagliarini,ioneleduard.stan}@unife.it 3 University of Ferrara, Ferrara, Italy [emailprotected] Abstract. 
Symbolic learning is the sub-field of machine learning that deals with symbolic algorithms and models, which have been known for decades and successfully applied to a variety of contexts, and of which decision trees are the quintessential expression. The main limitation of current symbolic models is the fact that they are essentially based on classical propositional logic, which implies that data with an implicit dimensional component, such as temporal, e.g., time series, or spatial data, e.g., images, cannot be properly dealt with within the standard symbolic framework. In this paper, we show how propositional logic in decision trees can be replaced with the more expressive (propositional) modal logics, and we lay down the formal bases of modal decision trees by first systematically delineating interesting and well-known properties of propositional ones and then showing how to transfer these properties to the modal case. Keywords: Machine learning from dimensional data · Decision trees · Modal logic · Learning The most iconic and fundamental separation between sub-fields of machine learning is the one between functional and symbolic learning. Functional learning is the process of learning a function that represents the theory underlying a certain phenomenon, while symbolic learning is the process of learning a logical description that represents that phenomenon. Whether one or the other approach should be preferred raised a long-standing debate among experts, which roots in the fact that functional methods tend to be more versatile and statistically accurate than symbolic ones, while symbolic methods are able to extract models that can be interpreted, explained, and then enhanced using human-expert knowledge. These characteristics of symbolic methods, both for political reasons (consider, for instance, the recent General Data Protection Regulation (GDPR) of the European Union [13], that highlights c The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 A. Dovier et al. (Eds.): AIxIA 2022, LNAI 13796, pp. 47–59, 2023. https://doi.org/10.1007/978-3-031-27181-6_4 D. Della Monica et al. the need for interpretable/explainable automatic learning-based decision-making processes, including those involving AI technologies) and technical ones (interpretable models are often easier to train, explore, integrate, and implement), are sometimes used as arguments for preferring a symbolic approach over a functional one. From a logical standpoint, canonical symbolic learning methods are all characterized by the use of propositional logic (they are, indeed, sometimes called propositional methods), and, among them, propositional decision trees are probably the best known. The origin of modern decision trees dates back to the fifties [2]; a lot of work has been done since then, which includes, among others, [4,8,10,11,15– 17], and decision tree models extracted using popular algorithms such as ID3, C4.5, and more recent ones, have been widely applied in the literature. Different decision tree models differ in their structure and the language on which they are based, but only slightly; from a structural point of view, it can be argued that virtually all such structures and learning algorithms stemmed, in some sense, from CART [4], which already contained all the fundamental ideas of decision trees. Dimensional data, such as temporal or spatial data, cannot be dealt with in a proper, native way using propositional decision trees. 
The general to-go strategy to treat dimensional data with propositional models, such as decision trees, is to flatten the dimensional component, effectively hiding it. Flattening consists in massaging the dataset in such a way that dimensional attributes become scalar ones. As an example, a multivariate time series with n temporal attributes A1 , . . . , An can be transformed by applying one or more feature extractions function to all attributes, e.g., average, minimum, maximum, and the like, to obtain (a feature representation of) an instance f1 (A1 ), f2 (A1 ), . . . , f1 (A2 ), f2 (A2 ), . . . , which can now be treated, for example, by a standard decision tree. A more general approach consists of applying the same strategy to different windows along all dimensions, e.g., intervals in the temporal case, rectangles in the spatial one, and so on, obtaining several new attributes for each original one and each feature extraction function. At the limit, each temporal (spatial, . . . ) point may become a window. As an example, a single-variate time series A with N ordered points ends up being represented as the (unordered) collection A(1), A(2), . . . , A(N ). Such a representation is called lagged (for temporal data) or flattened (for spatial ones). In this paper, we adopt a different point of view, aiming at laying down the formal bases of modal symbolic learning, by means of which dimensional datasets can be dealt with in a native way. To this end, we replace propositional logic by propositional modal logic (modal logic for short) and we enhance decision trees accordingly. Modal logic [3] generalizes propositional logic by allowing one to natively express the relationships that emerge among the different worlds, e.g., time points, time intervals, multi-dimensional areas, that contribute to describe real-world scenarios. Since modal logic can be declined into more practical languages, such as temporal and spatial logics, and dimensional data can be seen as modal data, modal symbolic learning is immediately applicable to the dimen- Modal Decision Trees sional case. Moreover, this is not the only possible application, as modal data emerge in a natural way also from non-dimensional data, like, for instance, in textual and graph-based data. Here, we introduce modal decision trees, and we systematically study their logical properties, specifically, correctness. Standard decision trees are, indeed, correct, although the nature of their presentation, mostly driven by applications, tends to hide their theoretical aspects. While we are not interested in studying efficient implementations of learning algorithms, the driving principle of the definition of modal decision trees is the preservation of the simplicity and interpretability that characterize propositional ones. As a result, modal decision tree learning algorithms can be implemented starting from any implementation of propositional ones, and working one’s way up. The paper is organized as follows. In Sect. 2, we provide some preliminary definitions and concepts. In Sect. 3, we define modal decision trees and study their properties. Then, in Sect. 4, we briefly show how modal decision trees can be applied to learn from dimensional data, before concluding. Let P be a set of propositional letters. The well-formed formulas of modal logic (ML) are obtained from the following grammar: ϕ ::= p | ¬ϕ | ϕ ∧ ϕ | ♦ϕ. The other usual Boolean connectives can be derived from them, and, as standard, we use ϕ to denote ¬♦¬ϕ. 
The modality ♦ (resp., □) is usually referred to as it is possible that (resp., it is necessary that). Modal logic is considered as archetypical of (propositional) temporal, spatial, and spatio-temporal logics, and it is a non-conservative extension of propositional logic (PL). Its semantics is given in terms of Kripke models. A Kripke model K = (W, R, V) over P consists of a (finite) set of worlds W, which contains a distinguished world w0, called initial world, a binary accessibility relation R ⊆ W × W, and a valuation function V : W → 2^P, which associates each world with the set of proposition letters that are true on it. The truth relation K, w ⊨ ϕ for a model K and a world w in it is expressed by the following clauses:

K, w ⊨ p iff p ∈ V(w);
K, w ⊨ ¬ϕ iff K, w ⊭ ϕ;
K, w ⊨ ϕ ∧ ψ iff K, w ⊨ ϕ and K, w ⊨ ψ;
K, w ⊨ ♦ϕ iff ∃v s.t. wRv and K, v ⊨ ϕ.

We write K ⊨ ϕ as an abbreviation for K, w0 ⊨ ϕ. The importance of modal logic comes from the fact that most classic temporal [5,7,14] and spatial logics [1,9] stem from (generalizations of) modal logic. Therefore, the theory of modal logic and the tools built on it can be reused to cope with more practical situations. We now introduce the notion of modal dataset and its associated problems.

Fig. 1. An example of modal dataset with 4 instances, each described by a Kripke model.

Definition 1 (Modal dataset). Let P be a set of proposition letters. A modal dataset I = {I1, . . . , Im} over P is a finite collection of m instances, each of which is a Kripke model over P, and such that I, J are not bisimilar, for each I, J ∈ I with I ≠ J, that is, there exists at least one formula ϕ ∈ ML with I ⊨ ϕ and J ⊭ ϕ. We say that I is labeled if it is equipped with a labeling function L : I → C which associates every instance with a class from a finite set C = {C1, . . . , Ck}.

In the static case, a dataset is usually defined as a collection I = {I1, . . . , Im} of m instances described, each, by the value of n distinct attributes A = {A1, . . . , An}. However, since each attribute A is associated to its finite domain dom(A), that is, the finite set of all values taken by A across I, the latter naturally induces a set of propositional letters: P = {A ⋈ a | ⋈ ∈ {. . .}, A ∈ A, a ∈ dom(A)}. Learning-wise, therefore, we can always define a static dataset as if the corresponding set of propositional letters is fixed. A modal dataset immediately generalizes a static one, by postulating that instances are described by Kripke frames in which attributes change value across different worlds. There are several scenarios that can be naturally modeled by modal, non-static datasets, instead; by way of example, dimensional datasets are characterized by each attribute in each instance being described by a d-dimensional matrix (e.g., d = 1 in the temporal case, and d = 2 in the spatial case). In such cases, once a set of feature extraction functions F = {f1, . . . , fk} is fixed, the set of induced propositional letters becomes: P = {f(A) ⋈ a | ⋈ ∈ {. . .}, A ∈ A, a ∈ dom(A), f ∈ F}.

Dimensional datasets are not the only source of modal datasets; in fact, our definition of modal dataset is more general, and captures a wide range of practical situations. In the static case two instances cannot be identical, that is, there must be a propositional formula that distinguishes them; at the modal level, this requirement translates into constraining every two instances to be non-bisimilar (see, again, [3]), that is, to be distinguishable by at least one modal formula.
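A minimal Python rendering of these definitions may help fix ideas: a Kripke model as worlds, an accessibility relation, and a valuation, together with the recursive truth check for the ♦ fragment used above. The encoding of formulas as nested tuples is our own convention, not the paper's.

from dataclasses import dataclass

@dataclass
class Kripke:
    worlds: set            # e.g. {0, 1}
    w0: int                # distinguished initial world
    R: set                 # accessibility relation: set of (w, v) pairs
    V: dict                # valuation: world -> set of true proposition letters

def holds(K, w, phi):
    """Truth at world w; formulas are 'p', ('not', f), ('and', f, g), ('dia', f)."""
    if isinstance(phi, str):                 # proposition letter
        return phi in K.V[w]
    op = phi[0]
    if op == "not":
        return not holds(K, w, phi[1])
    if op == "and":
        return holds(K, w, phi[1]) and holds(K, w, phi[2])
    if op == "dia":                          # ♦: some accessible world satisfies phi
        return any(holds(K, v, phi[1]) for (u, v) in K.R if u == w)
    raise ValueError(op)

# K ⊨ ϕ abbreviates K, w0 ⊨ ϕ
K = Kripke({0, 1}, 0, {(0, 1)}, {0: {"p"}, 1: {"q"}})
print(holds(K, K.w0, ("dia", "q")))   # True: world 1 is accessible and satisfies q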
In machine learning, several problems are associated to a labeled dataset I. Among them, a fundamental and ubiquitous one is the classification problem, that is, the problem of synthesizing an algorithm (a classifier) that is able to classify the instances of an unlabeled dataset J of the same type as I. In the symbolic context, learning a classifier from a dataset requires extracting from it the logical property that define each class, that is, its characteristic formula. Then, instances are seen as models of the considered logical formalism and the classification task is performed via model checking an instance against characteristic formulas. Although, in principle, one can be interested in learning characteristic formulas of any logic in any dataset, to modal (resp., propositional) datasets it is natural to associate modal (resp., propositional) characteristic formulas. Binary decision trees, which are typical classifiers, are binary trees whose leaves and edges are equipped with labels. Leaf labels identify the different classes an instance can belong to; edge labels are atomic logical elements which are then composed to obtain complex formulas in the considered logical formalism (in the propositional case, edge labels edges are literals and formulas are Boolean combinations). A tree associates a formula to every class it features (i.e., every label occurring in a leaf) and it classifies an instance into a class if and only if the instance satisfies the formula corresponding to that class. As there can be exponentially many leaves in a tree, the classification process can possibly require verifying the satisfaction of an instance against exponentially many formulas. However, decision trees provide an efficient mechanism for classifying an instance that does not explore the entire tree: for every node, starting from the root and going down towards the leaves, the truth of the formula associated with that node is checked against the instance to be classified and, depending on the outcome the instance is passed to the right or the left child and the process is repeated. When a leaf is reached, the instance is classified into the class that labels that leaf. Summing up, the desired properties for a family M of decision trees include: (i) correctness (every tree classifies any given instance into exactly one class); (ii) completeness (for every formula ϕ of the considered formalism, there is a decision tree τ ∈ M that realizes ϕ); and (iii) efficiency (a decision tree τ of height h must be able to classify an instance I by checking the truth of, at most, a number of formulas polynomial in h). In the rest of this paper we consider the problem of designing modal decision trees in such a way to be correct, complete, and efficient with respect to modal logic. D. Della Monica et al. Modal Decision Trees Let τ = (V, E) be a full directed finite binary tree with vertexes in V and edges in E. We denote by root(τ ) the root of τ , by V ⊆ V the set of its leaves, and by V ι the set of its internal nodes (that is, non-root and non-leaf nodes). For each non-leaf node ν we denote by (ν) (resp., (ν)) its left (resp. right) child, and by (ν) its parent. Similarly, for a tree τ , we denote by (τ ) (resp., (τ )) its left (resp. right) subtree. Finally, for a node ν, the set of its ancestors (ν included) is denoted by ∗ (ν), where ∗ is the transitive and reflexive closure of ; we also define + (ν) = ∗ (ν) \ {ν}. 
A path π τ = ν0 νh in tree τ (or, simply, π, if τ is clear form the context) of length h ≥ 0 from ν0 to νh is a finite sequence of h + 1 nodes ν0 , . . . , νh such that νi = (νi+1 ), for each i = 0, . . . , h − 1. We denote by π1 · π2 the operation of appending the path π2 to the path π1 . We also say that a path ν0 · ν1 νh is left (resp., right) if ν1 = (ν0 ) (resp., ν1 = (ν0 )). For a path π = ν0 νh , the set of its improper prefixes is denoted by prefix (π), and if ν is a node in τ , πντ (or, simply, πν , if τ is clear from the context) denotes the unique path root(τ ) ν. Finally, a branch of τ is a path πτ (or, simply, π , if τ is clear from the context) for some ∈ V . Definition 2 (modal decisions). Fixed a modal dataset I over P, the set of decisions is: Λ = { , ⊥, p, ¬p, ♦ , ⊥ | p ∈ P}. We say that p, ¬p are propositional decisions, while ♦ (resp., ⊥) are modal existential (resp., modal universal) ones, and we use the symbol λ ∈ Λ to denote a decision. For each λ ∈ Λ, the decision that corresponds to its logical negation ¬λ is univocally identified, so when λ = (resp., p, ♦ ) we use ¬λ to denote ⊥ (resp., ¬p, ⊥), and vice versa. Definition 3 (modal decision tree). Fixed a propositional alphabet P and a set of classes C, a modal decision tree τ over P and C is a structure: τ = (V, E, b, l, s), where (V, E) is a full directed finite binary tree, l : V → C is the leaf-labeling function, b : V ι → V ι is the back-edge function, s : E → Λ is the edge-labeling function, and the following conditions hold: ∀ν, ν ∈ V.(b(ν) = ν → ν ∈ ∗ (ν)); ∀ν, ν ∈ V.((b(ν) = ν ∧ b(ν ) = ν ) → b(ν) = b(ν )); ∀ν, ν , ν ∈ V.((b(ν) = ν ∧ ν ∈ + (ν ) ∧ ν ∈ + (ν)) → ν ∈ / V ) → b(ν ) = ν ); ∀(ν, ν ) ∈ E.((s(ν, ν ) ∈ {⊥, ⊥} ∧ ν ∈ ∀(ν, ν ), (ν, ν ) ∈ E.(s(ν, ν ) = ¬s(ν, ν )). 1. 2. 3. 4. 5. (b(ν ))); For every c ∈ C, we denote by leaves τ (c) (or, simply, leaves(c), when τ is clear from the context) the set of leaves of τ labeled with c. Modal Decision Trees A propositional decision tree is a modal decision tree in which edges are labeled with propositional decisions and the back-edge function plays no role (therefore, in propositional decision trees only condition 5 is still non-trivial); thus, propositional decision trees are a particular case of modal decision trees. In the following, we denote by MDT the family of modal decision trees (or modal decision tree classification model ), and by DT its propositional counterpart (that is, the sub-family of MDT that only contains propositional trees). From now on, we use the term decision tree to refer to an element of either DT or MDT . We now show how a modal decision tree defines a modal formula for each of its classes. This is obtained by associating a formula to each branch, and then the formula of a class is the disjunction of all the formulas associated to branches whose leaf is labeled with that class. In the propositional case, each branch is associated to the conjunction of the labels that occur on its edges; as every propositional formula can be written in disjunctive normal form, propositional decision trees are complete with respect to propositional logic. Modal logic does not have a normal form that allows one to bound the nesting of modal operators, and this makes the construction of formulas more complicated. Let us first fix the following useful concepts. Definition 4 (contributor, node agreement). 
Given a decision tree τ and a path π = ν0 νh , with h > 1, the contributor of π, denoted ctr (π), is defined as the only node νi in π such that νi = ν1 , 0 < i < h, and b(νi ) = ν1 , if it exists, and as ν1 otherwise. Moreover, given two nodes νi , νj ∈ π, with i, j < h, we say that they agree if νi+1 = (νi ) (resp., νi+1 = (νi )) and νj+1 = (νj ) (resp., νj+1 = (νj )), and we denote this situation by A(νi , νj ), and that they disagree (denoted by D(νi , νj )), otherwise. To our purposes, we use the following grammar to generate formulas of M L: ϕ :: = λ | λ ∧ (ϕ ∧ ϕ) | λ → (ϕ → ϕ) | ♦(ϕ ∧ ϕ) | (ϕ → ϕ), where λ ∈ Λ. Definition 5 (implicative formulas). We say that a modal formula ϕ is implicative if it has the form ψ → ξ, or (ψ → ξ), and we denote by Im the set of implicative formulas. As a matter of fact, in order to assign a formula to each leaf, and then to each class, we first associate a formula to every path (see Fig. 2 for an example). Definition 6 (path-, leaf-, and class-formula). Let τ be a decision tree. For each path π = ν0 νh in τ , the path-formula ϕτπ (or, simply, ϕπ , when τ is clear from the context) is defined inductively as: – If h = 0, then ϕπ = . – If h = 1, then ϕπ = s(ν0 , ν1 ). D. Della Monica et al. Fig. 2. On the left-hand side, an example of a modal decision tree; on the right-hand side, all relevant path-, leaf-, and class-formulas (ϕ5 is included in the second group from the top). – If h > 1, let λ = s(ν0 , ν1 ), π1 = ν1 ctr (π), and π2 = ctr (π) νh . Then, ⎧ λ ∧ (ϕπ1 ∧ ϕπ2 ) if λ = ♦ , A(ν0 , ctr (π)), and ϕπ2 ∈ / Im, ⎪ ⎪ ⎪ ⎪ , ctr (π)), and ϕ ∈ Im; or λ = ♦ , D(ν ⎪ 0 π 2 ⎪ ⎪ ⎪ → ϕ ) if λ = ♦ , D(ν , ctr (π)), and ϕ ∈ / Im, λ → (ϕ ⎪ π π 0 π 1 2 2 ⎪ ⎨ or λ = ♦ , A(ν0 , ctr (π)), and ϕπ2 ∈ Im; ϕπ = if λ = ♦ , A(ν0 , ctr (π)), and ϕπ2 ∈ / Im, ♦(ϕπ1 ∧ ϕπ2 ) ⎪ ⎪ ⎪ ⎪ , ctr (π)), and ϕ ∈ Im; or λ = ♦ , D(ν ⎪ 0 π 2 ⎪ ⎪ ⎪ → ϕ ) if λ = ♦ , D(ν , ctr (π)), and ϕ ∈ / Im, (ϕ ⎪ π π 0 π 1 2 2 ⎪ ⎩ or λ = ♦ , A(ν0 , ctr (π)), and ϕπ2 ∈ Im. Then, for each leaf ∈ V , the leaf-formula ϕτ (or, simply ϕ , when τ is clear from the context) is defined as: ϕπ . ϕ = π∈prefix (π ) Finally, for each class c, the class-formula ϕτc (or, simply, ϕc , when τ is clear from the context), is defined as: ϕπ . ϕc = ∈leaves(c) Definition 7 (run). Let τ = (V, E, b, l, s) be a modal decision tree, ν a node in τ , and I an instance in a modal dataset I. Then, the run of τ on I from ν, denoted τ (I, ν), is defined as: Modal Decision Trees (ν)) if I ϕπ if ν ∈ V (ν)) if I ϕπ ⎧ ⎪ ⎨ l(ν) τ (I, ν) = τ (I, ⎪ ⎩ τ (I, The run of τ on I (or the class assigned to I by τ ), denoted τ (I), is defined as τ (I, root(τ )). Following the above definition, a modal decision tree classifies an instance using its class-formulas, and does so by checking, progressively, the path-formulas that contribute to build a leaf-formula, which, in turn, is one of the disjuncts that take part in a class-formula. Observe that, inter alia, this implies that propositional decision trees can be seen as particular cases of modal decision trees even from a semantic point of view: formulas of the type ϕ1 ∧ ϕ2 behave exactly as in the propositional case, while those of the type ϕ1 → ϕ2 , are such that their the antecedent is always included as a conjunct in their corresponding leaf-formula, effectively reducing it to a conjunction, as in the propositional case. Now, on the one side, the efficiency of classification depends on how leafformulas are checked, while on the other side correctness and completeness depend on their semantics. 
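As a rough illustration of Definition 7 and of the top-down classification mechanism described earlier, the following sketch descends a tree by model checking the formula attached to each internal node. It deliberately ignores the back-edge function and the exact path-formula construction, so it is only a simplified approximation of modal decision trees written by us; check(instance, formula) is assumed to be a model checker, e.g. a wrapper around the holds function above applied at the initial world.

class Node:
    def __init__(self, formula=None, left=None, right=None, label=None):
        self.formula = formula   # formula checked at this node (None at leaves)
        self.left = left         # child followed when the instance satisfies the formula
        self.right = right       # child followed otherwise
        self.label = label       # class label (leaves only)

def run(tree, instance, check):
    """Classify `instance` by checking one formula per level (cf. Definition 7)."""
    node = tree
    while node.label is None:
        node = node.left if check(instance, node.formula) else node.right
    return node.label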
Let us start by evaluating the efficiency of modal decision trees. Definition 8 (efficiency). We say that a decision tree τ of height h is efficient if and only if, for every dataset I and every instance I ∈ I, it is the case that its run τ (I) can be computed in polynomial time with respect to h and to the size of I. A family of decision trees is efficient if and only all of its decision trees are efficient. The following result holds due to the fact that model checking an M L formula against a Kripke structure can be done in polynomial time in the sizes of the structure and the formula [6], and the fact that the size of the formula associated to a path is linear in the length of the path itself. Theorem 1 (efficiency of MDT ). The family MDT is efficient. Now, we want to prove that modal decision trees are correct. Definition 9 (correctness). We say that a decision tree τ is correct if and only if, for every dataset I and every instance I ∈ I, it is the case that I satisfies exactly one of its class-formulas ϕc . A family of decision trees is correct if and only all of its decision trees are correct. The following lemma can be proved by induction on the lengths of the paths, and the correctness of MDT follows. Lemma 1. Let τ be a modal decision tree, and let π1 = ν0 νh−1 · (νh−1 ) and π2 = ν0 νh−1 · (νh−1 ) be two paths. Then, ϕπ1 ↔ ¬ϕπ2 is valid. Theorem 2 (correctness of MDT ). The family MDT is correct. D. Della Monica et al. Fig. 3. Typical presentation of an implicit temporal data set. Finally, we discuss the completeness of modal decision trees with respect to modal logic. Definition 10 (completeness). We say that a family of decision trees is strong- ly complete for a logical formalism if and only if, for each of its formula ϕ, there is a decision tree τ and a class c ∈ C such that ϕc ↔ ϕ is valid. We also say that a family of decision trees is weakly complete for a logical formalisms if and only if, for each of its formula ϕ, there is a decision tree τ and two classes c, c¯ ∈ C such that ϕc → ϕ and ϕc¯ → ¬ϕ are both valid. Modal decision trees are strongly complete with respect to propositional logic by definition, and weakly complete with respect to modal logic. Lemma 2. Let ϕ ∈ ML. Then, there exists a modal decision tree τ and two leaves c , c¯ ∈ V such that ϕπc ↔ ϕ and ϕπc¯ ↔ ¬ϕ are both valid. Theorem 3 (completeness of MDT ). The family MDT is strongly complete for P L and weakly complete for M L. To show the potential of modal symbolic learning, in this section we consider two representative learning situations: from temporal data and from spatial data. As we have observed, spatial/temporal datasets can be seen as modal ones, and modal logic can be declined into suitable spatial/temporal logics that are able to describe such data. An example of dimensional dataset in the temporal case is given in Fig. 3 (left); here, m instances are described by several attributes, each one of which takes value on each of the time points that contribute to the description. Thus, this is a set of multi-variate time series; examples of real Modal Decision Trees Table 1. Test results: propositional versus modal learning from the public, 1dimensional data set NATOPS (left), and from the public, 2-dimensional data set INDIAN PINES. Performances are reported in percentage points. 
Temporal Seed Propositional 79.17 83.33 80.56 77.78 84.72 77.78 83.33 80.56 80.56 75.00 79.17 83.33 80.56 77.78 84.72 77.78 83.33 80.56 80.56 75.00 95.83 96.67 96.11 95.56 96.94 95.56 96.67 96.11 96.11 95.00 88.89 88.89 93.06 91.67 91.67 88.89 84.72 91.67 84.72 87.50 88.89 88.89 93.06 91.67 91.67 88.89 84.72 91.67 84.72 87.50 97.78 97.78 98.61 98.33 98.33 97.78 96.94 98.33 96.94 97.50 59.58 62.50 63.75 62.50 62.92 57.08 71.25 62.92 58.75 62.92 59.58 62.50 63.75 62.50 62.92 57.08 71.25 62.92 58.75 62.92 96.33 96.59 96.70 96.59 96.63 96.10 97.39 96.63 96.25 96.63 79.58 79.58 67.92 79.58 75.83 71.25 80.00 75.83 77.08 79.58 79.58 79.58 67.92 79.58 75.83 71.25 80.00 75.83 77.08 79.58 98.14 98.14 97.08 98.14 97.80 97.39 98.18 97.80 97.92 98.14 80.27 80.27 96.05 89.16 89.16 97.83 62.42 62.42 96.58 76.62 76.62 97.87 situations that can be described by sets of multi-variate time series range from hospitalized patients that are constantly monitored, to different sport activities described by the values of wearable sensors, to industrial machines whose behaviour is recorded over time. In many such situations, the relevant information is not necessarily visible at time points, but rather at time intervals, and in many cases the information to be extracted is concerned with prolonged events that take place at the same, or overlapping, or separate times, which is, again, a situation that is more naturally described with intervals rather than points. One way to extract such information is considering the multi-variate time series that corresponds to each instance, as in Fig. 3 (right), and each interval that can be built on it. Each such interval is regarded as a world, as in Fig. 3 (right), and worlds are connected through interval-interval relations. Taking the standard approach to do so results in having 12 interval-interval relations, excluding equality, that is meets (RA ), overlaps (RO ), begins (RB ), ends (RE ), during (RD ), and later (RL ); in turn, these give rise to a multi-modal logic which is known as HS (from the authors that first introduced it, Halpern and Shoham [7]) which we can use to extract knowledge from a single-dimensional dataset. In Fig. 3 (right), we have shown the relation overlaps by way of example. In the spatial case, we can generalize both the definition of world and the relations between worlds, and devise a 2-dimensional version of HS, in order to apply the same idea. We performed a simple classification experiment on two public datasets, using a prototype, simple version of MDT (available at [12]); besides being publicly available, the chosen datasets have been selected taking into account their num- D. Della Monica et al. ber of attributes and instances, and their representativeness for temporal and spatial problems. The first dataset is temporal, and known as NATOPS. It contains data generated by sensors on the hands, elbows, wrists and thumbs, in all three coordinates, along the temporal axis, of subjects performing several repetitions of aircraft hand signals, chosen among the 24 most often used ones; the problem consists in recognizing the specific signal. The second one is spatial, known as INDIAN PINES, and contains an hyperspectral image over a single landscape in Indiana (US) with 145×145 pixels, each represented by 220 spectral reflectance bands, and classified into one or more of sixteen types of crops; the problem is to recognize the type of cultivation in each pixel. 
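Returning to the interval-based reading of a time series sketched above, a small example may help: enumerate the intervals of a series of N points and test Allen's overlaps relation (RO) between two of them. The strict-ordering convention for intervals is our assumption.

def intervals(n):
    """All intervals [x, y] with 1 <= x < y <= n over a series of n points."""
    return [(x, y) for x in range(1, n + 1) for y in range(x + 1, n + 1)]

def overlaps(i, j):
    """RO: [x, y] overlaps [w, z] iff x < w < y < z."""
    (x, y), (w, z) = i, j
    return x < w < y < z

print(len(intervals(5)))            # 10 intervals on 5 points
print(overlaps((1, 3), (2, 5)))     # True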
While it would be premature to draw any conclusions from a single group of experiments, we can already see the improvement that we can expect to observe stepping from a static to a modal approach in Table 1. The results (accuracy, sensitivity, specificity) marked as modal are compared with those obtained with the same datasets using simple aggregating functions and propositional decision trees In this paper, we have shown how propositional decision trees can be generalized into modal decision trees. To this end, we have first highlighted the desirable properties of a family of decision trees in terms of efficiency of classification and logical properties, with respect to a given logical formalism. Then, we designed a family of efficient decision trees that is correct with respect to modal logic. Application-wise, we have argued that, on the one side, different kinds of data are inherently non-propositional, including dimensional (temporal, spatial, spatial/temporal) data, graph-based data, and textual data, and that, on the other side, the logical formalisms that fit such cases are inherently modal. We considered two specific dimensional cases (a temporal one and a spatial one), and executed a learning experiment comparing the performances of propositional and modal decision trees on the same problem and under the same conditions. Temporal and spatial learning have been deeply studied in the machine learning literature; our purpose here is not that of comparing the performances of learning models in absolute terms, but to show the improvement that we can expect from introducing modal logic in symbolic learning schemata. The current implementation of modal decision trees is simpler than the one presented in this paper. The problem of devising an efficient implementation of a learning algorithm that extracts full modal decision trees is still open. While the problem of extracting the optimal decision tree is knowingly NP-hard already at the propositional level, much work has been done on approximation algorithms; adapting such algorithms to this proposal, and studying their computational complexity, is an open issue as well. Finally, decision trees are not the only symbolic learning classification method that can be generalized from the propositional to the modal case; the same can be done, at least, with rule-based systems and ensembles of trees, giving rise to what could be called modal symbolic learning. Modal Decision Trees References 1. Aiello, M., van Benthem, J.: A modal walk through space. J. Appl. Non-Class. Log. 12(3–4), 319–364 (2002) 2. Belson, W.A.: A technique for studying the effects of television broadcast. J. Roy. Stat. Soc. Ser. C 5(3), 195–202 (1956) 3. Blackburn, P., de Rijke, M., Venema, Y.: Modal Logic. Cambridge University Press, Cambridge (2001) 4. Breiman, L., Friedman, J.H., Olshen, R.A., Stone, C.J.: Classification and Regression Trees. Wadsworth Publishing Company, New York (1984) 5. Clarke, E.M., Emerson, E.A.: Design and synthesis of synchronization skeletons using branching time temporal logic. In: Kozen, D. (eds.) Logics of Programs. Logic of Programs 1981. LNCS, vol. 131, pp. 52–71. Springer, Heidelberg (1982). https://doi.org/10.1007/BFb0025774 6. Clarke, E.M., Grumberg, O., Peled, D.A.: Model Checking. MIT Press, Cambridge (2001) 7. Halpern, J.Y., Shoham, Y.: A propositional modal logic of time intervals. J. ACM 38(4), 935–962 (1991) 8. Kass, G.V.: An exploratory technique for investigating large quantities of categorical data. J. Roy. Stat. Soc. Ser. 
C 29(2), 119–127 (1980) 9. Lutz, C., Wolter, F.: Modal logics of topological relations. Log. Methods Comput. Sci. 2(2), 1–41 (2006) 10. Messenger, R., Mandell, L.: A modal search technique for predictive nominal scale multivariate analysis. J. Am. Stat. Assoc. 67(340), 768–772 (1972). https://doi. org/ 10.1080/01621459.1972.10481290 11. Morgan, J.N., Sonquist, J.A.: Problems in the analysis of survey data, and a proposal. J. Am. Stat. Assoc. 58(302), 415–434 (1963). https://doi.org/10.2307/ 2283276 12. Pagliarini, G., Manzella, F., Sciavicco, G., Stan, I.E.: ModalDecisionTrees.jl: interpretable models for native time-series & image classification (v0.80) zenodo (2022). https://doi.org/10.5281/ zenodo.7040420 13. Parliament and Council of the European Union: General data protection regulation (2016). https://gdpr-info.eu/ 14. Pnueli, A.: The temporal logic of programs. In: 18th Annual Symposium on Foundations of Computer Science (SFCS 1977), pp. 46–57. IEEE (1977) 15. Quinlan, J.R.: Induction of decision trees. Mach. Learn. 1, 81–106 (1986). https:// doi.org/10.1007/BF00116251 16. Quinlan, J.R.: C4.5: Programs for Machine Learning. Morgan Kaufmann, Burlington (1993) 17. Quinlan, J.R.: Simplifying decision trees. Int. J. Hum. Comput. Stud. 51(2), 497– 510 (1999) Assisted Process Knowledge Graph Building Using Pre-trained Language Models Patrizio Bellan1,2(B) , Mauro Dragoni1 , and Chiara Ghidini1 1 Fondazione Bruno Kessler, Trento, Italy {pbellan,dragoni,ghidini}@fbk.eu Free University of Bozen-Bolzano, Bolzano, Italy Abstract. The automated construction of knowledge graphs from procedural documents is a challenging research area. Here, the lack of annotated data, as well as raw text repositories describing real-world procedural documents, make it extremely difficult to adopt deep learning approaches. Pre-trained language models have shown promising results concerning the knowledge extraction tasks from the models themselves. Although several works explored this strategy to build knowledge graph, the viability of knowledge base construction by using prompt-based learning strategy from such language models has not yet been investigated deeply. In this work, we present a prompt-based in-context learning strategy to extract, from natural language process descriptions, conceptual information that can be converted into their equivalent knowledge graphs. Such a strategy is performed in a multi-turn dialog fashion. We validate the accuracy of the proposed approach from both quantitative and qualitative perspectives. The results highlight the feasibility of the proposed approach within low-resource scenarios. Keywords: Process extraction from text · In-context learning · Knowledge graph · Pre-trained language model · Business process management The automatic building of knowledge graphs (KGs) from text is a long-standing goal in the Artificial Intelligence (AI) community that opened many challenges within specific research areas, e.g. information extraction (IE), natural language processing (NLP), and knowledge representation and reasoning (KRR). KGs aim to organize raw information with an appropriate structured form by capturing the entities described within the source repositories (represented through nodes) and their relationships (represented through labeled edges). The availability of effective KGs may trigger reasoning tasks to infer unobserved facts from observed evidence, i.e., the nodes and the labeled edges contained within the KGs. 
The building of such KGs may pass through the analysis of complex and dynamic c The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 A. Dovier et al. (Eds.): AIxIA 2022, LNAI 13796, pp. 60–74, 2023. https://doi.org/10.1007/978-3-031-27181-6_5 Process Knowledge Graph Building Using PLM textual information containing both entities and relationships that have to be included within the KG. The construction of KGs by starting from this type of information may be challenging since the relevant entities contained within these texts are common sense terms that within specific contexts can assume relevant conceptual meaning. Recent advances in NLP, like the availability of large pretrained language models (PLMs) and the introduction of novel in-context learning approaches, enable the possibility to mimic few-shot learning techniques without changing any model parameter [6,18]. Conversational information seeking (CIS) systems can be exploited to extract conceptual information from natural language text and to represent such information in a structured form within a KG. These systems are drawing growing attention in both academia and industry. They aim at supporting search and question answering (among other tasks) using multi-turn dialogues. Given the high quality of such language models as potential representations of relational knowledge, an interesting research direction to explore is to support the automatic construction of KGs through the use of PLM to understand how much conceptual and relational knowledge they can extract, how much such knowledge differs from reality, and how it is possible to make them more effective within specific contexts. In this paper, we explore the feasibility of using in-context learning to perform knowledge extraction from procedural documents in a question-and-answer multi-turn dialog fashion. To the best of our knowledge, this task is performed for the first time in the literature. An example of a multi-turn dialog is shown in Fig. 1. A user interacts with a cognitive artificial agent that mimics an expert of a specific domain. The agent guides the knowledge extraction task by answering a set of questions posed incrementally by the user. Then, KG is built on top of the answers generated by the PLM. As a representative scenario, we target the Business Process Management (BPM) area, and in particular the process extraction from text task [4]. This domain is characterized by a limited size of the gold-standard data available which is highly hampering its development. We use the Generative Pre-trained Transformer 3 model (GPT-3) [6] as the artificial agent. We explore different settings of the adopted PLM to perform in-context Fig. 1. In this example of multi-turn dialog, the artificial agent guides the construction of the process knowledge graph by answering the user. P. Bellan et al. learning: (i) no fine-tuning, (ii) by providing conceptual definitions, and (ii) by providing an extremely limited number of examples, i.e., to mimic the few-shot learning technique. Within our use case, we aim to extract entities and relations from procedural descriptions. Related Work The information extraction research area has been widely explored in the literature embracing many domains [14]. Specifically, on the use of PLMs, several works investigated their use aiming to understand both linguistic and semantic properties of possible word representations and also how PLMs can be exploited within specific knowledge and linguistic tasks. 
Compared to the aims mentioned above, our approach goes against the trend by trying to exploit PLMs to extract and store factual and common-sense knowledge with the aim of constructing KGs automatically. However, the adoption of PLMs has been investigated from several perspectives. A systematic comparison between neural-based and count-based representations has been performed in [1]. Neural-based representations were demonstrated to be more effective with respect to the count-based ones in most of the tested tasks. Hence, given the neural-based nature of PLMs, they may be considered a suitable starting point for this work. Details about the required granularity of these representations have been investigated, instead, in [12]. The capability of PLMs concerning the understanding and, in turn, the generation of sentences grammatically correct has been investigated in [15] and [20]. The former demonstrated how ungrammatical sentences do not affect the understanding capability of PLMs. The latter investigated how PLMs are suitable for being used within different domains and tasks without particular fine-tuning activities. However, even if the flexibility of PLMs is quite high, this work highlighted how it is possible to customize them to obtain more effective PLMs addressing specific tasks or to be used in specific domains. Moreover, it provides little insight into whether these models can compete with traditional approaches to representing knowledge like symbolic knowledge bases. Our work goes in this direction since we intended to verify the feasibility of PLMs in extracting symbolic knowledge from natural language texts. Finally, in [16] the authors introduced a PLM based on transformers which they called generative pre-training (GPT-1). This work evolved in two further versions: GPT-2 [17] and GPT-3 [6]. These PLMs demonstrated their suitability to work within zero-shot environments in several tasks and their ability to store factual knowledge. Moreover, the authors of GPT-3 demonstrated how it is possible to perform fine-tuning operations on the PLM to enhance its effectiveness within specific tasks or domains. Differently from state-of-the-art research, and to the best of our knowledge, this is the first investigation concerning the extraction of conceptual knowledge from text using in-context learning which aims at dealing with an entire textual description without making assumptions regarding the input text. Then, it is Process Knowledge Graph Building Using PLM done in an incremental and flexible conversational fashion to extract the required information via question-and-answer dialogues. Our work is the first attempt to use these models on this specific problem and therefore the results and lessons learned are likely to pave the way to future efforts, possibly involving different strategies, target entities and relations, and also other PLMs. Use Case We introduced in Sect. 1 how the strategy proposed in this paper is agnostic with respect to the domain. Indeed, the use of in-context learning techniques to extract knowledge from natural language text allows one to ask for specific types of information from a PLM without specifying a priori the domain of interest. To demonstrate the feasibility of the proposed solution, we rely on a use case related to the construction of small-size KGs by starting from natural language documents providing descriptions of procedures. 
This task, known as Process information extraction from text, can be regarded as the specific problem of finding a way to transform process descriptions into structured representations of different expressivity, up to the entire formal process model diagram [3,4]. We chose this task since it is highly hampered by the data scarcity issue. We extract entities from raw texts that are relevant to populate the equivalent KG, which could then be refined or used to build graphical models. We exploited a part of the PET dataset [2] to validate our strategy. Such a dataset is the only publicly available annotated gold-standard dataset specific for process extraction from text tasks. It contains 45 texts annotated with process model elements and relations (footnote 1). KGs are built by means of a question-answering style that mimics a multi-turn dialog with an expert. In our setup, the GPT-3 model acts as a domain expert.

Footnote 1: The description of the dataset, the annotation guidelines, the annotation schema, and the annotation process are out of the scope of this paper; the interested reader can find all the material at https://huggingface.co/datasets/patriziobellan/PET.

Knowledge Extraction from Text via In-Context Learning

This section describes the approach we designed and implemented to perform the extraction of knowledge, both entities and relations, from text via in-context learning. The starting point is a set of conceptual elements (entities and/or relationships) we aim at extracting, and the first building block of the approach is the formulation of a series of incremental questions (posed, e.g., by the user in Fig. 1 from Q1 to Q3) submitted to the GPT-3 model in a sequential manner, which enables the extraction of those specific entities and relationships. These questions become the specific tasks that GPT-3 has to solve with the help of specific prompts.

In our dialog pipeline, answers to a question are used as inputs to formulate further questions. For instance, as shown in the figure, firstly we ask for the list of activities described in a text (Q1) and we use the answers (A1) to populate the KG with activity nodes. Then, for each activity, we ask for the participant performing it (Q2). We use this information (A2) to populate the KG with the participant nodes and the perform relation. Finally, for each activity pair, we ask whether they stand in a following relation and we use this information (A3) to complete the KG with activity-activity relations. The overall pipeline supports both options, i.e., the use of gold information to perform each step or the re-use of the output obtained from the previous step to perform the new one. In this work, we focused on the latter strategy since we intend to investigate the capability of constructing a KG from scratch and without the usage of gold information.

The second building block of the approach is the construction of the prompt (the input fed to the model) to perform in-context learning. Prompts are generated starting from templates that are filled using two types of information: (i) contextual knowledge, which enables GPT-3 to identify the specific domain at hand and the elements to be extracted for the different tasks; and (ii) a few examples of the task at hand. Once ready, the prompts are fed into the model in order to generate the answer.

The third building block of our approach is the PLM used in the conversation to mimic an expert in the field. As motivated in Sect.
2, we decided to start from GPT-3 [6] since it is one of the state-of-the-art PLMs and it can be adopted without fine-tuning it toward a specific goal. Other transformer-like models such as BERT [9] or RoBERTa [13] could not be adopted to perform in-context learning since they usually require specific training (or fine-tuning) toward a downstream task to exhibit acceptable performance. Real-world scenarios may often not be supported by transformer-like approaches due to low-resource issues. Hence, we have decided to directly start our investigation from GPT-3 and from the notion of in-context learning since it can overcome such an issue. We, therefore, tackle the task in a question-and-answer fashion, not as an information extraction task. We used the answers generated by the model to build the knowledge graph of the process described in a text. Needless to say, this first investigation into the usage of PLMs for process extraction from text does not aim at saying the final word on this topic but, on the contrary, it aims at opening up the possibility to better investigate and exploit this kind of approach in the future.

4.1 In-Context Learning

PLMs, such as GPT-3 [6] or BERT [9], are built by using an impressive amount of data and exploiting the advances of deep learning engineering and computational power [6,18]. PLMs are becoming a hot topic in NLP as they can be adopted and fine-tuned to solve complex tasks in different domains, such as open question answering in prototypical common-sense reasoning [5]. While the fine-tuning of PLMs for task-specific applications has become standard practice in NLP in the last few years, the advent of GPT-3 greatly changed this paradigm. This model opens the possibility of injecting task-specific knowledge without doing a “classical” fine-tuning of the model parameters toward a specific downstream task. The model uses the knowledge provided in input to refine its reasoning capabilities toward the task to solve. This technique is called in-context learning: contextual knowledge, task instructions, very few examples of how to solve the task, and the actual input data are given as input all together in a prompt. The prompt is then sent as input to the model. This approach has been shown to be extremely useful to address the low-resource issue [19] and has been used to address topics ranging from medical dialogue summarization [8] to hate speech detection [11]. We illustrate the notion of prompt by showing an abstract example in Fig. 2. The prompts we used in our experiments are customizations of this prompt.

Fig. 2. Abstract example of prompt adopted in our experiment to do in-context learning.

Lines 1–3 describe the contextual knowledge component and provide the model with contextual information that is used to narrow the model’s reasoning capability to the specific context at hand (Business Process Management, in our case). This knowledge can help the model to disambiguate the different meanings of a word (e.g., activity). In our example, this component is composed of the identification of the domain (Business Process Management) and of the related definitions. Lines 4–7 describe the examples component and provide examples of the task to be solved together with the solution. It is composed of three parts containing: (i) a textual example [line 4], (ii) the task instructions to be performed upon the text [line 6], and (iii) the correct answer(s) [line 7]. In the sample prompt, we included only one example.
Lines 8–10 describe the task instructions component and provide the task instructions describing the actual problem to be solved [line 10] and the process description in input [line 9] upon which the task has to be performed. Finally, line 11 is an eliciting answer mark that tells the model that the prompt is ended and that it has to start producing an answer. At inference time, the prompt is the input fed into the model.

Implementing the Approach

While the overall approach presented here does not depend upon the particular process elements we extract, in this paper we decided to use it for the extraction of activities, participants (that is, actors in this context), the performing relation between a participant and the activity(ies) it performs, and the sequence relation between activities (hereafter directly follow relation). We focus on these four elements as they constitute the basic building blocks of any business process model, they enable the construction of a structured representation such as the one represented in Fig. 3, and they were therefore deemed an appropriate starting point for the empirical investigation of a new approach. The graph shown in Fig. 3 is related to the KG representing the procedure described in the document doc-3.3 of the PET dataset.

Fig. 3. The entities and relations contained in the KG of document 3.3 of the PET dataset. Green circles represent the activities. Orange circles represent the participants. Blue arrows represent the directly follow relations. Orange arrows represent the performing relations. (Color figure online)

The questions used to extract activities, participants, the performing relation, and the directly follow relation from text are reported in Fig. 4. As we can see in Fig. 1, these questions are performed incrementally: first, we ask questions about the process activities (Q1), then we enrich the activities with the participants performing them (Q2), and finally, we ask about the precedence relation among activities (Q3). Also, question Q2 is used to retrieve both the participant and the corresponding performing relationship. The incremental order of the questions is interesting because it mimics the way we often build conceptual models using follow-up questions. This first work does not aim at investigating this aspect in depth. We are aware that there is a growing literature corpus on prompt-based fine-tuning, as described, e.g., in [10]. But an investigation into the most efficient prompt is out of scope for this first paper.

Fig. 4. The questions adopted as task instructions in prompts.

Our in-context learning approach exploits two sources of information: contextual knowledge and a few examples related to the task at hand. For this specific paper, contextual knowledge consists of the text in Fig. 5: a preamble identifying the business process management (BPM) context and the definitions of the process elements to be extracted.

Considering the context of Business Process Management and process modeling and the following definitions:
Activity: An activity is a unit of work that can be performed by an individual or a group. It is a specific step in the process.
Participant: A participant is any individual or entity that participates in a business process. This could include individuals who initiate the process, those who respond to it, or those who are affected by it.
Process Model: A process model is a model of a process in terms of process activities and their sequence flow relations.
Flow: A flow object captures the execution flow among the process activities. It is a directional connector between activities in a Process. It defines the activities' execution order.
Sequence Flow: A Sequence Flow object defines a fixed sequential relation between two activities. Each Flow has only one source and only one target. The direction of the flow (from source to target) determines the execution order between two Activities. A sequence relation is an ordered temporal relation between a source activity and the activity that immediately follows it in the process model.

Fig. 5. Contextual knowledge provided in prompts.

Empirical Assessment

We provide below the procedure adopted to evaluate the proposed approach. We start by better specifying the tasks to be solved, then we describe the experimental settings provided by the different prompts, then the dataset used for the evaluation, and finally the obtained results. Also, even if we automatically extract the target elements (activities, participants, and relations) from the GPT-3 answers, we manually validated them all. We performed the experiments by adopting the text-davinci-001 engine and set all the other model's parameters (e.g., sampling temperature) to 0.0. We want to remark here that the comparison among different model configurations is postponed to future investigation since it is out of the scope of this paper. We performed a quantitative evaluation by applying the Graph Edit Distance (GED) [7] metric to compare the KGs created by using the gold standard annotations with the ones generated by using the information extracted from the GPT-3 PLM. Then, we provide a qualitative evaluation in which we analyze, starting from a representative example extracted from our dataset, the main pros and cons, concerning our use case, of the usage of a PLM for automatically building a KG (footnote 2).

Footnote 2: The reader may find all the material of this research at https://pdi.fbk.eu/pet/aixia/aixia2022 material.zip.

The Tasks

The overall task we are assessing is the generation of KGs starting from procedural documents. We designed a multi-turn dialog pipeline in which each interaction provides KG information about the nodes and the edges of the graph, in order to obtain a process representation similar to the one proposed in Fig. 3. To get the information required to build the KG, our dialog pipeline addresses two categories of sub-tasks: process elements extraction (activities and participants) and relations extraction (participant-performer and activity-activity relations). In the Activity extraction sub-task we customized the prompt-template task instructions with question Q1. We performed the extraction of Process participants together with the Performs relation extraction sub-task by customizing the prompt-template task instructions with question Q2. Finally, the Follows relation sub-task compared pairs of activities to assess, for each pair, whether they stand in sequential order. We customized the prompt-template task instructions with question Q3, by completing the instructions with a pair of activities at a time. We are aware that the extraction of Participants and of the Follows and Performs relations is influenced by the quality of the extraction of the activities. We want to remark here that we evaluated the proposed approach by comparing extracted graphs with the gold standard ones.
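As an illustration of how such prompts can be assembled for these sub-tasks, the following minimal Python sketch composes the contextual knowledge, the (optional) examples, the task instruction, and the eliciting answer mark for the Activity extraction sub-task (question Q1). The helper names, the example text, and the commented-out call through the legacy openai client are our own assumptions added for illustration; they are not the code used in the experiments.

# Minimal sketch (not the original implementation): assembling an in-context
# learning prompt for the Activity extraction sub-task (question Q1).
CONTEXTUAL_KNOWLEDGE = (
    "Considering the context of Business Process Management and process "
    "modeling and the following definitions:\n"
    "Activity: An activity is a unit of work that can be performed by an "
    "individual or a group. It is a specific step in the process.\n"
)

# One worked example mimicking the few-shot component (content is illustrative).
EXAMPLE_SHOT = (
    "Text: A customer sends an order. The clerk checks the order.\n"
    "Q: What are the activities of the process?\n"
    "A: sends an order, checks the order\n"
)

def build_prompt(process_description, use_defs=False, shots=()):
    """Compose contextual knowledge, examples, task instruction and the
    eliciting answer mark into a single prompt string."""
    parts = []
    if use_defs:                      # Defs / Defs+2Shots settings
        parts.append(CONTEXTUAL_KNOWLEDGE)
    parts.extend(shots)               # 2Shots / Defs+2Shots settings
    parts.append("Text: " + process_description)
    parts.append("Q: What are the activities of the process?")
    parts.append("A:")                # eliciting answer mark (line 11 in Fig. 2)
    return "\n".join(parts)

# Hypothetical call through the legacy openai client, with the engine and the
# sampling temperature set as in our experiments (error handling omitted):
# import openai
# completion = openai.Completion.create(engine="text-davinci-001",
#                                       prompt=build_prompt(text, True, [EXAMPLE_SHOT]),
#                                       temperature=0.0)
# answer = completion["choices"][0]["text"]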
In our experiments, we did not take into account the accuracy of extracting such relations by using the gold annotations provided in the PET dataset. Instead, we measure the ability of the system to extract these three elements on the basis of the activities extracted by Q1, thus measuring the effective quality of the incremental question-answering interaction.

5.2 Experimental Setting

We evaluated the proposed approach with four experimental settings. Here we adopt the terminology described in Sect. 4 to explain our experimental settings. In the Raw setting the GPT-3 model has been used as it is provided by the maintainers, without any customization. We created this setting by providing only the task instructions and the process description text to the prompt template. This setting works as a baseline to observe the capability of the native model to work within complex scenarios. We built the second experimental setting, called Defs, on top of the Raw setting. We customized the prompt template by adding contextual knowledge to narrow the model's reasoning ability. The contextual knowledge we provided is composed of the contextual information and the definitions proposed in Fig. 5. The aim was to inject domain-specific conceptual knowledge into the language model to observe the capability of the system to exploit the basic domain knowledge. The third setting, called 2Shots, was built on top of the Raw setting by adding the examples component. In our experiments, we used the gold standard annotations provided by the documents 2.2 and 10.9 of the PET dataset. Here, for the extraction of Activity and Participant, only the annotations related to activities, activity data, and participants have been provided. For the extraction of the Follows and Performs relationships, instead, only the annotations related to sequence flow and performing have been provided. This strategy has been adopted to avoid the injection of non-essential information that may cause noise in the model. Finally, in the Defs+2Shots setting we use both strategies described above: we enhanced the Defs setting with the examples component.

Fig. 6. The entities and relations contained in the KG of document 3.3 extracted using the Raw prompt.

Fig. 7. The entities and relations contained in the KG of document 3.3 extracted using the Defs prompt.

Fig. 8. The entities and relations contained in the KG of document 3.3 extracted using the 2Shots prompt. Here, false positive Follows relationships have been omitted for readability purposes.

Fig. 9. The entities and relations contained in the KG of document 3.3 extracted using the Defs+2Shots prompt. Here, false positive Follows relationships have been omitted for readability purposes.

Test Dataset

We selected 7 representative documents from the PET dataset to empirically evaluate our approach. Since the dataset is annotated with process elements and process relations, we manually constructed the gold standard graph of each test text. Table 1 reports the overall statistics of the selected documents in terms of the number of words, annotated activities, participants, and performs and follows relations.

Fig. 10. The set of false positive Follows relations contained in the KG of document 3.3 extracted using the 2Shots prompt.

Fig. 11. The set of false positive Follows relations contained in the KG of document 3.3 extracted using the Defs+2Shots prompt.
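To illustrate how the extracted answers can be assembled into a KG and then compared against the gold-standard graph through GED, the following Python sketch (ours, with made-up node and edge labels) uses the networkx library; it is only an illustration of the evaluation procedure described above, not the code used in the experiments.

import networkx as nx

def build_kg(activities, performers, follows):
    """Build a small process KG: activity and participant nodes, plus
    'performs' and 'follows' edges (labels are illustrative)."""
    kg = nx.DiGraph()
    for activity in activities:
        kg.add_node(activity, kind="activity")
    for participant, activity in performers:
        kg.add_node(participant, kind="participant")
        kg.add_edge(participant, activity, kind="performs")
    for first, second in follows:
        kg.add_edge(first, second, kind="follows")
    return kg

gold = build_kg(["check order", "ship goods"],
                [("clerk", "check order")],
                [("check order", "ship goods")])
extracted = build_kg(["check order", "ship goods"],
                     [("clerk", "check order"), ("clerk", "ship goods")],
                     [])

# GED counts the node/edge insertions, deletions and substitutions needed to
# turn one graph into the other; labels are compared through the match functions.
ged = nx.graph_edit_distance(
    gold, extracted,
    node_match=lambda a, b: a.get("kind") == b.get("kind"),
    edge_match=lambda a, b: a.get("kind") == b.get("kind"))
print(ged)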
We are aware that the analysis of seven documents has limitations from a statistical significance perspective. However, the rationale behind this empirical evaluation is two-fold. First, since this is a first observational study of a promising groundbreaking strategy, we decided to select documents having specific characteristics in order to perform an ad-hoc analysis of how the pre-trained language model worked on them. Second, the application of the proposed approach passed through several refinement rounds before being tested, since we had to understand how the pre-trained language model actually works. Hence, to better understand the impact of the information provided by us to enrich the pre-trained language model, the most suitable way was to observe such behaviors on a small but characteristic subset of documents.

Table 1. Characteristics of test set documents.

Text       word#  activity#  participant#  follow#  perform#
doc-1.2    100    10         2             10       10
doc-1.3    162    11         5             11       12
doc-3.3    71     7          2             6        4
doc-5.2    83     7          3             6        4
doc-10.1   29     4          2             4        4
doc-10.6   30     4          2             4        4
doc-10.13  39     3          2             2        3

Quantitative Evaluation

Table 2 provides the results of the empirical assessment performed. The table reports the GED measures obtained by comparing the gold standard graph with the graphs generated by each of the experimental settings. Such a measure represents the minimum amount of edit operations required to transform the gold standard graph into the generated one. The higher the measured value, the higher the difference between the two KGs. Hence, a low value means that the two KGs are similar. In general, the Raw and Defs settings registered higher GED values with respect to the Defs+2Shots and 2Shots ones. Nevertheless, the results highlighted a few interesting patterns. On average, the Raw setting registered the highest GED values. This result highlights the inability of the raw PLM to extract information that is useful for the construction of the final KG. For instance, as shown in Fig. 6, when tested with document 3.3 this prompt was able to extract only some activities, but no relations at all. As for participants, it was not able to address this extraction properly. Similarly, the Defs setting suffers from the same drawback. This is proven by the same GED value obtained in both settings in several cases, with the consequence of producing very similar graph representations. Indeed, considering Fig. 7, this customization was able to extract the activities described but failed completely to extract their relations. Also, it over-generated the participant elements and created many false-positive performer relations. An exception is given by doc-5.2, where the Defs setting outperformed the other settings. Here, the conservative strategy (i.e., not fine-tuning the model with annotated procedural documents) adopted in both the Raw and Defs settings produced slightly better results than the Defs+2Shots and 2Shots ones. Between Defs+2Shots and 2Shots, the latter proved to be the most effective one. Indeed, in several cases, e.g., doc-10.6, the 2Shots setting produced a KG similar to the gold standard one. Comparing these two prompts, for example on document 3.3 as shown in Fig. 9 and Fig. 8, both prompts are able to detect the activities and the participants described in the text. However, the Defs+2Shots prompt generated many false-positive performer relations. Two interesting trends are worth discussing.
First, the length of a text seems not to be related to the GED value obtained by each setting. This is an interesting aspect since it opens the hypothesis that the effectiveness of the model is independent of the length of a text. Future work will also focus on a deeper investigation of this aspect. The second interesting aspect is instead related to the impact of the few-shot strategy within the in-context learning approach. Here, we can observe the results by splitting the GED values observed with the Raw and Defs settings and with the 2Shots and Defs+2Shots ones. It is interesting to observe how the effectiveness of the first two settings is, generally speaking, the opposite of the other two. An example is given by the doc-3.3 document where, unexpectedly, the few-shot strategy over-produced incorrect Follows relations between activities, causing higher GED values, as shown in Fig. 11 and Fig. 10. Finally, we may state that the 2Shots and Defs+2Shots settings registered a remarkable effectiveness, demonstrating the viability of a few-shot approach integrated within an in-context learning strategy. They performed well concerning the extraction of process elements from the natural language description, even if they are inclined to generate several false Follows relations between activities.

Qualitative Analysis

The quantitative analysis conducted provided some preliminary insights about the actual performance of PLMs within the adopted four experimental settings. By analyzing the GED values from a qualitative perspective, and by taking into account also the different types of process workflow described in the textual documents we considered, we can highlight some further considerations.

First, both the Raw and Defs settings obtained very low effectiveness in the extraction of both process elements and relations. The GED values obtained were very close to their upper bounds, i.e., all extracted elements were wrong or the settings were not able to produce any result. Hence, we may state that these two settings are not good candidates for extracting process elements from a natural language text in a correct way. On the one side, by observing the Raw strategy, we may conclude that in most cases it fails concerning the extraction of all elements. This is an important point of attention because it demonstrates that PLMs per se might not be able to support the knowledge extraction task without the adoption of a fine-tuning strategy. On the other side, the Defs setting improves the Raw one a little bit, especially concerning the identification of activities. However, it demonstrated to be inadequate concerning the detection of temporal relations among activities, i.e., the Follows relation.

Table 2. Graph edit distance scores results.

Text ID    Raw   Defs  Defs+2Shots  2Shots
doc-1.2    31.0  33.0  13.0         9.0
doc-1.3    20.0  32.0  42.0         39.0
doc-3.3    12.0  14.0  30.0         17.0
doc-5.2    30.0  12.0  22.0         21.0
doc-10.1   19.0  19.0  4.0          6.0
doc-10.6   19.0  19.0  4.0          2.0
doc-10.13  15.0  15.0  13.0         5.0
Average    21.0  18.7  11.2         7.5

Second, we have already shown how both the Defs+2Shots and 2Shots settings demonstrated their effectiveness, proving the viability of a few-shot approach integrated within an in-context learning strategy. However, some issues were highlighted concerning the extraction of relations among activities. Indeed, the Defs+2Shots setting obtained good results in finding the activities themselves, but it often fails in detecting the appropriate relations between them.
On the one hand, it finds the actually existing ones. On the other hand, it finds a lot of Follows relations that are not mentioned in the original text. The trend of obtaining many incorrect relations between activities led, obviously, to higher GED values. Overall, the 2Shots demonstrated to be more balanced since (i) it was able to find all the process elements described in the text; and, (ii) it did not add too many nonexisting relations, especially the Follows ones. This is an important insight because, while the use of domain-specific definitions is, anyway, useful to improve the overall effectiveness of the extraction process, it is important to dedicate effort to detecting which may be the most appropriate definitions, e.g., not overgeneralized ones. The detection of which definitions are the most appropriate ones to instruct the model is not trivial. The PLM model may provide different semantic meanings for the same words. Hence, it is crucial to support its disambiguation capability to inject into its correct knowledge. Finally, by analyzing the process workflow contained within the natural language documents adopted, we may state that the detection of split points and parallel branches is challenging. Indeed, we observed that, in general, split points are difficult to be interpreted by the PLM given the necessity of taking into account a larger portion of text. Process Knowledge Graph Building Using PLM In this paper, we explored the feasibility of leveraging PLMs and in-context learning approach to automatically build KGs from textual documents in a questionanswering multi-turn dialog incremental manner. The results highlighted the feasibility of the in-context learning approach when deep learning based NLP techniques are used within low-resource scenarios. The results show the feasibility of our proposed methodology. This opens the possibility to use this technique to address the construction of KGs by starting from natural language text in scenarios where it is necessary to manage the low-resources issues and by exploiting the human-in-the-loop paradigm given the role of the domain expert in processing the information provided by the model. We also reported a suite of lessons learned from this experience that will drive the development of further research. References 1. Baroni, M., Dinu, G., Kruszewski, G.: Don’t count, predict! a systematic comparison of context-counting vs. context-predicting semantic vectors. In: Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL 2014), vol. 1, pp. 238–247. The Association for Computer Linguistics (2014) 2. Bellan, P., van der Aa, H., Dragoni, M., Ghidini, C., Ponzetto, S.P.: PET: an annotated dataset for process extraction from natural language text tasks. In: Cabanillas, C., Garmann-Johnsen, N.F., Koschmider, A. (eds.) Business Process Management Workshops (BPM 2022). LNBIP, vol. 460, pp. 315–321. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-25383-6 23 3. Bellan, P., Dragoni, M., Ghidini, C.: A qualitative analysis of the state of the art in process extraction from text. In: Proceedings of the AIxIA 2020 Discussion Papers Workshop Co-located with the the 19th International Conference of the Italian Association for Artificial Intelligence (AIxIA2020), Anywhere, 27th November 2020. CEUR Workshop Proceedings, vol. 2776, pp. 19–30. CEUR-WS.org (2020) 4. Bellan, P., Dragoni, M., Ghidini, C.: Process extraction from text: state of the art and challenges for the future. 
arXiv preprint arXiv:2110.03754 (2021) 5. Boratko, M., Li, X., O’Gorman, T., Das, R., Le, D., McCallum, A.: ProtoQA: a question answering dataset for prototypical common-sense reasoning. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020), pp. 1122–1136. ACL (2020) 6. Brown, T.B., et al.: Language models are few-shot learners. In: Annual Conference. on Neural Information Processing Systems (NeurIPS) (2020) 7. Bunke, H.: On a relation between graph edit distance and maximum common subgraph. Pattern Recognit. Lett. 18(8), 689–694 (1997) 8. Chintagunta, B., Katariya, N., Amatriain, X., Kannan, A.: Medically aware GPT-3 as a data generator for medical dialogue summarization. In: Proceedings of the 6th Machine Learning for Healthcare Conference Proceedings of the Machine Learning Research, vol. 149, pp. 354–372. PMLR (2021) 9. Devlin, J., Chang, M., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the NAACLHLT 2019, vol. 1, pp. 4171–4186. ACL (2019) P. Bellan et al. 10. Gao, T., Fisch, A., Chen, D.: Making pre-trained language models better few-shot learners. In: Proceedings of the ACL/IJCNLP 2021, pp. 3816–3830. ACL (2021). https://doi.org/10.18653/v1/ 2021.acl-long.295 11. Gupta, S.: Hate speech detection using OpenAI and GPT-3. Int. J. Emerging Technol. Adv. Eng. (2022) 12. Hill, F., Reichart, R., Korhonen, A.: SimLex-999: evaluating semantic models with (genuine) similarity estimation. Comput. Linguist. 41(4), 665–695 (2015) 13. Liu, Y., et al.: RoBERTa: a robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692 (2019) 14. Mart´ınez-Rodr´ıguez, J., Hogan, A., L´ opez-Ar´evalo, I.: Information extraction meets the semantic web: a survey. Semantic Web 11(2), 255–335 (2020) 15. Marvin, R., Linzen, T.: Targeted syntactic evaluation of language models. In: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 1192–1202. Association for Computational Linguistics (2018) 16. Radford, A., Narasimhan, K., Salimans, T., Sutskever, I.: Improving language understanding by generative pre-training. OpenAI Blog (2018) 17. Radford, A., et al.: Language models are unsupervised multitask learners. OpenAI Blog 1(8), 9 (2019) 18. Raffel, C., et al.: Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res. 21(1), 5485–5551 (2020) 19. Scao, T.L., Rush, A.M.: How many data points is a prompt worth? In: Proceedings of the NAACL-HLT 2021, pp. 2627–2636. ACL (2021) 20. Wang, A., Singh, A., Michael, J., Hill, F., Levy, O., Bowman, S.R.: GLUE: a multi-task benchmark and analysis platform for natural language understanding. In: Linzen, T., Chrupala, G., Alishahi, A. (eds.) Proceedings of the Workshop: Analyzing and Interpreting Neural Networks for NLP, BlackboxNLP@EMNLP 2018, pp. 353–355. ACL (2018) Neural Networks Reduction via Lumping Dalila Ressi1(B) , Riccardo Romanello1 , Carla Piazza1 , and Sabina Rossi2 1 Università di Udine, Udine, Italy {dalila.ressi,riccardo.romanello,carla.piazza} @uniud.it 2 Università Ca’ Foscari Venezia, Venice, Italy [emailprotected] Abstract. The increasing size of recently proposed Neural Networks makes it hard to implement them on embedded devices, where memory, battery and computational power are a non-trivial bottleneck. 
For this reason during the last years network compression literature has been thriving and a large number of solutions has been published to reduce both the number of operations and the parameters involved with the models. Unfortunately, most of these reducing techniques are actually heuristic methods and usually require at least one re-training step to recover the accuracy. The need of procedures for model reduction is well-known also in the fields of Verification and Performances Evaluation, where large efforts have been devoted to the definition of quotients that preserve the observable underlying behaviour. In this paper we try to bridge the gap between the most popular and very effective network reduction strategies and formal notions, such as lumpability, introduced for verification and evaluation of Markov Chains. Elaborating on lumpability we propose a pruning approach that reduces the number of neurons in a network without using any data or finetuning, while completely preserving the exact behaviour. Relaxing the constraints on the exact definition of the quotienting method we can give a formal explanation of some of the most common reduction techniques. Keywords: Neural networks · Compression · Pruning · Lumpability Since 2012, when AlexNet [29] won the famous ImageNet Large Scale Visual Recognition Challenge (ILSVRC), the number of proposed Artificial Neural Network (ANN or NN ) architectures has increased exponentially. Their intrinsic flexibility, together with the superior performance they can achieve, made neural networks the tool of choice to solve a wide variety of tasks. As these models have evolved to process large amount of data or to solve complicated tasks, their complexity has also increased at same pace [12]. Such elaborate and deep networks are the foundation of Deep Learning (DL) and they stand out both for the large number of layers they are made of and for the higher level of accuracy they can reach on difficult tasks [56]. c The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 A. Dovier et al. (Eds.): AIxIA 2022, LNAI 13796, pp. 75–90, 2023. https://doi.org/10.1007/978-3-031-27181-6_6 D. Ressi et al. While the academic community mostly focused their efforts in training large and deep models [9,28,57], being able to adopt such networks in embedded devices resulted to be a problem. Physical constraints such as battery, memory and computational power greatly limit both the number of parameters used to the define the architecture and the number of Floating Point Operations (FLOPs) required to be computed at inference time. A commonly used strategy to address this problem is called Network Compression. Compression literature has had a substantial growth during the last years, and for this reason there are many different ways to group together methods reducing a model in similar ways. Methods focusing on finding the best possible structure to solve a particular tasks can be grouped together as Architecture-related strategies. These kind of methods usually require to train the network from scratch each time the structure is modified. In particular, Neural Architecture Search (NAS) techniques aim to find the best possible architecture for a certain task with minimal human intervention [14,35,44]. This is usually made possible by modelling the search as an optimization problem and applying Reinforcement Learning (LR)-based methods to find the best architecture [3,60]. 
In this group we can also find Tensor Decomposition, where matrix decomposition/factorization principles are applied to the d-dimensional tensors in neural networks. Tensor decomposition generalizes the widely used Principal Component Analysis (PCA) and Singular Value Decomposition (SVD) to an arbitrary number of dimensions [7,19,54]. The goal of these techniques is to reduce the rank of tensors in order to efficiently decompose them into smaller ones and drastically reduce the number of operations [12]. As the rank of a tensor is usually far from being small, the most common solutions are to either to force the network to learn filters with small rank either to use an approximated decomposition [13]. Using a similar approach Lightweight or Compact Networks focus on modifying the design of the architecture such that it performs less operations while maintaining the same capability. It is the case of the MobileNet series [23,24,46], ShuffleNet series [37,59], and EfficientNet series [52,53]. They exploit the idea of using 1 × 1 filters introduced by Network in Network [32] and GoogLeNet [49,50] in their inception modules. A similar concept is explored by the SqueezeNet [26] architecture in their Fire module, where they substitute the classical convolutional layers such that they can achieve the same accuracy of AlexNet on ImageNet dataset but with a model 510 times smaller. A different methodology consists in training a big model from the start, and then Pruning superfluous parameters. In particular, Weight Pruning consists in zeroing connections or parameters already close to zero [30], but more elaborated methods can also take into consideration the impact of the single weights on the final results [18]. Even if weight pruning is a very powerful tool to reduce the network parameters [15], its major drawback is that it does not actually reduce the number of FLOPs at inference time. A more effective solution consists instead in skipping completely some of the operations. It is the case of Filter Pruning, where whole nodes or filters (in case of convolutional layers) are removed from the architecture. Pruning usually NNs Reduction via Lumping requires some degree of re-training to recover the lost accuracy due to the reduced network capability, but an interesting phenomena that happens in the early stages of pruning is that most of the times the test accuracy actually increases, due to the regularization effect that pruning unnecessary parameters has on the network. While weight pruning allows more control on what parameters to remove, filter pruning is usually the best solution compression-wise as it allows to drastically reduce the network parameters such that the models can be actually implemented in small embedded devices [45]. Another technique often used in conjunction with pruning is called quantization [17]. While pruning aims to reduce the number of parameters, quantization instead targets their precision. As the weights are usually represented by floating point numbers, it is possible to reduce the bits used for the number representation down to single bits [43], without affecting the network accuracy. In the context of performance evaluation of computer systems, stochastic models whose underlying stochastic processes are Markov chains, play a key role providing a sound high-level framework for the analysis of software and hardware architectures. 
Although the use of high-level modelling formalism greatly simplifies the specification of quantitative models (e.g., by exploiting the compositionality properties [21]), the stochastic process underlying even a very compact model may have a number of states that makes its analysis a difficult, sometimes computationally impossible, task. In order to study models with a large state space without using approximations or resorting to simulations, one can attempt to reduce the state space of the underlying Markov chain by aggregating states with equivalent behaviours. Lumpability is an aggregation technique used to cope with the state space explosion problem inherent to the computation of the stationary performance indices of large stochastic models. The lumpability method turns out to be useful on Markov chains exhibiting some structural regularity. Moreover, it allows one to efficiently compute the exact values of the performance indices when the model is actually lumpable. In the literature, several notions of lumping have been introduced: ordinary and weak lumping [27], exact lumping [47], and strict lumping [6]. With this paper we aim to link together the work of two different communities, the first one focusing on machine learning and network compression and the second one focusing on lumping-based aggregation techniques for performance evaluation. Even if a large number of possible efficient compression techniques has already been published, we aim instead to give a formal demonstration on how it is possible to deterministically remove some of the network parameters to obtain a smaller network with the same performance. Our method condenses many different concepts together, such as some of the ideas exploited by tensor decomposition methods, filter pruning and the lumpability used to evaluate the performance of complex systems. The paper is structured as follows. In Sect. 2 we provide a literature review. Section 3 gives the necessary background. Section 4 formally describes our technique exploiting exact lumpability for quotienting NN. Section 5 presents some experimental results. Finally, Sect. 6 concludes the paper. D. Ressi et al. Related Work To the best of our knowledge, the only paper similar to our work is [42], where the authors introduce the classical notion of equivalence between systems in Process Algebra to reduce a neural network into another one semantically equivalent. They propose a filter pruning technique based on some properties of the network that does not need any data to perform the compression. They also define an approximated version of their algorithm to relax some of the strong constraints they pose on the weights of the network. While data free pruning algorithms are convenient when a dataset is incomplete, unbalanced or missing, they usually achieve poorer results compared to data-based compression solutions. Indeed, most pruning techniques usually require at least one stage of fine-tuning of the model. The recovery is often performed in an iterative fashion after removing a single parameter, but there are also techniques that re-train the model only after a certain level of compression has been carried out [4]. As defined in [33] filter pruning techniques can be divided according to property importance or adaptive importance. In the first group we find pruning methods that look at intrinsic properties of the networks, and do not modify the training loss, such as [8,20,25,31,42,45]. 
Adaptive importance pruning algorithms like [34,36] usually drastically change the loss function, requiring a heavy retraining step and a search for a new proper set of hyper-parameters, despite the fact that they often achieve better performance with respect to property importance methods. Avoiding re-training the network at each pruning step as in [33,55] is usually faster than other solutions, but there is a higher risk of not being able to recover the performance. Another option consists in deciding which parameters to remove according to the impact they have on the rest of the network [40,58]. Finally, while most of the already mentioned methods focus on removing whole filters or kernels from convolutional layers, some other methods actually target only fully connected layers, or are made to compress classical neural networks [2,51].

In this section we formally introduce the notion of neural network in the style of [42]. Moreover, we recall the concept of exact lumpability as it has been defined in the context of continuous time Markov chains.

Neural Networks

A neural network is formed by a layered set of nodes or neurons, consisting of an input layer, an output layer and one or more hidden layers. Each node that does not belong to the input layer is annotated with a bias and an activation function. Moreover, there are weighted edges between nodes of adjacent layers. We use the following formal definition of neural network. For k ∈ N, we denote by [k] the set {0, 1, . . . , k}, by (k] the set {1, . . . , k}, by [k) the set {0, . . . , k − 1}, and by (k) the set {1, . . . , k − 1}.

Fig. 1. Node v behaviour on input x_1, x_2, . . . , x_m.

Definition 1 (Neural Network). A Neural Network (NN) is a tuple N = (k, Act, {S_ℓ}_{ℓ∈[k]}, {W_ℓ}_{ℓ∈(k]}, {b_ℓ}_{ℓ∈(k]}, {A_ℓ}_{ℓ∈(k]}) where:
– k is the number of layers (except the input layer);
– Act is the set of activation functions;
– for ℓ ∈ [k], S_ℓ is the set of nodes of layer ℓ, with S_ℓ ∩ S_ℓ' = ∅ for ℓ ≠ ℓ';
– for ℓ ∈ (k], W_ℓ : S_{ℓ−1} × S_ℓ → R is the weight function that associates a weight with edges between nodes at layers ℓ − 1 and ℓ;
– for ℓ ∈ (k], b_ℓ : S_ℓ → R is the bias function that associates a bias with nodes at layer ℓ;
– for ℓ ∈ (k], A_ℓ : S_ℓ → Act is the activation association function that associates an activation function with nodes of layer ℓ.
S_0 and S_k denote the nodes in the input and output layers, respectively. In the rest of the paper we will refer to NNs in which all the activation association functions are constant, i.e., all the neurons of a layer share the same activation function. Moreover, such activation functions A_ℓ are either ReLU (Rectified Linear Unit) or LeakyReLU, i.e., they are combinations of linear functions. So, from now on we omit the set Act from the definition of the NNs.

Example 1. Figure 1 shows the behaviour of node v in layer ℓ. The input values x_1, x_2, . . . , x_m are propagated by nodes u_1, u_2, . . . , u_m respectively. Node v computes the ReLU of the weighted sum of the inputs plus the bias. The result of this application is the output of v and it is propagated to z.

The operational semantics of a neural network is as follows. Let v : S_ℓ → R be a valuation for the ℓ-th layer of N and Val(S_ℓ) be the set of all valuations for the ℓ-th layer of N. The operational semantics of N, denoted by [[N]], is defined in terms of the semantics of its layers [[N]]_ℓ, where each [[N]]_ℓ associates with any valuation v for layer ℓ − 1 the corresponding valuation for layer ℓ according to the definition of N.
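As a minimal data-structure rendering of Definition 1 (our own illustration, with hypothetical names), a small NN can be encoded in Python as explicit node sets, weight functions, bias functions, and activation association functions:

from dataclasses import dataclass

@dataclass
class NN:
    k: int     # number of layers, excluding the input layer
    S: list    # S[l]: node names of layer l, for l = 0..k
    W: list    # W[l][(u, v)]: weight of the edge u -> v, for l = 1..k
    b: list    # b[l][v]: bias of node v, for l = 1..k
    A: list    # A[l][v]: activation function of node v, for l = 1..k

relu = lambda x: max(0.0, x)

# A toy network: two input nodes, one hidden node, one output node.
net = NN(
    k=2,
    S=[["x1", "x2"], ["h1"], ["o1"]],
    W=[None, {("x1", "h1"): 0.5, ("x2", "h1"): -1.0}, {("h1", "o1"): 2.0}],
    b=[None, {"h1": 0.1}, {"o1": 0.0}],
    A=[None, {"h1": relu}, {"o1": relu}],
)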
The valuation for the output layer of N is then obtained by the composition of the functions [[N]]_ℓ.

Definition 2. The semantics of the ℓ-th layer is the function [[N]]_ℓ : Val(S_{ℓ−1}) → Val(S_ℓ) where, for all v ∈ Val(S_{ℓ−1}), [[N]]_ℓ(v) = v' and, for all s' ∈ S_ℓ,
v'(s') = A_ℓ(s') ( Σ_{s ∈ S_{ℓ−1}} W_ℓ(s, s') v(s) + b_ℓ(s') ).

The input-output semantics of N is obtained by composing these one-layer semantics. More precisely, we denote by [[N]]^ℓ the composition of the first ℓ layers, so that [[N]]^ℓ(v) provides the valuation of the ℓ-th layer given v ∈ Val(S_0) as input. Formally, [[N]]^ℓ is inductively defined by:
[[N]]^1 = [[N]]_1 and [[N]]^ℓ = [[N]]_ℓ ◦ [[N]]^{ℓ−1} for all ℓ ∈ (k],
where ◦ denotes function composition. We are now in position to define the semantics of N as the input-output semantic function [[N]] defined below.

Definition 3. The input-output semantic function [[N]] : Val(S_0) → Val(S_k) is defined as [[N]] = [[N]]^k.

Lumpability

The notion of lumpability has been introduced in the context of performance and reliability analysis. It provides a model aggregation technique that can be used for generating a Markov chain that is smaller than the original one while allowing one to determine exact results for the original process. The concept of lumpability can be formalized in terms of equivalence relations over the state space of the Markov chain. Any such equivalence induces a partition on the state space of the Markov chain and aggregation is achieved by clustering equivalent states into macro-states, reducing the overall state space.

Let S be a finite state space. A (time-homogeneous) Continuous-Time Markov Chain (CTMC) over S is defined by a function Q : S × S → R such that for all u, v ∈ S with u ≠ v it holds that:
– Q(u, v) ≥ 0, and
– Σ_{v ∈ S, v ≠ u} Q(u, v) = −Q(u, u).
A CTMC defined over S by Q models a stochastic process where a transition from u to v can occur according to an exponential distribution with rate Q(u, v). Given an initial probability distribution p over the states of a CTMC, one can consider the problem of computing the probability distribution to which p converges when the time tends to infinity. This is the stationary distribution and it exists only when the chain satisfies additional constraints. The stationary distribution reveals the limit behaviour of a CTMC. Many other performance indexes and temporal logic properties can be defined for studying both the transient and limit behaviour of the chain. Different notions of lumpability have been introduced with the aim of reducing the number of states of the chain, while preserving its behaviour [1,6,22,27,38,39,47]. In particular, we consider here the notion of exact lumpability [6,47].

Definition 4 (Exact Lumpability). Let (S, Q) be a CTMC and R be an equivalence relation over S. R is an exact lumpability if for all S, S′ ∈ S/R, for all v, t ∈ S′ it holds that:
Σ_{u ∈ S} Q(u, v) = Σ_{u ∈ S} Q(u, t).

There always exists a unique maximum exact lumpability relation, which allows one to quotient the chain by taking one state for each equivalence class and replacing the rates of the incoming edges with the sum of the rates from equivalent states. The notion of exact lumpability is too demanding in many applicative domains, thus providing poor reductions. This issue is well-known for all lumpability notions that do not allow any form of approximation. With the aim of obtaining smaller quotients, still avoiding rough approximations, the notion of proportional lumpability has been presented in [38,39,41] as a relaxation of ordinary lumpability.
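To make Definition 4 concrete, the following Python sketch (our illustration, with a made-up rate matrix and candidate partitions) checks whether a partition of a small CTMC is an exact lumpability by comparing, block by block, the incoming rate sums of the states it groups together:

import numpy as np

def is_exact_lumping(Q, partition):
    """Definition 4: for every pair of blocks S, S' and every v, t in S',
    the total incoming rate from S into v equals the one into t."""
    for block_src in partition:
        rows = list(block_src)
        for block_dst in partition:
            cols = list(block_dst)
            incoming = Q[np.ix_(rows, cols)].sum(axis=0)  # one sum per state in block_dst
            if not np.allclose(incoming, incoming[0]):
                return False
    return True

# Toy 4-state chain: off-diagonal rates, diagonal set to minus the row sum.
Q = np.array([[0.0, 1.0, 1.0, 0.0],
              [2.0, 0.0, 0.0, 1.0],
              [2.0, 0.0, 0.0, 1.0],
              [0.0, 3.0, 3.0, 0.0]], dtype=float)
np.fill_diagonal(Q, -Q.sum(axis=1))

print(is_exact_lumping(Q, [{0}, {1, 2}, {3}]))   # True: states 1 and 2 receive equal rates
print(is_exact_lumping(Q, [{0, 1}, {2, 3}]))     # False for this candidate partition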
In this paper, instead, we introduce proportional exact lumpability, which is defined as follows.

Definition 5 (Proportional Exact Lumpability). Let (S, Q) be a CTMC and R be an equivalence relation over S. R is a proportional exact lumpability if there exists a function ρ : S → R_{>0} such that for all S, S′ ∈ S/R, for all v, t ∈ S′ it holds that:
Σ_{u ∈ S} Q(u, v) = (ρ(t)/ρ(v)) · Σ_{u ∈ S} Q(u, t).

It can be proved that there exists a unique maximum proportional exact lumpability, which can be computed in polynomial time. This is true also if (S, Q) is a Labelled Graph instead of a CTMC, i.e., if no constraints are imposed on Q.

Fig. 2. Proportionally exact lumpable CTMC.

Example 2. Figure 2 shows a proportionally exact lumpable Markov chain with respect to the function ρ defined as: ρ(1) = 1, ρ(2) = 3, ρ(3) = 1, ρ(4) = 3, ρ(5) = 1, ρ(6) = 3, ρ(7) = 1, ρ(8) = 1 and the equivalence classes S_1 = {1}, S_2 = {2, 3, 4}, S_3 = {5, 6, 7}, S_4 = {8}.

Lumping Neural Networks

The idea of exploiting exact lumpability for quotienting NN has been proposed in [42], where a notion of pre-sum preserving backward bisimulation has been considered. It can be easily observed that such a notion coincides with that of exact lumpability. The term (probabilistic) bisimulation is standard in the area of Model Checking, where (probabilistic) temporal logical properties are used for both specifying and synthesizing systems having a desired behaviour [5,10,11,16]. Since such logics usually formalize the behaviours in terms of forward temporal operators, the bisimulation notions tend to preserve the rates of the outgoing edges [48]. However, as proved in [42], in order to preserve the behaviour of a NN it is necessary to refer to the rates/weights of the incoming edges. This is referred to as backward probabilistic bisimulation and coincides with the well-known notion of exact lumpability used in the area of performance evaluation.

In this paper we extend the proposal of [42]. We prove that in the case of ReLU and LeakyReLU activations, proportional exact lumpability preserves the behaviour of the network, allowing one to obtain smaller quotients. It does not require any retraining step and it ensures the same behaviour on all possible inputs. Moreover, since the neural networks we refer to are acyclic, it can be computed in linear time.

Definition 6 (Proportional Exact Lumpability over a NN). Let N be a NN. Let R = ∪_{ℓ∈[k)} R_ℓ be such that R_ℓ is an equivalence relation over S_ℓ for all ℓ ∈ (k), and R_0 is the identity relation over S_0. We say that R is a proportional exact lumpability over N if for each ℓ ∈ (k) there exists ρ_ℓ : S_ℓ → R_{>0} such that for all S ∈ S_ℓ/R_ℓ, for all S′ ∈ S_{ℓ−1}/R_{ℓ−1}, for all v, t ∈ S it holds that:
ρ_ℓ(v) b_ℓ(v) = ρ_ℓ(t) b_ℓ(t),
ρ_ℓ(v) Σ_{u ∈ S′} W_ℓ(u, v) = ρ_ℓ(t) Σ_{u ∈ S′} W_ℓ(u, t).

There are some differences with respect to the definition of proportional exact lumpability over CTMCs. First, we impose that two equivalent neurons have to belong to the same layer. However, we could have omitted such a restriction from the definition and proved that neurons from different layers are never equivalent. This is an immediate consequence of the fact that we refer to acyclic NNs. Moreover, we demand that on input and output nodes the only admissible relation is the identity. This is a substantial difference. Since the nodes in the input layer have no incoming edges, the definition of proportional lumpability given over CTMCs allows one to collapse them.
However, the input nodes in NNs hold the input values that have to be propagated, so they cannot be collapsed. This is true also for the output nodes, since they represent the result of the computation. It can be proved that there always exists a unique maximum proportional exact lumpability over a NN. If we use proportional exact lumpability for reducing the dimension of a NN by collapsing the equivalent neurons, we have to modify the topology and the weights of the NN as formalized below.

Definition 7 (Proportional Reduced NN). Let N = (k, {S_ℓ}_{ℓ∈[k]}, {W_ℓ}_{ℓ∈(k]}, {b_ℓ}_{ℓ∈(k]}, {A_ℓ}_{ℓ∈(k]}) be a NN. Let R be a proportional exact lumpability over N. The NN N/R = (k, {S′_ℓ}_{ℓ∈[k]}, {W′_ℓ}_{ℓ∈(k]}, {b′_ℓ}_{ℓ∈(k]}, {A′_ℓ}_{ℓ∈(k]}) is defined by:
– S′_ℓ = {[v] | [v] ∈ S_ℓ/R}, where v is an arbitrarily chosen representative for the class;
– W′_ℓ([u], [v]) = ρ_{ℓ−1}(u) Σ_{w ∈ [u]} W_ℓ(w, v)/ρ_{ℓ−1}(w);
– b′_ℓ([v]) = b_ℓ(v);
– A′_ℓ([v]) = A_ℓ(v).

Despite the arbitrary choice of the representative, we can prove that the reduced NN's behaviour coincides with that of the initial one over all the inputs.

Theorem 1. Let N be a NN and R be a proportional exact lumpability over N. It holds that [[N/R]] = [[N]].

Proof (Sketch). Let us focus on two neurons v and t belonging to layer 1 that are equivalent in R_1. Let ReLU be the activation function for both of them. On input x_1, x_2, . . . , x_m for the nodes u_1, u_2, . . . , u_m of layer 0, the nodes v and t take values Val(v) = ReLU(Σ_{j=1}^{m} W_1(u_j, v) x_j + b_1(v)) and Val(t) = ReLU(Σ_{j=1}^{m} W_1(u_j, t) x_j + b_1(t)), respectively. However, since v and t are equivalent, it holds that:
Σ_{j=1}^{m} W_1(u_j, t) x_j + b_1(t) = (ρ_1(v)/ρ_1(t)) (Σ_{j=1}^{m} W_1(u_j, v) x_j + b_1(v)).
Since ρ_1(v) and ρ_1(t) are positive numbers, we get that:
Val(t) = ReLU(Σ_{j=1}^{m} W_1(u_j, t) x_j + b_1(t)) = (ρ_1(v)/ρ_1(t)) ReLU(Σ_{j=1}^{m} W_1(u_j, v) x_j + b_1(v)) = (ρ_1(v)/ρ_1(t)) Val(v).
Let now z be a neuron of layer 2. The value of z depends on
W_2(v, z) Val(v) + W_2(t, z) Val(t) = (W_2(v, z) + (ρ_1(v)/ρ_1(t)) W_2(t, z)) Val(v).
So, the definition of W′_2 takes care of the fact that in the reduced network v represents the equivalence class, while t has been “eliminated”. Such a definition ensures that the value of neuron z is unchanged. A formal proof can be obtained by generalizing the above arguments.

Fig. 3. Pruning one node and updating the network.

Example 3. Figure 3 shows how the pruning technique works on two nodes v, t. In particular, t's input weights are proportional to v's. The algorithm proceeds in two steps. Firstly, t is deleted together with all its input and output edges. Secondly, the weight from v to z is modified by adding ρ W_{ℓ+1}(t, z).

The maximum proportional exact lumpability over N, together with the reduced network, can be efficiently computed by proceeding top-down from layer 1 to k − 1. Since the network is acyclic, each layer is influenced only by the previous one. Hence, the computation is linear with respect to the number of edges of the network.

Theorem 2. Let N be a NN. There exists a unique maximum proportional exact lumpability R over N. Moreover, R and N/R can be computed in linear time with respect to the size of N, i.e., in time Θ(Σ_{ℓ∈(k]} |S_{ℓ−1} × S_ℓ|).

Intuitively, Theorem 1 exploits the following property of ReLU (LeakyReLU): for all y ∈ R and all r ∈ R_{>0}, ReLU(r · y) = r · ReLU(y). This allows us to remove some neurons exploiting the proportionality relation with others.
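The following numpy sketch (ours, with made-up weights) illustrates Theorem 1 and Example 3 on a toy layer: neuron t has incoming weights and bias proportional to those of v, t is pruned, and the weight from v to z is updated as in Definition 7, leaving the value of z unchanged:

import numpy as np

rng = np.random.default_rng(0)
relu = lambda x: np.maximum(x, 0.0)

# Layer 1: two inputs feeding neurons v and t; t is proportional to v with factor rho.
rho = 0.5                      # rho = rho_1(v) / rho_1(t) in the paper's notation
w_v, b_v = np.array([1.0, -2.0]), 0.3
w_t, b_t = rho * w_v, rho * b_v

# Layer 2: a single neuron z reading v and t.
w2_v, w2_t, b_z = 0.7, -1.1, 0.2

x = rng.normal(size=2)         # arbitrary input

# Original network.
val_v = relu(w_v @ x + b_v)
val_t = relu(w_t @ x + b_t)
z_full = relu(w2_v * val_v + w2_t * val_t + b_z)

# Reduced network: t is pruned, its contribution folded into the v -> z weight
# as in Definition 7 (w2_v + rho * w2_t).
z_reduced = relu((w2_v + rho * w2_t) * val_v + b_z)

print(np.isclose(z_full, z_reduced))   # True on every input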
In order to guarantee the correctness of the removal on all possible inputs, as stated in Theorem 1, it is not possible to exploit less restrictive relationships than proportionality. This fact can also be formally proved, under the hypothesis that the input set is sufficiently rich. However, one could ask what happens if we move from a simple proportionality relation to a linear dependence. For instance, what happens if in Definition 6 we relax the two equations by considering that t is a linear combination of v_1 and v_2, i.e.:
ρ_ℓ(t) b_ℓ(t) = ρ_ℓ(v_1) b_ℓ(v_1) + ρ_ℓ(v_2) b_ℓ(v_2),
ρ_ℓ(t) Σ_{u ∈ S′} W_ℓ(u, t) = ρ_ℓ(v_1) Σ_{u ∈ S′} W_ℓ(u, v_1) + ρ_ℓ(v_2) Σ_{u ∈ S′} W_ℓ(u, v_2).
In this case we could eliminate t by including its contribution on the outgoing edges of both v_1 and v_2. Unfortunately, the behaviour of the network is preserved only for those input values x_1, x_2, . . . , x_m which ensure that Σ_{j=1}^{m} W_ℓ(u_j, v_1) x_j + b_ℓ(v_1) and Σ_{j=1}^{m} W_ℓ(u_j, v_2) x_j + b_ℓ(v_2) have the same sign, since, for all y_1, y_2 ∈ R and all r_1, r_2 ∈ R_{>0}, ReLU(r_1 y_1 + r_2 y_2) = r_1 ReLU(y_1) + r_2 ReLU(y_2) iff y_1 · y_2 ≥ 0. In other terms, our analysis points out that reduction techniques based on linear combinations of neurons can be exploited without retraining the network only when strong hypotheses on the sign of the neurons hold. More sophisticated methods that exploit Principal Component Analysis can be seen as a further shift towards approximation, since they do not only involve linear combinations of neurons, but also a base change and the elimination of the less significant dimensions.

Experimental Results

To assess the robustness of our method we set up some simple experiments where we implemented the neural network pruning by lumping. In particular, we want to show how the accuracy is affected when the weights of the node to prune are not simply proportional to the weights of another node in the same layer, but are instead a linear combination of the weights of two or more other nodes. We designed and trained a simple Convolutional Neural Network (CNN) made of two convolutional blocks (32 3 × 3 filters each, both followed by a max-pooling layer); after a simple flatten we add three fully connected layers (fc), with 16, 128 and 10 nodes each, where the last one is the softmax layer. As required by our method, we use only ReLU activations, except for the output layer. We used the benchmark MNIST dataset, consisting of 70,000 28 × 28 greyscale images of handwritten digits divided into 10 classes. After a fast training of the model we focused on the second-to-last fully connected layer for our pruning method. We randomly selected a subset of nodes in this layer and then manually overwrote the weights of the rest of the nodes in the
In particular, the accuracy drops faster as the number of nodes involved in the linear combination increases.

Fig. 4. Accuracy loss when pruning nodes whose incoming weights are linear combinations of the incoming weights of two, three and four other nodes in the same layer.

In this paper we present a data-free filter pruning compression method based on the notion of lumpability. Even though we impose rigid constraints on the weights in order to obtain a reduced network, in doing so we also demonstrate that the resulting model exhibits exactly the same behaviour. Regardless of the limitations of our method, this work opens the door to a new research field where the aggregation techniques typical of performance evaluation are adopted for network compression, an area usually explored only by the machine learning community. In the future, we would like to further analyze how our algorithm works on different case studies, and in particular to test how an approximation of the linear dependence would affect the accuracy under different conditions. Another interesting experiment would be to use SVD on the fully connected layers to estimate how many vectors are linearly independent and therefore compute the reduction potentially achievable by our method, especially for quantized networks.

Acknowledgements. This work has been partially supported by the Project PRIN 2020 “Nirvana - Noninterference and Reversibility Analysis in Private Blockchains” and by the Project GNCS 2022 “Proprietà qualitative e quantitative di sistemi reversibili”.
Knowledge Enhanced Neural Networks for Relational Domains

Alessandro Daniele and Luciano Serafini

Data and Knowledge Management Research Unit, Fondazione Bruno Kessler, Trento, Italy
{daniele,serafini}@fbk.eu

Abstract. In the recent past, there has been a growing interest in Neural-Symbolic Integration frameworks, i.e., hybrid systems that integrate connectionist and symbolic approaches to obtain the best of both worlds. In this work we focus on a specific method, KENN (Knowledge Enhanced Neural Networks), a Neural-Symbolic architecture that injects prior logical knowledge into a neural network by adding, on its top, a residual layer that modifies the initial predictions according to the knowledge. Among the advantages of this strategy, there is the inclusion of clause weights, learnable parameters that represent the strength of the clauses, meaning that the model can learn the impact of each rule on the final predictions.
As a special case, if the training data contradicts a constraint, KENN learns to ignore it, making the system robust to the presence of wrong knowledge. In this paper, we propose an extension of KENN for relational data. One of the main advantages of KENN resides in its scalability, thanks to a flexible treatment of dependencies between the rules obtained by stacking multiple logical layers. We show experimentally the efficacy of this strategy. The results show that KENN is capable of increasing the performance of the underlying neural network, obtaining better or comparable accuracies with respect to two other related methods that combine learning with logic, while requiring significantly less time for learning.

In the last decade, deep learning approaches have gained a lot of interest in the AI community, becoming the state of the art in many fields, such as Computer Vision [17], Machine Translation [2], Speech Recognition [14], etc. Indeed, Neural Networks (NNs) are suited for pattern recognition, even in the presence of noisy data. They are particularly good at mapping low-level perceptions to more abstract concepts (for instance, going from images to classes). However, it is hard for a NN to reason with these high-level abstractions. Furthermore, NNs are demanding in terms of training data. On the other hand, pure logical approaches are not suited for learning from low-level features and they struggle in the presence of noise. Nevertheless, they perform well in reasoning with highly abstract concepts and learning from a small number of samples. Given these opposite strengths and weaknesses, it is not a surprise that a lot of interest has been drawn toward Neural-Symbolic (NeSy) systems. Indeed, the goal is to combine these two paradigms to obtain the best of the two worlds. Among NeSy methods there is KENN (Knowledge Enhanced Neural Network) [6], a model composed of a Neural Network enhanced with additional layers which codify logical knowledge. KENN has multiple advantages over other NeSy methods, such as its capacity to learn clause weights and the ability to impose the knowledge not only during training but also at inference time. In particular, KENN showed remarkable results on the Predicate Detection task of the Visual Relationship Detection Dataset (VRD Dataset) [19], using a manually curated prior knowledge proposed by [9], outperforming the previous state-of-the-art results, with very good performance on the Zero Shot Learning subtask [6]. Moreover, it outperformed Logic Tensor Networks [29], one of its major competitors, using the same knowledge. Despite its good empirical results, KENN has been applied only to multilabel classification tasks with no relational data. Indeed, a limitation of KENN resides in its inability to take into account binary predicates. This is because KENN expects the NN's predictions to be stored in a matrix format, where the columns represent different unary predicates and the rows their possible groundings (i.e., substitutions of the free variable of such predicates). For this reason, it is not straightforward to apply KENN to relational data, where binary predicates are available. In this paper, we propose an updated version of KENN which can deal with relational data.
Particular attention was paid to defining a scalable strategy to deal with binary predicates, obtaining good performance in terms of execution time. Indeed, KENN assumes independence between the logical rules, allowing for a scalable inclusion of the underlying knowledge. However, this assumption is often violated in real scenarios, in particular in the context of relational domains. To deal with this problem, we propose a strategy that consists of adding multiple logical layers inside the model. We provide proof of the efficacy of this strategy in a simple scenario with two logical rules. Additionally, we tested this idea on Citeseer, a dataset for Collective Classification [28], showing that the additional layers improve the performance of the model. Moreover, the experiments on this dataset provide a comparison between KENN and two other approaches: Semantic Based Regularization (SBR) [8] and Relational Neural Machines (RNM) [22].

Related Works

Many previous works attempt to combine learning models with logical knowledge. Among them there is Statistical Relational Learning (SRL), a subfield of Machine Learning that aims at applying statistical methods in domains that exhibit both uncertainty and relational structure [16]. Generally speaking, SRL deals with the knowledge either by combining logic rules with probabilistic graphical models (e.g., Markov Logic Networks [25] and Probabilistic Soft Logic (PSL) [1]) or by extending logic programming languages to handle uncertainty (e.g., ProbLog [7]). The recent achievements of deep learning methods led to a renewed interest in another line of research, called Neural-Symbolic Integration, which focuses on combining neural network architectures with logical knowledge [3]. This can be achieved in multiple ways, depending on the role of the knowledge. For instance, works like TensorLog [5], Neural Theorem Prover (NTP) [26,27], DeepProbLog [21], Neural Logic Machines [10], and NeuralLog [13] focus on the development of differentiable approaches for reasoning, which can be used in combination with neural networks. Another line of research comes from methods like ∂ILP [4,11] and Neural Logic Rule Layer (NLRL) [24]. In these cases, the goal is to learn general knowledge from the data, either from scratch or by refining an initial knowledge. Finally, and more related to our purposes, some methods focus on learning in the presence of prior knowledge, which acts as additional supervision. In this section, we focus on these types of methods, since KENN falls into this category. There are mainly two approaches for learning in the presence of prior knowledge: the first consists of treating logical rules as constraints on the predictions of the neural network. The problem is reduced to maximizing the satisfiability of the constraints and can be efficiently tackled by adding a regularization term to the loss function. The second approach is to modify the neural network by injecting the knowledge into its structure. Two notable examples of regularization approaches are Logic Tensor Network (LTN) [29] and Semantic Based Regularization (SBR) [8]. Both methods maximize the satisfaction of the constraints, expressed as FOL formulas, under a fuzzy logic semantics. A similar strategy is employed also by the Semantic Loss Function [30], but instead of relying on fuzzy logic, it optimizes the probability of the rules being true. Nevertheless, this approach is restricted to propositional logic. [12] introduces DL2.
Nonetheless, it can be used only in the context of regression tasks, where the predicates correspond to comparison constraints (e.g., =, ≠, ≤). [23] also proposes a method that regularizes the loss, but they focus on a specific task of Natural Language Processing. Their approach differs from the others because it makes use of adversarial examples to calculate the regularization term. Finally, in [15], a distillation mechanism is used to inject FOL rules: here a teacher network (which encodes the rules) is used to regularize the loss applied to a student network. Approaches based on regularization force the satisfaction of the constraints solely at training time. As a consequence, there are no guarantees that they will be satisfied at inference time as well. Instead, model-based methods inject knowledge directly into the model structure, and they are naturally capable of enforcing the knowledge at inference time. Another advantage is the possibility to learn a weight that codifies the importance of a logical rule directly from the data. This is not possible at all with methods based on regularization, since the logical formulas are directly codified inside the loss function. Among the model-based approaches there is KENN, a framework that injects knowledge on top of the NN model through an additional layer which increases the satisfaction of the constraints under a fuzzy logic semantics. Another approach is provided by Li and Srikumar, who recently proposed a method that codifies the logical constraints directly into the neural network model [18]. However, they restrict the rules to implications with exactly one consequent and they do not provide the possibility to learn clause weights, which in their system are added as hyper-parameters. Going in the same direction, [22] proposed Relational Neural Machines (RNM). RNM can also be inserted in the set of approaches that add the logic directly into the model and, to the best of our knowledge, it is the only method other than KENN which is capable of integrating logical knowledge with a neural network while learning the clause weights. RNM integrates a neural network model with a FOL reasoner. This is done in two stages: in the first one, the NN is used to calculate initial predictions for the atomic formulas; in the second stage a graphical model is used to represent a probability distribution over the set of atomic formulas. To obtain the final predictions, a Maximum a Posteriori (MAP) estimation is performed, finding the most probable assignment to the grounded atoms given the output of the NN and the set of constraints. At a high level, the RNM approach is similar to KENN, since in both cases a NN makes initial predictions and a post-elaboration step is applied to such predictions to provide the final classification. However, RNM requires solving an optimization problem at inference time and after each training step. This has the advantage of considering all the logical rules together at the same time, at the expense of an increased computational effort. On the contrary, in KENN each rule is considered separately from the others, and the second stage is directly integrated inside the model as a differentiable function that can be trained end-to-end with the NN. However, with this strategy there could be some contradictory changes when combining multiple clauses with the same predicates. We will further analyze this aspect in Sect. 3.4, proposing a strategy to handle this limitation. Moreover, in Sect. 4, we analyze this strategy empirically.
Knowledge Enhanced Neural Networks

We define the prior knowledge in terms of formulas of a function-free first-order language L. Its signature is defined by a set of domain constants C = {a1, a2, ..., am} and a set of predicates P = {P1, P2, ..., Pq}. In our setting, predicates can be unary or binary. Binary predicates can express relations among pairs of objects in the domain, e.g. Friends(a, b) states that person a is a friend of b. The prior knowledge is defined as a set of clauses: K = {c1, c2, ..., cr}. A clause c is a disjunction of literals, each of which is a possibly negated atom:

c = l_1 ∨ l_2 ∨ ... ∨ l_k,

where k is the number of literals in c and l_i is the i-th literal. We assume that there are no repeated literals. Since we are interested in representing only general knowledge, the literals do not contain any constant, only variables that are assumed to be universally quantified. If the predicate is binary, the two variables are x and y, otherwise only x. When an entire clause contains only the variable x (i.e., only unary predicates), we call it unary. Similarly, if it contains both x and y we call it binary (we restrict to the case where clauses contain at most two variables). As an example, the clause ¬Smoker(x) ∨ Cancer(x) is unary and states that all smokers also have cancer (notice that the clauses are not assumed to be hard constraints). Instead, the clause

¬Smoker(x) ∨ ¬Friends(x, y) ∨ Smoker(y)    (1)

is binary. It states that if a person x is a smoker and he is a friend of another person y, then y is also a smoker. We will use this clause extensively in the remainder of the paper, referring to it as c_SF. We define the grounding of a unary clause c, denoted by c[a], as the clause obtained by substituting the variable x with the constant a. Similarly, if c is binary, its grounding c[a, b] is obtained by substituting x and y with a and b respectively. For instance, the grounding c_SF[a, b] of the clause defined in Eq. 1 corresponds to ¬Smoker(a) ∨ ¬Friends(a, b) ∨ Smoker(b).

3.1 KENN Architecture

Suppose we have a NN for a classification task which takes as input a matrix x ∈ R^{d×n} containing n features for d samples, and returns an output y ∈ [0, 1]^{d×q} which contains the predictions for q classes corresponding to the q predicates. A prior knowledge K is also provided. It can be used by KENN to improve the predictions of the NN. Figure 1(left) shows a high-level overview of KENN, where a residual layer, called Knowledge Enhancer (KE), is inserted between the NN and the final activation function. The role of the KE is to revise the final predictions returned by the NN in order to increase the truth value of each clause c ∈ K. It does so by calculating a residue δ, a matrix that is added to the predictions of the NN.

Fig. 1. Model architecture. Left: KENN model overview. Right: Knowledge Enhancer.

The KE works in the pre-activation space, i.e. on z, and the activation function (σ) is called later. In order for KENN to work, the activation function must be monotonic and return values in the range [0, 1] (for more details on why the KE is applied on pre-activations, please refer to [6]). Since both the NN and the KE are differentiable, the entire architecture is differentiable end-to-end, making it possible to apply the back-propagation algorithm to the whole model. Figure 1(right) shows the architecture of the KE, which calculates the residual matrix δ.
More in detail, for each clause c ∈ K, the KE contains a submodule, the Clause Enhancer (CE), which proposes the changes δc to be applied to the NN's pre-activations in order to increase the satisfaction of c. Indeed, the CE computes a soft differentiable approximation of a function called t-conorm boost function (TBF). Intuitively, a TBF is a function φ : R^k → R^k_+ that proposes the changes to be applied to the pre-activations z of k truth values, such that ⊥(σ(z + φ(z))) ≥ ⊥(σ(z)), where ⊥ : [0, 1]^k → [0, 1] is a t-conorm function, used in fuzzy logic to represent the semantics of the disjunction operator (in [6], the function φ is called δ; here we changed the name to avoid confusion with its output, which is also referred to as δ). In [6] the function

φ(z)_i = 1 if i = argmax_j z_j, and φ(z)_i = 0 otherwise    (2)

has been defined and proved to be the optimal TBF for the Gödel t-conorm. KENN employs the softmax function as a continuous and differentiable approximation of φ. The δc matrices are combined linearly inside the KE to obtain the final change δ to be applied to the NN's predictions; finally, δ is summed to the initial pre-activations z and passed to the activation function:

y_{P(a)} = σ( z_{P(a)} + Σ_{c∈K : P(x)∈c} w_c · δ_{c[a],P(a)} )    (3)

where w_c is the clause weight, P(a) a grounded atom, y_{P(a)} its final prediction, and z_{P(a)} the NN's pre-activation. Finally, δ_{c[a],P(a)} is the change applied to P(a) based on the grounded clause c[a]:

δ_{c[a],P(a)} = φ(z_c)_{P(a)} if P(a) ∈ c[a],  and  δ_{c[a],P(a)} = −φ(z_c)_{¬P(a)} if ¬P(a) ∈ c[a],    (4)

where z_c are the pre-activations of the literals of c. Note that applying a linear combination of the δc matrices can be done under the assumption of independence between the clauses. When multiple clauses share common predicates, the changes proposed by KENN could only partially improve the satisfaction of the knowledge. We will further analyze this problem in Sect. 3.4. Note that, when the NN predictions satisfy the constraints, the effect of the KE is to increase the confidence of the current predictions. Therefore, if the NN predictions are correct with respect to the ground truth, the clause weights tend to increase during learning.

3.2 Extending KENN for Relational Domains

In the architecture defined so far, the groundings involve a single object and z is defined as a matrix where columns represent predicates and rows constants. Figure 2(left) introduces the representation of z: it is defined as a matrix such that the element z_ij contains the pre-activation of Pj(ai), with Pj the j-th predicate and ai the i-th constant. Note that this kind of representation is common when working with neural networks, since the columns (predicates) correspond to the labels and the rows (groundings) to the samples. An important aspect of this representation lies in the fact that each grounded atom can be found in the matrix exactly one time. This allows the computations of Eq. 3 to be parallelized, since a grounded clause involves only atoms in the same row, and each row can be managed in parallel on a GPU. This can be done only if the same atom does not appear in multiple rows, since the changes are applied independently to each row and are not aggregated together. This property always holds with unary clauses.

Fig. 2. The representation of the NN's final pre-activations. Left: unary case. Right: representation of relational data. Pre-activations are represented as integers instead of reals to simplify the figure. (Color figure online)
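Before moving to the relational case, the following NumPy sketch illustrates Eqs. 2-4 on the single grounded unary clause ¬Smoker(a) ∨ Cancer(a); the pre-activations, the clause weight and the choice of a sigmoid activation are assumptions made only for this example:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Grounded clause ¬Smoker(a) ∨ Cancer(a): signs of the literals over atoms (S(a), C(a)).
signs = np.array([-1.0, 1.0])
z = np.array([2.0, -1.0])   # NN pre-activations for Smoker(a), Cancer(a) (made up)
w_c = 1.5                   # clause weight (made up)

# Soft version of Eq. 2: the softmax of the literals' pre-activations boosts the
# literal that is easiest to satisfy; multiplying by the signs maps the boost
# back onto the atoms, as in Eq. 4.
delta_c = signs * softmax(signs * z)

# Eq. 3 restricted to a single clause: revise the pre-activations, then activate.
y = sigmoid(z + w_c * delta_c)
print("before:", sigmoid(z).round(3), "after:", y.round(3))
```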
To represent relational data, we extend KENN with an extra matrix zB, which contains the binary predicates' pre-activations. For uniformity of notation we use zU to denote the unary matrix z of the non-relational KENN. Matrix zB contains one row for every pair of objects we are interested in and a column for each binary predicate. Figure 2(right) shows this representation using the classical Smoker-Friends-Cancer example, where the domain is composed of three constants (persons) C = {a, b, c}, the unary predicates are S and C (for Smoker and Cancer), and the binary predicate is F (for Friends). The blue box shows the graph representation, with nodes and edges labelled with the pre-activations of unary and binary predicates respectively. The grey box shows the corresponding matrix representation used by KENN. Notice that it is not required that the entire graph is computed by the NN. For instance, in the experiments on Citeseer, the Cite predicate is provided directly as a feature (see Sect. 4). The architecture of KENN for relational domains is very similar to the architecture of traditional KENN of Fig. 1, with the KE substituted by a Relational KE (RKE). From a high-level perspective, the RKE differs from the traditional KE in the number of inputs and outputs. As seen before, in the relational case the pre-activations are divided into two different matrices (zU and zB) and, as a consequence, the δ matrix and the predictions y are also split into unary and binary matrices (δU and δB for the residues, yU and yB for the final predictions). The RKE has the same role as the KE in the unary case. However, it is capable of considering also binary predicates. When binary knowledge is available, additional steps are required, since the independence between objects cannot be assumed anymore. Let KU be the set of unary clauses and KB the set of binary clauses. The prior knowledge is now defined as K = KU ∪ KB. The idea is to apply the KE to these two sets separately. Equation 3 can be decomposed using the newly defined partition of the knowledge:

y_A = σ( z_A + Σ_{c∈KU[C]} w_c · δ_{c,A} + Σ_{c∈KB[C]} w_c · δ_{c,A} ),

where A is a grounded atom (i.e. P(a) or P(a, b), depending on the arity of P). We define δ_KU as the changes deriving from unary clauses:

δ_{KU, P(a)} = Σ_{c∈KU : P(x)∈c} w_c · δ_{c[a],P(a)}    (5)

Similarly, δ_KB are the changes calculated from KB. Notice that the approach defined so far can be directly applied to the unary knowledge KU to calculate δU, since the traditional KE can manage unary knowledge. Indeed, internally the RKE contains a standard KE which manages the unary clauses. We need to define a strategy to deal with binary clauses. Indeed, when a clause c contains two variables, a grounding of a unary predicate may occur in multiple groundings of c. For instance, consider the clause of Eq. 1. The two groundings c_SF[a, b] and c_SF[b, c] share a common grounded atom: Smoker(b).
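The sketch below makes this sharing explicit on the Smoker-Friends-Cancer example; the pre-activation values are invented for illustration and the arrays simply mirror the zU/zB representation described above:

```python
import numpy as np
from itertools import permutations

constants = ["a", "b", "c"]                  # persons
unary = ["S", "C"]                           # Smoker, Cancer
pairs = list(permutations(constants, 2))     # ordered pairs for binary predicates

# zU: one row per constant, one column per unary predicate (made-up pre-activations).
zU = np.array([[ 2.0, -1.0],
               [ 0.5,  0.0],
               [-2.0,  1.0]])
# zB: one row per pair, one column per binary predicate (here only F).
zB = np.ones((len(pairs), 1))

# Groundings of c_SF : ¬S(x) ∨ ¬F(x, y) ∨ S(y)
for (x, y) in pairs:
    i, j = constants.index(x), constants.index(y)
    print(f"c_SF[{x},{y}] reads zU[{i}, S], zB[F({x},{y})], zU[{j}, S]")

# Smoker(b) (row 1 of zU) is read by both c_SF[a,b] and c_SF[b,c]: the residues
# proposed by these two groundings must be summed, as done in Eq. 6 below.
```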
For this reason, when dealing with the predictions of a unary predicate in a relational domain, we need to account for such repetitions:

δ_{KB, P(a)} = Σ_{b≠a} ( Σ_{c∈KB : P(x)∈c} w_c · δ_{c[a,b],P(a)} + Σ_{c∈KB : P(y)∈c} w_c · δ_{c[b,a],P(a)} )    (6)

Putting it all together, the prediction y_{P(a)} for a grounded unary predicate P is:

y_{P(a)} = σ( z_{P(a)} + δ_{KU,P(a)} + δ_{KB,P(a)} )    (7)

The predictions for a binary predicate R can be found only in binary clauses, and any possible grounding of R can be found in only one corresponding grounding of each clause:

y_{R(a,b)} = σ( z_{R(a,b)} + δ_{KB,R(a,b)} ),  with  δ_{KB,R(a,b)} = Σ_{c∈KB : R(x,y)∈c} w_c · δ_{c[a,b],R(a,b)}    (8)

3.3 Time Complexity

Here we analyze the time complexity of an RKE layer with respect to the domain size m, the number of predicates |P|, and the number of rules |K|. We also assume the maximum number L of literals in a clause to be a small constant. Let us first analyze the time complexity for calculating the δ_{c[a]} used in Eqs. 5 and 6. Each δ_{c[a],P(a)} can be calculated in time O(1) (see Eq. 4). Computing δ_{c[a]} also requires constant time. The sum of Eq. 5 requires time O(|K|), which is the time necessary to compute δ_{KU,P(a)}. Note that neural networks are usually run on GPUs, where the computations can be parallelized. Assuming enough parallel processes (|K| in this case), a sum can be performed in time logarithmic with respect to the number of addends, and the complexity for δ_{KU,P(a)} becomes O(log(|K|)). Finally, Eq. 5 needs to be calculated for all the grounded unary predicates P(a), for a total time of O(m · |P| · |K|) in a single process, and O(log(|K|)) with multiple parallel processes (each grounded atom can be considered independently from the others). With a similar reasoning, we found the time complexity of Eqs. 6 and 8 to be O(m² · |P| · |K|). Note that with enough parallel processes we can compute all the deltas in O(log(m) + log(|K|)).

3.4 Treatment of Dependencies Among the Rules

In the previous section we showed the efficacy of the method in terms of execution time, which can be achieved thanks to the assumption of independence. However, when this assumption is violated, KENN does not provide any guarantees on the satisfaction of the knowledge. As an example, suppose that we have two grounded clauses c1 : ¬A ∨ B and c2 : ¬B ∨ C with their respective clause enhancers CE1 and CE2, where A, B and C are grounded unary or binary predicates. The atom B appears in both clauses with opposite signs. Since the CE increases the value of the highest literal (see Eq. 2), if A < B and C < ¬B (with an abuse of notation, we use atom symbols to refer also to their truth values), then CE1 increases B and CE2 decreases it. As a consequence, the satisfaction of only one between c1 and c2 is increased. The satisfaction of the entailed clause ¬A ∨ C is also not improved. For any grounded atom G, let us define G(0) as its initial prediction and G(i) as the prediction of the i-th KE layer. Moreover, suppose that all KEs share the same clause weights w1 and w2 (for c1 and c2 respectively). From Eqs. 3 and 4 we can derive B(1) = B(0) + w1 − w2 and ¬B(1) = ¬B(0) + w2 − w1. If w1 ≥ w2, then A(1) = A(0) < B(0) ≤ B(1). As a consequence, the first rule will increase B again at the next KE layer. On the other hand, the value of ¬B is reduced, which means that there is an increased chance that C(1) > ¬B(1), which would solve the problem since CE2 would increase C instead of ¬B.
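The following toy iteration reproduces this dynamics with the hard TBF of Eq. 2; the initial pre-activations and the clause weights are made up, and the atoms A, B, C are treated as free-standing scalars rather than entries of the z matrices:

```python
import numpy as np

# Toy pre-activations for grounded atoms A, B, C and the clauses c1: ¬A ∨ B, c2: ¬B ∨ C.
z = {"A": 0.5, "B": 1.0, "C": -2.0}
w1, w2 = 1.0, 0.8      # made-up clause weights, shared by every stacked KE layer

def boost(literals):
    """Hard TBF of Eq. 2: boost only the literal with the highest pre-activation."""
    out = np.zeros(len(literals))
    out[int(np.argmax(literals))] = 1.0
    return out

for layer in range(1, 8):
    d1 = boost([-z["A"], z["B"]])        # CE1 reads the literals ¬A, B
    d2 = boost([-z["B"], z["C"]])        # CE2 reads the literals ¬B, C
    z["A"] += -w1 * d1[0]
    z["B"] += w1 * d1[1] - w2 * d2[0]
    z["C"] += w2 * d2[1]
    print(f"layer {layer}: " + "  ".join(f"{k}={v:+.2f}" for k, v in z.items()))

# After a few layers ¬B drops below C, so CE2 switches to boosting C, and the
# satisfaction of both clauses (and of the entailed ¬A ∨ C) starts to improve.
```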
Notice that, since the weights are the same at each level, it is always true that ¬B(i+1) ≤ ¬B(i), meaning that with enough KE layers the satisfaction of both clauses will be increased (and, as a consequence, also that of their entailments). The problem analyzed in this section becomes even more relevant in relational domains, since in these contexts an atom can be shared not only by multiple clauses but also by different groundings of the same clause (for instance, in c_SF[a, b] and c_SF[b, c]). For this reason, in these contexts stacking multiple RKEs is recommended (more details in Sect. 4.2).

Evaluation of the Model

In this section, the relational extension of KENN is tested on the task of Collective Classification: given a graph, we are interested in finding a classification for its nodes using both the features of the nodes (the objects) and the information coming from the edges of the graph (relations between objects) [28]. In Collective Classification, there are two different learning tasks: inductive and transductive learning. In inductive learning, there are two separate graphs, one for training and the other for testing. On the contrary, in transductive learning, there is only one graph that contains nodes both for training and testing. In other words, in inductive learning there are no edges between training and test nodes, while in transductive learning there are. The tests have been performed on both tasks to analyze the behavior of KENN in the context of relational domains. In particular, we tested KENN with a varying number of KE layers to validate the proposal of Sect. 3.4 (the source code of the experiments is available at https://github.com/rmazzier/KENN-Citeseer-Experiments).

4.1 Experimental Setup

We followed the evaluation methodology of [22], where the experiments have been carried out on the Citeseer dataset [20] using SBR and RNM. The Citeseer dataset used in the evaluation is a citation network: the graph's nodes represent documents and the edges represent citations. The nodes' features are bag-of-words vectors, where an entry is zero if the corresponding word of the dictionary is absent from the document, and one if it is present. The classes to be predicted represent possible topics for a document. The dataset contains 3312 nodes that must be classified into 6 different classes: AG, AI, DB, IR, ML, and HCI. The classification is obtained from the 3703 features of the nodes, with the addition of the information coming from the citations (4732 edges). We use the same NN and knowledge as in [22], allowing for the comparison with SBR and RNM. The NN is a dense network with 3 hidden layers, each with 50 hidden nodes and ReLU activation functions. The knowledge consists of six rules obtained by substituting the topic T in ¬T(x) ∨ ¬Cite(x, y) ∨ T(y) with all the classes, codifying the idea that papers cite works of the same topic. Tests have been conducted by selecting 10%, 25%, 50%, 75%, and 90% of the nodes for training, to evaluate the efficacy of the three methods as the training set dimension varies. For each of these values, the training and evaluation were performed 100 times, each with a different split of the dataset. At each run the training set is created by selecting random nodes of the graph, with the constraint that the dataset must be balanced.

4.2 Results

Figure 3 shows the test accuracies obtained by KENN while increasing the number of KE layers, starting from 0 (corresponding to the NN accuracy) up to 6.
Fig. 3. Accuracies of KENN as the number of KE layers varies. (Color figure online)

Note that, for each line in the figure, there is a surrounding band corresponding to a 99% confidence interval. To calculate the intervals, we assumed the distribution of the improvements obtained by the injection of the logical rules to be a normal distribution (see the figures in Appendix B and C). We also computed the p-values for each setting, assuming as null hypothesis that the distribution of accuracies of the NN is the same as that of KENN. Since the number of runs is quite high, the resulting p-values are very small. For this reason, we can safely reject the null hypothesis, and we are very confident that the improvements given by KENN do not depend on the random initialization of the models' parameters or on the specific choices of the splits of the dataset. More in detail, we found p-values in the range from 8.2e−42 to 1.6e−09 in the inductive case, and from 53e−72 to 2.1e−23 for the transductive one. The only exception is with 90% of the samples in the inductive scenario, where the p-value is 0.35. This is because the improvements over the NN are very small. Indeed, in both learning paradigms, the effect of the knowledge is reduced when the amount of available data is larger. This behavior is consistent with the simple intuition that, when the training data is scarce, the usage of knowledge should bring higher benefits. A more important result coming from these experiments is the fact that in all cases adding a new KE layer does not reduce the test accuracy. On the contrary, most of the time the metric increases until a certain number of layers is reached, and after that the accuracy stabilizes. This behavior is in line with the discussion of Sect. 3.4 and confirms the efficacy of the proposed strategy to deal with the violation of the independence assumption. Finally, Fig. 3 also provides a measure of the amount of information carried by the knowledge. For instance, consider the blue and yellow lines, corresponding to a training set with 25% and 50% of the samples, respectively. In the inductive scenario, the accuracy obtained with 25% of the data and the addition of the knowledge is almost the same as that of the standard NN with 50% of the data (even higher in the transductive scenario). In this case, adding the knowledge has the same effect as doubling the training data! Indeed, one of the main motivations behind Neural-Symbolic Integration consists in reducing the required amount of training samples, since collecting labeled data is costly in practice.

Table 1. Improvements in terms of accuracy on inductive and transductive learning.

4.3 Comparison with Other NeSy Frameworks

Table 1 shows a comparison of KENN with SBR and RNM. We used the results of KENN with 3 KE layers, since more layers do not provide a significant advantage (see Sect. 4.2). As we can see from the table, in the inductive case SBR produces much lower improvements compared to the other two methods. Note that these results are in line with previous results obtained on the VRD dataset, where another regularization approach (LTN) was compared with KENN [6].
Indeed, the results obtained on both VRD and Citeseer suggest better performance of model-based approaches as compared to the ones based on regularization. Note that methods based on regularization of the loss do not impose the knowledge at inference time. In the transductive scenario the situation is different, and SBR behaves similarly to the other two. Indeed, in this case citations between training and test nodes are available and there is no distinction between training and inference. Finally, the results suggest that KENN is particularly useful when the available training data is scarce. On the contrary, when data is abundant, our results tend to degrade faster than RNM and SBR. However, the greatest advantage of KENN over other architectures is its scalability. This is confirmed by the comparison of the execution times of the three methods: we found KENN to be very fast compared to the other two methods, with an average of 7.96 s required for a single run, compared to the NN which requires 2.46 s (an average of 1.83 s for each KE layer). A run of SBR took 87.36 s (almost 11 times slower than KENN), while RNM required 215.69 s per run (27 times slower). All the experiments have been run on the same architecture, an NVIDIA Tesla V100.

Conclusions

KENN is a NeSy architecture that injects prior logical knowledge inside a neural network by stacking a residual layer on its top. In [6], it proved to be able to effectively inject knowledge in the context of multi-label classification tasks. In this work, we extended KENN to relational domains, where the presence of both unary and binary predicates does not allow for the simple tabular representation of the data used in the previous version of the framework. Moreover, we propose a strategy to deal with the violation of the independence assumption made by KENN. The experiments on Citeseer show the effectiveness of this strategy, obtaining statistically significant improvements over the NN performance, meaning that KENN can successfully inject knowledge even in the presence of relational data. Finally, KENN provided quality results also in comparison with two other NeSy frameworks. In particular, the large difference in performance between KENN/RNM and SBR provides additional evidence in support of model-based approaches in comparison to regularization-based ones, with KENN being the best option in terms of scalability. However, the scalability of KENN largely depends on the fixed structure of the knowledge, with only universally quantified formulas allowed. This is a limitation of KENN in comparison with other frameworks, like LTN, which support the usage of existential quantifiers.

Appendix A: Relational KENN Architecture

See Fig. 4.

Fig. 4. KENN for relational domains: (a) the architecture of KENN. A graph (blue box) is represented in terms of the two matrices zU and zB and given as input to the Relational KE (RKE). Multiple RKEs are stacked together and the activation function is called; (b) the architecture of the RKE module: the unary knowledge is enforced directly by the KEU; the binary knowledge is enforced by the KEB on the matrix zM, which is created by joining zU with zB in the pre-elaboration step. zM contains multiple instances of the same atoms, for instance S[a] (red cells). As a consequence, multiple residues are returned for a single atom, and such values are summed in the post-elaboration (blue cells). Pre- and post-elaboration steps are efficiently implemented using the TensorFlow gather and scatter_nd functions. (Color figure online)
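As a rough illustration of that pre/post elaboration (not the authors' implementation: the toy graph, the pre-activations and the restriction to the single clause c_SF are assumptions), tf.gather can build zM and tf.scatter_nd, which sums updates targeting the same index, can aggregate the residues of repeated atoms:

```python
import tensorflow as tf

# Toy graph: 3 nodes with unary pre-activations [S, C], 2 pairs with F pre-activations.
zU = tf.constant([[2.0, -1.0], [0.5, 0.0], [-2.0, 1.0]])   # [nodes, unary preds]
edges = tf.constant([[0, 1], [1, 2]])                       # pairs (a,b), (b,c)
zB = tf.constant([[1.0], [1.0]])                            # [pairs, binary preds]

# Pre-elaboration: one row per pair, joining the unary rows of x and y with zB.
zM = tf.concat([tf.gather(zU, edges[:, 0]),
                tf.gather(zU, edges[:, 1]),
                zB], axis=1)                                # columns: S(x) C(x) S(y) C(y) F(x,y)

# Residues of clause ¬S(x) ∨ ¬F(x,y) ∨ S(y) via the soft TBF (softmax), as in Eqs. 2-4.
signs = tf.constant([-1.0, -1.0, 1.0])
literals = tf.stack([zM[:, 0], zM[:, 4], zM[:, 2]], axis=1)
delta = signs * tf.nn.softmax(signs * literals, axis=1)     # per-pair residues on S(x), F(x,y), S(y)

# Post-elaboration: scatter the unary residues back onto zU; repeated atoms
# (e.g. S(b), touched by both pairs) are summed automatically by scatter_nd.
col_S = tf.zeros_like(edges[:, 0])                          # column index of predicate S
idx_x = tf.stack([edges[:, 0], col_S], axis=1)
idx_y = tf.stack([edges[:, 1], col_S], axis=1)
deltaU = (tf.scatter_nd(idx_x, delta[:, 0], tf.shape(zU)) +
          tf.scatter_nd(idx_y, delta[:, 2], tf.shape(zU)))
print(deltaU)                                               # the residue on F would go to zB
```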
Appendix B: Results Distribution - Inductive Learning

See Fig. 5.

Fig. 5. Left: distributions of accuracies achieved by the NN and KENN (3 KE layers) on 100 runs of Inductive Learning; Right: distributions of the improvements in accuracy obtained by the injection of the logical rules.

Appendix C: Results Distribution - Transductive Learning

See Fig. 6.

Fig. 6. Left: distributions of accuracies achieved by the NN and KENN (3 KE layers) on 100 runs of Transductive Learning; Right: distributions of the improvements in accuracy obtained by the injection of the logical rules.

Appendix D: Comparison with SBR and RNM

Test Accuracy. See Fig. 7.

Fig. 7. Comparison between KENN (3 KE layers), SBR and RNM in terms of accuracy improvements over the NN.

Execution Time. See Fig. 8.

Fig. 8. Execution time in logarithmic scale of the different methods. A bar labelled with number i corresponds to KENN with i KE layers (0 represents the NN without logic).

References

1. Bach, S.H., Broecheler, M., Huang, B., Getoor, L.: Hinge-loss Markov random fields and probabilistic soft logic. J. Mach. Learn. Res. 18(109), 1–67 (2017)
2. Bahdanau, D., Cho, K., Bengio, Y.: Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473 (2014)
3. Besold, T.R., et al.: Neural-symbolic learning and reasoning: a survey and interpretation. CoRR, abs/1711.03902 (2017). http://arxiv.org/abs/1711.03902
4. Campero, A., Pareja, A., Klinger, T., Tenenbaum, J., Riedel, S.: Logical rule induction and theory learning using neural theorem proving. arXiv preprint arXiv:1809.02193 (2018)
5. Cohen, W.W.: TensorLog: a differentiable deductive database. arXiv preprint arXiv:1605.06523 (2016)
6. Daniele, A., Serafini, L.: Knowledge enhanced neural networks. In: Nayak, A.C., Sharma, A. (eds.) PRICAI 2019. LNCS (LNAI), vol. 11670, pp. 542–554. Springer, Cham (2019). ISBN 978-3-030-29908-8. https://doi.org/10.1007/978-3-030-29908-8_43
7. De Raedt, L., Kimmig, A., Toivonen, H.: ProbLog: a probabilistic Prolog and its application in link discovery. In: IJCAI, Hyderabad, vol. 7, pp. 2462–2467 (2007)
8. Diligenti, M., Gori, M., Saccà, C.: Semantic-based regularization for learning and inference. Artif. Intell. 244, 143–165 (2017)
9. Donadello, I.: Semantic image interpretation - integration of numerical data and logical knowledge for cognitive vision. Ph.D. thesis, Trento Univ., Italy (2018)
10. Dong, H., Mao, J., Lin, T., Wang, C., Li, L., Zhou, D.: Neural logic machines. arXiv preprint arXiv:1904.11694 (2019)
11. Evans, R., Grefenstette, E.: Learning explanatory rules from noisy data. J. Artif. Intell. Res. 61, 1–64 (2018)
12. Fischer, M., Balunovic, M., Drachsler-Cohen, D., Gehr, T., Zhang, C., Vechev, M.: DL2: training and querying neural networks with logic. In: International Conference on Machine Learning, pp. 1931–1941 (2019)
13. Guimarães, V., Costa, V.S.: NeuralLog: a neural logic language. CoRR, abs/2105.01442 (2021). http://arxiv.org/abs/2105.01442
14. Hinton, G., et al.: Deep neural networks for acoustic modeling in speech recognition. IEEE Sig. Process. Mag. 29, 82–97 (2012)
15. Hu, Z., Ma, X., Liu, Z., Hovy, E., Xing, E.: Harnessing deep neural networks with logic rules. In: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, Berlin, Germany, 7–12 August 2016, vol. 1.
The Association for Computational Linguistics (2016). ISBN 978-1-945626-00-5. http://aclweb.org/anthology/P/P16/P16-1228.pdf
16. Koller, D., et al.: Introduction to Statistical Relational Learning. MIT Press, Cambridge (2007)
17. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: Proceedings of the 25th International Conference on Neural Information Processing Systems, NIPS 2012, USA, vol. 1, pp. 1097–1105. Curran Associates Inc. (2012). http://dl.acm.org/citation.cfm?id=2999134.2999257
18. Li, T., Srikumar, V.: Augmenting neural networks with first-order logic. In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, pp. 292–302. Association for Computational Linguistics, July 2019. https://doi.org/10.18653/v1/P19-1028. https://www.aclweb.org/anthology/P19-1028
19. Lu, C., Krishna, R., Bernstein, M., Fei-Fei, L.: Visual relationship detection with language priors. In: Leibe, B., Matas, J., Sebe, N., Welling, M. (eds.) ECCV 2016. LNCS, vol. 9905, pp. 852–869. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46448-0_51
20. Lu, Q., Getoor, L.: Link-based classification. In: Proceedings of the Twentieth International Conference on Machine Learning, ICML 2003, pp. 496–503. AAAI Press (2003). ISBN 1577351894
21. Manhaeve, R., Dumancic, S., Kimmig, A., Demeester, T., De Raedt, L.: DeepProbLog: neural probabilistic logic programming. In: Advances in Neural Information Processing Systems, pp. 3749–3759 (2018)
22. Marra, G., Diligenti, M., Giannini, F., Gori, M., Maggini, M.: Relational neural machines. arXiv preprint arXiv:2002.02193 (2020)
23. Minervini, P., Riedel, S.: Adversarially regularising neural NLI models to integrate logical background knowledge. arXiv preprint arXiv:1808.08609 (2018)
24. Reimann, J.N., Schwung, A.: Neural logic rule layers. arXiv preprint arXiv:1907.00878 (2019)
25. Richardson, M., Domingos, P.: Markov logic networks. Mach. Learn. 62(1–2), 107–136 (2006). ISSN 0885-6125. https://doi.org/10.1007/s10994-006-5833-1
26. Rocktäschel, T., Riedel, S.: Learning knowledge base inference with neural theorem provers. In: Proceedings of the 5th Workshop on Automated Knowledge Base Construction, pp. 45–50 (2016)
27. Rocktäschel, T., Riedel, S.: End-to-end differentiable proving. In: Advances in Neural Information Processing Systems, pp. 3788–3800 (2017)
28. Sen, P., Namata, G.M., Bilgic, M., Getoor, L., Gallagher, B., Eliassi-Rad, T.: Collective classification in network data. AI Mag. 29(3), 93–106 (2008)
29. Serafini, L., d'Avila Garcez, A.: Logic tensor networks: deep learning and logical reasoning from data and knowledge. CoRR, abs/1606.04422 (2016)
30. Xu, J., Zhang, Z., Friedman, T., Liang, Y., Van den Broeck, G.: A semantic loss function for deep learning with symbolic knowledge. In: Dy, J., Krause, A. (eds.) Proceedings of the 35th International Conference on Machine Learning, Volume 80 of Proceedings of Machine Learning Research, Stockholmsmässan, Stockholm, Sweden, 10–15 July 2018, pp. 5502–5511. PMLR (2018). http://proceedings.mlr.press/v80/xu18h.html

Logic Tensor Networks for Top-N Recommendation

Tommaso Carraro, Alessandro Daniele, Fabio Aiolli, and Luciano Serafini

1 Department of Mathematics, University of Padova, Padova, Italy
2 Data and Knowledge Management Research Unit, Fondazione Bruno Kessler (FBK), Trento, Italy

Abstract.
Despite being studied for more than twenty years, state-of-the-art recommendation systems still suffer from important drawbacks which limit their usage in real-world scenarios. Among the well-known issues of recommender systems, there are data sparsity and the cold-start problem. These limitations can be addressed by providing some background knowledge to the model to compensate for the scarcity of data. Following this intuition, we propose to use Logic Tensor Networks (LTN) to tackle the top-n item recommendation problem. In particular, we show how LTN can be used to easily and effectively inject commonsense recommendation knowledge inside a recommender system. We evaluate our method on MindReader, a knowledge graph-based movie recommendation dataset containing plentiful side information. In particular, we perform an experiment to show how the benefits of the knowledge increase with the sparsity of the dataset. Finally, a comparison with a standard Matrix Factorization approach reveals that our model is able to reach and, in many cases, outperform state-of-the-art performance.

Keywords: Recommender systems · Top-n recommendation · Logic tensor networks · Neural-symbolic integration

Recommender system (RS) technologies are nowadays an essential component for e-services (e.g., Amazon, Netflix, Spotify). Generally speaking, an RS aims at providing suggestions for items (e.g., movies, songs, news) that are most likely of interest to a particular user [25]. Since the first appearance of RSs in the early 2000s, Collaborative Filtering (CF) [1,16,28] has established itself as the standard recommendation approach. In particular, Latent Factor models, and especially Matrix Factorization (MF), have dominated the CF scene [14,20,22] for years, and this has been further emphasized with the rise of deep learning [7,13,19,26,27]. Despite their success, state-of-the-art models still suffer from important drawbacks, which limit their applicability in real-world scenarios. Among the most
These two branches of Artificial Intelligence study approaches for the integration of some form of prior knowledge, usually expressed through First-Order Logic (FOL), with statistical models. This integration has been shown to be beneficial in addressing data scarcity [11]. In this paper, we propose to use a Logic Tensor Network (LTN) [2] to inject commonsense knowledge into a standard Matrix Factorization model for the top-n item recommendation task. LTN is a NeSy framework that allows using logical formulas to instruct the learning of a neural model. We propose to use the MindReader dataset [5] to test our model. This dataset includes a variety of information, such as users' tastes across movie genres, actors, and directors. In this work, we show how LTN can naturally and effectively exploit all this information to improve the generalization capabilities of the MF model. In addition, an experiment that drastically reduces the density of the training ratings reveals that our model can effectively mitigate the sparsity of data, outperforming the standard MF model, especially in the most challenging scenarios.

Related Works

The integration of logical reasoning and learning in RSs is still in its early stages. Among the NeSy approaches for RSs, the most prominent is Neural Collaborative Reasoning (NCR) [10]. In this work, the recommendation problem is formalized as a logical reasoning problem. In particular, the user's ratings are represented using logical variables; then, logical operators are used to construct formulas that express facts about them. Afterward, NCR maps the variables to logical embeddings and the operators to neural networks which act on those embeddings. By doing so, each logical expression can be equivalently organized as a neural network, so that logical reasoning and prediction can be conducted in a continuous space. In [9], the idea of NCR is applied to knowledge graphs for RSs, while [29] uses a NeSy approach to tackle the explainability of RSs. The seminal approach that successfully applied SRL to RSs was HyPER [17], which is based on Probabilistic Soft Logic (PSL) [15]. In particular, HyPER exploits the expressiveness of FOL to encode knowledge from a wide range of information sources, such as multiple user and item similarity measures, content, and social information. Then, Hinge-Loss Markov Random Fields are used to learn how to balance the different information types. HyPER is highly related to our work since the logical formulas that we use resemble the ones used in HyPER. After HyPER, other SRL approaches have been proposed for RSs [8,12].

This section provides useful notation and terminology used in the remainder of the paper.

3.1 Notation

Bold notation is used to differentiate between vectors, e.g., $\mathbf{x} = [3.2, 2.1]$, and scalars, e.g., $x = 5$. Matrices and tensors are denoted with upper case bold notation, e.g., $\mathbf{X}$. Then, $\mathbf{X}_i$ is used to denote the $i$-th row of $\mathbf{X}$, while $\mathbf{X}_{i,j}$ denotes the entry at row $i$ and column $j$. We refer to the set of users of an RS with $\mathcal{U}$, where $|\mathcal{U}| = n$. Similarly, the set of items is referred to as $\mathcal{I}$, such that $|\mathcal{I}| = m$. We use $D$ to denote a dataset. $D$ is defined as a set of $N$ triples $D = \{(u, i, r)^{(j)}\}_{j=1}^{N}$, where $u \in \mathcal{U}$, $i \in \mathcal{I}$, and $r \in \mathbb{N}$ is a rating. We assume that a user $u$ cannot give more than one rating to an item $i$; formally, there are no $r_1, r_2 \in \mathbb{N}$ with $r_1 \neq r_2$ such that $\{(u, i, r_1)\} \cup \{(u, i, r_2)\} \subseteq D$.
$D$ can be reorganized into the so-called user-item matrix $\mathbf{R} \in \mathbb{N}^{n \times m}$, where users are on the rows and items on the columns, such that $\mathbf{R}_{u,i} = r$ if $(u, i, r) \in D$, and $0$ otherwise.

3.2 Matrix Factorization

Matrix Factorization (MF) is a Latent Factor Model that aims at factorizing the user-item matrix $\mathbf{R}$ into the product of two lower-dimensional rectangular matrices, denoted as $\mathbf{U}$ and $\mathbf{I}$. $\mathbf{U} \in \mathbb{R}^{n \times k}$ and $\mathbf{I} \in \mathbb{R}^{m \times k}$ are matrices containing the users' and items' latent factors, respectively, where $k$ is the number of latent factors. The objective of MF is to find $\mathbf{U}$ and $\mathbf{I}$ such that $\mathbf{R} \approx \mathbf{U} \cdot \mathbf{I}^{\top}$. An effective way to learn the latent factors is gradient-descent optimization. Given the dataset $D$, an MF model seeks to minimize the following loss function:

$$L(\theta) = \frac{1}{N} \sum_{(u,i,r) \in D} \|\tilde{r} - r\|^2 + \lambda \|\theta\|^2 \qquad (1)$$

where $\tilde{r} = \mathbf{U}_u \cdot \mathbf{I}_i^{\top}$ and $\theta = \{\mathbf{U}, \mathbf{I}\}$. The first term of Eq. (1) is the Mean Squared Error (MSE) between the predicted and target ratings, while the second one is an L2 regularization term. $\lambda$ is a hyper-parameter that sets the strength of the regularization.

3.3 Logic Tensor Networks

Logic Tensor Networks (LTN) [2] is a Neural-Symbolic framework that enables an effective integration of deep learning and logical reasoning. It allows one to define a knowledge base composed of a set of logical axioms and to use them as the objective of a neural model. To define the knowledge base, LTN uses a specific first-order language, called Real Logic, which forms the basis of the framework. It is fully differentiable and has a concrete semantics that allows mapping every symbolic expression into the domain of real numbers. Thanks to Real Logic, LTN can convert logical formulas into computational graphs that enable gradient-based optimization based on fuzzy logic semantics. Real Logic is defined on a first-order language $\mathcal{L}$ with a signature that contains a set $\mathcal{C}$ of constant symbols, a set $\mathcal{X}$ of variable symbols, a set $\mathcal{F}$ of functional symbols, and a set $\mathcal{P}$ of predicate symbols. A term is constructed recursively from constants, variables, and functional symbols. An expression formed by applying a predicate symbol to some term(s) is called an atomic formula. Complex formulas are constructed recursively using connectives (i.e., $\neg, \wedge, \vee, \Rightarrow, \leftrightarrow$) and quantifiers (i.e., $\forall, \exists$). To emphasize the fact that symbols are mapped onto real-valued features, we use the term grounding, denoted by $\mathcal{G}$ (note that this differs from the common use of the term in logic, where grounding indicates the operation of replacing the variables of a term or formula with constants or with terms containing no variables). In particular, each individual (e.g., a user) is grounded as a tensor of real features (e.g., the user's demographic information), functions are grounded as real functions, and predicates as real functions that project onto a value in the interval $[0, 1]$. A variable $x$ is grounded to a sequence of $n_x$ individuals from a domain, with $n_x \in \mathbb{N}^+$. As a consequence, a term $t(x)$ or a formula $P(x)$, constructed recursively with a free variable $x$, will be grounded to a sequence of $n_x$ values too. Afterward, connectives are grounded using fuzzy semantics, while quantifiers are grounded using special aggregation functions. In this paper, we use the product configuration, which is better suited for gradient-based optimization [18]. Specifically, conjunctions are grounded using the product t-norm $T_{prod}$, negations using the standard fuzzy negation $N_S$, implications using the Reichenbach implication $I_R$, and the universal quantifier using the generalized mean w.r.t. the error values $ME_p$. The other connectives and quantifiers are not used in this paper, hence they are not reported.
$$T_{prod}(u, v) = u \cdot v, \quad u, v \in [0, 1]$$
$$I_R(u, v) = 1 - u + u \cdot v, \quad u, v \in [0, 1]$$
$$N_S(u) = 1 - u, \quad u \in [0, 1]$$
$$ME_p(u_1, \ldots, u_n) = 1 - \left( \frac{1}{n} \sum_{i=1}^{n} (1 - u_i)^p \right)^{\frac{1}{p}}, \quad p \geq 1, \; u_1, \ldots, u_n \in [0, 1]$$

Connective operators are applied element-wise to the tensors in input, while aggregators aggregate the dimension of the input tensor that corresponds to the quantified variable. Real Logic also provides a special type of quantification, called diagonal quantification, denoted as $Diag(x_1, \ldots, x_n)$. It applies only to variables that have the same number of individuals (i.e., $n_{x_1} = n_{x_2} = \cdots = n_{x_n}$) and allows one to quantify over specific tuples of individuals, such that the $i$-th tuple contains the $i$-th individual of each of the variables in the argument of $Diag$. An intuition about how these operations work in practice is given in Sect. 3.4.

Given a Real Logic knowledge base $\mathcal{K} = \{\phi_1, \ldots, \phi_n\}$, where $\phi_1, \ldots, \phi_n$ are closed formulas, LTN allows learning the grounding of the constants, functions, and predicates appearing in them. In particular, if constants are grounded as embeddings, and functions/predicates onto neural networks, their grounding $\mathcal{G}$ depends on some learnable parameters $\theta$. We denote a parametric grounding as $\mathcal{G}(\cdot \mid \theta)$. In LTN, the learning of parametric groundings is obtained by finding parameters $\theta^*$ that maximize the satisfaction of $\mathcal{K}$:

$$\theta^* = \operatorname{argmax}_{\theta} \; \operatorname{SatAgg}_{\phi \in \mathcal{K}} \; \mathcal{G}(\phi \mid \theta) \qquad (2)$$

where $\operatorname{SatAgg} : [0, 1]^* \rightarrow [0, 1]$ is a formula aggregating operator, often defined using $ME_p$. Because Real Logic grounds expressions in real and continuous domains, LTN attaches gradients to every sub-expression and consequently learns through gradient-descent optimization.

3.4 Intuition of Real Logic Grounding

In Real Logic, differently from first-order logic, a variable $x$ is grounded as a sequence of $n_x$ individuals (i.e., tensors) from a domain, with $n_x \in \mathbb{N}^+$. As a direct consequence, a term $t(x)$ or a formula $P(x)$, with a free variable $x$, is grounded to a sequence of $n_x$ values too. For example, $P(x)$ returns a vector in $[0, 1]^{n_x}$, namely $\langle P(x_i) \rangle_{i=1}^{n_x}$, where $x_i$ is the $i$-th individual of $x$. Similarly, $t(y)$ returns a matrix in $\mathbb{R}^{n_y \times z}$, assuming that $t$ maps to individuals in $\mathbb{R}^z$. This formalization is intuitively extended to terms and formulas with arity greater than one. In such cases, Real Logic organizes the output tensor in such a way that it has one dimension for each free variable involved in the expression. For instance, $t_2(x, y)$ returns a tensor in $\mathbb{R}^{n_x \times n_y \times z}$, assuming that $t_2$ maps to individuals in $\mathbb{R}^z$. In particular, at position $(i, j)$ there is the evaluation of $t_2(x_i, y_j)$, where $x_i$ denotes the $i$-th individual of $x$ and $y_j$ the $j$-th individual of $y$. Similarly, $P_2(x, y)$ returns a tensor in $[0, 1]^{n_x \times n_y}$, where at position $(i, j)$ there is the evaluation of $P_2(x_i, y_j)$. The connective operators are applied element-wise to the input tensors. For instance, $\neg P_2(x, y)$ returns a tensor in $[0, 1]^{n_x \times n_y}$, where at position $(i, j)$ there is the evaluation of $\neg P_2(x_i, y_j)$, namely $N_S$ (i.e., $\neg$) is applied to each truth value in the tensor $P_2(x, y) \in [0, 1]^{n_x \times n_y}$. For binary connectives, the behavior is similar. For instance, let $Q$ be a predicate symbol and $u$ a variable.
Then, $P_2(x, y) \wedge Q(x, u)$ returns a tensor in $[0, 1]^{n_x \times n_y \times n_u}$, where at position $(i, j, k)$ there is the evaluation of the formula on the $i$-th individual of $x$, the $j$-th individual of $y$, and the $k$-th individual of $u$.

The quantifiers aggregate the dimension that corresponds to the quantified variable. For instance, $\forall x\, P_2(x, y)$ returns a tensor in $[0, 1]^{n_y}$, namely the aggregation is performed across the dimension of $x$. Since $y$ is the only free variable remaining in the expression, the output has one single dimension, corresponding to the dimension of $y$. Specifically, the framework computes $P_2(x, y) \in [0, 1]^{n_x \times n_y}$ first, then it aggregates the dimension corresponding to $x$. Similarly, $\forall (x, y)\, P_2(x, y)$ returns a scalar in $[0, 1]$, namely the aggregation is performed across the dimensions of both variables $x$ and $y$. In the case of diagonal quantification, the framework behaves differently. For instance, $\forall Diag(w, v)\, P_2(w, v)$, where $w$ and $v$ are two variables with the same number of individuals $n_w = n_v$, returns a scalar in $[0, 1]$, which is the result of the aggregation of $n_w$ truth values, namely $P_2(w_1, v_1), P_2(w_2, v_2), \ldots, P_2(w_{n_w}, v_{n_v})$. Without diagonal quantification (i.e., $\forall (w, v)\, P_2(w, v)$), the framework performs an aggregation across the dimensions of both variables, involving $n_w^2$ values, namely $P_2(w_1, v_1), P_2(w_1, v_2), \ldots, P_2(w_{n_w}, v_{n_v - 1}), P_2(w_{n_w}, v_{n_v})$. Intuitively, $\forall (w, v)$ aggregates all the values in $[0, 1]^{n_w \times n_v}$, while $\forall Diag(w, v)$ aggregates only the values on the diagonal.

Our approach uses a Logic Tensor Network to train a basic Matrix Factorization (MF) model for the top-n item recommendation task. The LTN is trained using a Real Logic knowledge base containing commonsense knowledge about the movie recommendation domain. This section formalizes the knowledge base used by our model, how the symbols appearing in it are grounded in the real field, and how the learning of the LTN takes place.

4.1 Knowledge Base

The Real Logic knowledge base that our model seeks to maximally satisfy is composed of the following axioms:

$$\phi_1: \forall Diag(user, movie, rating)\; \big(Sim(Likes(user, movie), rating)\big) \qquad (3)$$
$$\phi_2: \forall (user, movie, genre)\; \big(\neg LikesGenre(user, genre) \wedge HasGenre(movie, genre) \Rightarrow Sim(Likes(user, movie), rating^-)\big) \qquad (4)$$

where $user$, $movie$, $rating$, and $genre$ are variable symbols denoting the users of the system, the items of the system, the ratings given by the users to the items, and the genres of the movies, respectively. $rating^-$ is a constant symbol denoting the negative rating. $Likes(u, m)$ is a functional symbol returning the prediction for the rating given by user $u$ to movie $m$. $Sim(r_1, r_2)$ is a predicate symbol measuring the similarity between two ratings, $r_1$ and $r_2$. $LikesGenre(u, g)$ is a predicate symbol denoting whether the user $u$ likes the genre $g$. $HasGenre(m, g)$ is a predicate symbol denoting whether the movie $m$ belongs to the genre $g$. Notice the use of diagonal quantification in Axiom (3). When $user$, $movie$, and $rating$ are grounded with three sequences of values, the $i$-th value of each variable matches with the $i$-th values of the other variables. This is useful here since the dataset $D$ comes as a set of triples. Diagonal quantification allows forcing the satisfaction of Axiom (3) for these triples only, rather than for any combination of users, items, and ratings in $D$.
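To make the fuzzy semantics of Sect. 3.3 and Axiom (3) more concrete, the following is a minimal PyTorch sketch of the product-configuration operators and of how the truth value of Axiom (3) could be computed on a batch of (user, movie, rating) triples. It is an illustrative sketch only, not the authors' LTNtorch-based implementation; all function and tensor names (t_prod, n_s, i_r, me_p, likes, sim, U, I) are assumptions made for the example.

```python
import torch

# Product-configuration fuzzy operators (Sect. 3.3), applied element-wise.
def t_prod(u, v):            # conjunction: product t-norm
    return u * v

def n_s(u):                  # negation: standard fuzzy negation
    return 1.0 - u

def i_r(u, v):               # implication: Reichenbach implication
    return 1.0 - u + u * v

def me_p(truth_values, p=2.0, dim=None):
    # Universal quantifier: generalized mean w.r.t. the error values (ME_p).
    errors = (1.0 - truth_values) ** p
    mean_err = errors.mean() if dim is None else errors.mean(dim=dim)
    return 1.0 - mean_err ** (1.0 / p)

# Illustrative groundings: Likes is the MF score, Sim a similarity predicate in [0, 1].
def likes(U, I, users, movies):          # U: (n, k) and I: (m, k) latent factors
    return (U[users] * I[movies]).sum(dim=-1)

def sim(pred, target, alpha=0.1):        # exp(-alpha * squared error), so output in [0, 1]
    return torch.exp(-alpha * (pred - target) ** 2)

# Truth value of Axiom (3) on a batch of triples (diagonal quantification:
# the i-th user is paired only with the i-th movie and the i-th rating).
def axiom1_satisfaction(U, I, users, movies, ratings, p=2.0):
    return me_p(sim(likes(U, I, users, movies), ratings), p=p)
```

Diagonal quantification is what keeps this evaluation linear in the batch size: without it, Axiom (3) would have to be evaluated on every user-movie-rating combination of the batch.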
4.2 Grounding of the Knowledge Base

The grounding defines how the symbols of the language are mapped onto the real field, and hence how they can be used to construct the architecture of the LTN. In particular, given $D = \{(u, m, r)^{(j)}\}_{j=1}^{N}$: $\mathcal{G}(user) = \langle u^{(j)} \rangle_{j=1}^{N}$, namely $user$ is grounded as the sequence of the $N$ user indexes in $D$. $\mathcal{G}(movie) = \langle m^{(j)} \rangle_{j=1}^{N}$, namely $movie$ is grounded as the sequence of the $N$ movie indexes in $D$. $\mathcal{G}(rating) = \langle r^{(j)} \rangle_{j=1}^{N}$, with $r^{(j)} \in \{0, 1\}$ for all $j$, namely $rating$ is grounded as the sequence of the $N$ ratings in $D$, where $0$ denotes a negative rating and $1$ a positive one. $\mathcal{G}(rating^-) = 0$, namely $rating^-$ is grounded as the negative rating. $\mathcal{G}(genre) = \langle 1, \ldots, N_g \rangle$, namely $genre$ is grounded as the sequence of the $N_g$ genre indexes, where $N_g$ is the number of genres appearing in the movies of $D$. $\mathcal{G}(Likes \mid \mathbf{U}, \mathbf{I}) : u, m \mapsto \mathbf{U}_u \cdot \mathbf{I}_m^{\top}$, namely $Likes$ is grounded onto a function that takes as input a user index $u$ and a movie index $m$ and returns the prediction of the MF model for the user at index $u$ and the movie at index $m$, where $\mathbf{U} \in \mathbb{R}^{n \times k}$ and $\mathbf{I} \in \mathbb{R}^{m \times k}$ are the matrices of the users' and items' latent factors, respectively. $\mathcal{G}(LikesGenre) : u, g \mapsto \{0, 1\}$, namely $LikesGenre$ is grounded onto a function that takes as input a user index $u$ and a genre index $g$ and returns $1$ if user $u$ likes genre $g$ in the dataset, and $0$ otherwise. Similarly, $\mathcal{G}(HasGenre) : m, g \mapsto \{0, 1\}$, namely $HasGenre$ is grounded onto a function that takes as input a movie index $m$ and a genre index $g$ and returns $1$ if movie $m$ belongs to genre $g$ in the dataset, and $0$ otherwise. Finally, $\mathcal{G}(Sim) : \tilde{r}, r \mapsto \exp(-\alpha \|\tilde{r} - r\|^2)$, namely $Sim$ is grounded onto a function that computes the similarity between a predicted rating $\tilde{r}$ and a target rating $r$. The use of the exponential allows treating $Sim$ as a predicate, since the output is restricted to the interval $[0, 1]$. The squared error is used to penalize larger errors more heavily during the optimization. $\alpha$ is a hyper-parameter that controls the smoothness of the function.

Intuitively, Axiom (3) states that for each user-movie-rating triple in the dataset $D = \{(u, m, r)^{(j)}\}_{j=1}^{N}$, the prediction computed by the MF model for user $u$ and movie $m$ should be similar to the target rating $r$ provided by user $u$ for movie $m$. Instead, Axiom (4) states that for each possible combination of users, movies, and genres taken from the dataset, if user $u$ does not like a genre of movie $m$, then the prediction computed by the MF model for user $u$ and movie $m$ should be similar to the negative rating $rating^-$, namely the user should not like movie $m$. By forcing the satisfaction of Axiom (3), the model learns to factorize the user-item matrix using the ground truth, while Axiom (4) acts as a kind of regularization for the latent factors of the MF model.

4.3 Learning of the LTN

The objective of our LTN is to learn the latent factors in $\mathbf{U}$ and $\mathbf{I}$ such that the axioms in the knowledge base $\mathcal{K} = \{\phi_1, \phi_2\}$ are maximally satisfied, namely

$$\operatorname{argmax}_{\theta} \; \operatorname{SatAgg}_{\phi \in \mathcal{K}} \; \mathcal{G}_{(user, movie, rating) \leftarrow D}(\phi \mid \theta),$$

where $\theta = \{\mathbf{U}, \mathbf{I}\}$. The notation $(user, movie, rating) \leftarrow D$ means that the variables $user$, $movie$, and $rating$ are grounded with the triples taken from the dataset $D$, namely $user$ takes the sequence of user indexes, $movie$ the sequence of movie indexes, and $rating$ the sequence of ratings. In practice, this objective corresponds to the following loss function:

$$L(\theta) = \left(1 - \operatorname{SatAgg}_{\phi \in \mathcal{K}} \; \mathcal{G}_{(user, movie, rating) \leftarrow B}(\phi \mid \theta)\right) + \lambda \|\theta\|^2 \qquad (5)$$

where $B$ denotes a batch of training triples randomly sampled from $D$.
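As an illustration of how the groundings above combine into the loss of Eq. (5), the following is a hedged sketch, again in plain PyTorch rather than the authors' LTNtorch code, of the satisfaction of Axiom (4) and of one training step. It reuses the helpers from the previous sketch (t_prod, n_s, i_r, me_p, likes, sim); likes_genre and has_genre are assumed to be precomputed 0/1 lookup tables, and all names are illustrative.

```python
import torch

def axiom2_satisfaction(U, I, users, movies, genres, likes_genre, has_genre,
                        neg_rating=0.0, alpha=0.1, p=2.0):
    # Body of Axiom (4) on every (user, movie, genre) combination of the batch.
    # likes_genre: (num_users, num_genres) 0/1 tensor; has_genre: (num_movies, num_genres).
    antecedent = t_prod(n_s(likes_genre[users][:, None, genres].float()),
                        has_genre[movies][None, :, genres].float())
    prediction = U[users] @ I[movies].T                      # (B_users, B_movies) MF scores
    consequent = sim(prediction, neg_rating, alpha=alpha)    # similarity to the negative rating
    truth = i_r(antecedent, consequent[:, :, None])          # Reichenbach implication
    return me_p(truth, p=p)                                  # aggregate over all combinations

def training_step(U, I, batch, likes_genre, has_genre, optimizer, lam=1e-4, p_sat=2.0):
    # U and I are trainable tensors registered with the optimizer;
    # ratings is a float tensor of 0/1 targets. For simplicity the sketch grounds
    # genre with all genre indexes (the paper uses the genres of the batch movies).
    users, movies, ratings = batch
    genres = torch.arange(likes_genre.shape[1])
    sat1 = axiom1_satisfaction(U, I, users, movies, ratings)
    sat2 = axiom2_satisfaction(U, I, users, movies, genres, likes_genre, has_genre)
    sat_agg = me_p(torch.stack([sat1, sat2]), p=p_sat)        # SatAgg over the two axioms
    loss = (1.0 - sat_agg) + lam * (U.norm() ** 2 + I.norm() ** 2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The sketch makes the cost of Axiom (4) visible: its truth tensor has one dimension per free variable, so its size grows with the product of the numbers of users, movies, and genres in the batch, which is the scalability concern discussed in Sect. 6.1.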
As in Eq. (1), an L2 regularization term is added to the loss to prevent overfitting, and the hyper-parameter $\lambda$ defines the strength of the regularization. Notice that the loss does not specify how the variable $genre$ is grounded. Its grounding depends on the sampled batch $B$: in our experiments, we grounded it with the sequence of genres of the movies in the batch. It is worth highlighting that the loss function depends on the semantics used to approximate the logical connectives, quantifiers, and formula aggregating operator. In our experiments, we used the stable product configuration, a stable version of the product configuration introduced in [2]. Then, we selected $ME_p$ as formula aggregating operator, with $p = 2$.

This section presents the experiments we have performed with our method. They have been executed on an Apple MacBook Pro (2019) with a 2.6 GHz 6-Core Intel Core i7. The model has been implemented in Python using PyTorch. In particular, we used the LTNtorch library (https://github.com/logictensornetworks/LTNtorch). Our source code is freely available (https://github.com/tommasocarraro/LTNrec).

5.1 Dataset

In our experiments, we used the MindReader [5] dataset. It contains 102,160 explicit ratings collected from 1,174 real users on 10,030 entities (e.g., movies, actors, movie genres) taken from a knowledge graph in the movie domain. The explicit ratings in the dataset can be of three types: like (1), dislike (−1), or unknown (0). The dataset is subdivided into 10 splits. In our experiments, we used split 0. Each split has a training set, a validation set, and a test set. The training set contains both ratings given on movies and ratings given on the other entities, while the validation and test sets contain only ratings given on movies. The validation and test sets are built in such a way as to perform a leave-one-out evaluation. In particular, for each user of the training set, one random positive movie rating is held out for the validation set, and one for the test set. The validation/test example of the user is completed by adding 100 randomly sampled negative movie ratings from the dataset. To improve the quality of the dataset, we removed the unknown ratings. Moreover, we removed the top 2% most popular movies from the test set to reduce the popularity bias and hence see how the model performs on non-trivial recommendations, as suggested in [5]. Afterward, we considered only the training ratings given on movies and movie genres, since our model uses only this information. After these steps, we converted the negative ratings from −1 to 0. Our final dataset contains 962 users, 3,034 movies, 164 genres, 16,351 ratings on movies, and 10,889 ratings on movie genres. The density of the user-movie ratings is 0.37%.

5.2 Experimental Setting

In our experiments, we compared the performance of three models: (1) a standard MF model trained on the movie ratings of MindReader using Eq. (1), denoted as MF; (2) an LTN model trained on the movie ratings of MindReader using Eq. (5) with $\mathcal{K} = \{\phi_1\}$, denoted as LTN; and (3) an LTN model trained on the movie and genre ratings of MindReader using Eq. (5) with $\mathcal{K} = \{\phi_1, \phi_2\}$, denoted as LTN$_{genres}$. To compare the performance of the models, we used two widely used ranking-based metrics, namely hit@k and ndcg@k, explained in Sect. 5.3. In our experiments, we used the following procedure: (1) we generated additional training sets by randomly sampling 80%, 60%, 40%, and 20% of the movie ratings of each user from the entire training set, which is referred to as 100%.
Then, (2) for each training set $T_r \in \{100\%, 80\%, 60\%, 40\%, 20\%\}$ and for each model $m \in \{\text{MF}, \text{LTN}, \text{LTN}_{genres}\}$: (2a) we performed a grid search of model $m$ on training set $T_r$ to find the best hyper-parameters on the validation set, using hit@10 as the validation metric; then, (2b) we tested the performance of the best model on the test set in terms of hit@10 and ndcg@10. We repeated this procedure 30 times using seeds from 0 to 29. The test metrics have been averaged across these runs and are reported in Table 1. To limit the computational time, the grid search was performed only for the first run. Starting from the second run, step (2a) is replaced with the training of model $m$ on training set $T_r$ with the best hyper-parameters found during the first run. A description of the hyper-parameters tested in the grid searches, as well as the training details of the models, is given in Sect. 5.4.

5.3 Evaluation Metrics

The selected ranking-based metrics are defined as follows:

– hit@k: the Hit Ratio measures whether a testing item is placed in the top-k positions of the ranking, considering the presence of such an item as a hit.
– ndcg@k: the Normalized Discounted Cumulative Gain measures the quality of the recommendation based on the position of the target item in the ranking. In particular, it uses a monotonically increasing discount to emphasize the importance of higher ranks versus lower ones.

Formally, let us define $\omega(r)$ as the item at rank $r$, $\mathbb{I}[\cdot]$ as the indicator function, and $I_u$ as the set of held-out items for user $u$. hit@k for user $u$ is defined as

$$\mathrm{hit@}k(u, \omega) := \mathbb{I}\left[\sum_{r=1}^{k} \mathbb{I}[\omega(r) \in I_u] \geq 1\right].$$

The truncated discounted cumulative gain (dcg@k) for user $u$ is defined as

$$\mathrm{dcg@}k(u, \omega) := \sum_{r=1}^{k} \frac{2^{\mathbb{I}[\omega(r) \in I_u]} - 1}{\log(r + 1)}$$

and ndcg@k is the dcg@k linearly normalized to $[0, 1]$ after dividing by the best possible dcg@k, in which all the held-out items are ranked at the top. Notice that in this paper $|I_u| = 1$. Specifically, for each validation/test example, the scores for the positive movie and the 100 randomly sampled negative movies are computed using the $Likes(u, m)$ function (i.e., the dot product between user and movie latent factors). Then, a ranking is created based on these scores. The metrics evaluate the recommendation based on the position of the positive movie in the produced ranking.

5.4 Training Details

The hyper-parameters tested during the grid searches explained in Sect. 5.2 vary depending on the model. For all the models, we tried a number of latent factors $k \in \{1, 5, 10, 25\}$, a regularization coefficient $\lambda \in \{0.001, 0.0001\}$, a batch size in $\{32, 64\}$, and whether it was better to add users' and items' biases to the model. For LTN and LTN$_{genres}$, we tried $\alpha \in \{0.05, 0.1, 0.2\}$ for the predicate $Sim$ and used $p = 2$ for the aggregator $ME_p$ of Axiom (3). For LTN$_{genres}$, we tried $p \in \{2, 5\}$ for the aggregator $ME_p$ of Axiom (4). Notice that $\lim_{p \to \infty} ME_p(u_1, \ldots, u_n) = \min\{u_1, \ldots, u_n\}$. Intuitively, $p$ offers flexibility to account for outliers in the data: the higher $p$ is, the more the model will focus on the outliers. For all the models, the latent factors $\mathbf{U}$ and $\mathbf{I}$, for users and items respectively, have been randomly initialized using the Glorot initialization, while the biases have been initialized with values sampled from a normal distribution with zero mean and unit variance. All the models have been trained for 200 epochs using the Adam optimizer with a learning rate of 0.001.
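The two metrics of Sect. 5.3 are simple to compute under the leave-one-out protocol used here, where a single held-out positive is ranked against 100 sampled negatives. Below is a small, self-contained Python sketch of hit@k and ndcg@k under that assumption; it is illustrative and not taken from the paper's code, and the example scores are invented.

```python
import math

def hit_at_k(ranked_items, held_out, k=10):
    """1.0 if at least one held-out item appears in the top-k positions, else 0.0."""
    return 1.0 if any(item in held_out for item in ranked_items[:k]) else 0.0

def ndcg_at_k(ranked_items, held_out, k=10):
    """dcg@k normalized by the ideal dcg@k (held-out items ranked at the top)."""
    dcg = sum((2 ** (1 if item in held_out else 0) - 1) / math.log(r + 2)
              for r, item in enumerate(ranked_items[:k]))
    ideal = sum((2 ** 1 - 1) / math.log(r + 2)
                for r in range(min(len(held_out), k)))
    return dcg / ideal if ideal > 0 else 0.0

# Example with |I_u| = 1: the positive movie is ranked together with sampled negatives.
scores = {"pos": 0.91, "neg_1": 0.95, "neg_2": 0.40}        # illustrative Likes(u, m) scores
ranking = sorted(scores, key=scores.get, reverse=True)       # ["neg_1", "pos", "neg_2"]
print(hit_at_k(ranking, {"pos"}), ndcg_at_k(ranking, {"pos"}))
```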
For each training run, we used early stopping: learning is stopped if no improvement on the validation metric (i.e., hit@10) is observed for 20 epochs.

A comparison between MF, LTN, and LTN$_{genres}$ is reported in Table 1. The table reports the performance of the three models on a variety of tasks with different sparsity.

Table 1. Test hit@10 and ndcg@10 averaged across 30 runs. Standard deviations are reported between brackets.

% of training ratings  Metric    MF               LTN              LTN_genres
100%                   hit@10    0.4499(0.0067)   0.4636(0.0040)   0.4642(0.0054)
                       ndcg@10   0.1884(0.0028)   0.1899(0.0014)   0.1905(0.0022)
80%                    hit@10    0.4459(0.0057)   0.4585(0.0066)   0.4616(0.0069)
                       ndcg@10   0.1864(0.0023)   0.1881(0.0023)   0.1894(0.0025)
60%                    hit@10    0.4274(0.0107)   0.4475(0.0087)   0.4487(0.0080)
                       ndcg@10   0.1798(0.0039)   0.1853(0.0034)   0.1862(0.0031)
40%                    hit@10    0.3983(0.0105)   0.4087(0.0117)   0.4322(0.0102)
                       ndcg@10   0.1692(0.0047)   0.1726(0.0052)   0.1807(0.0049)
20%                    hit@10    0.2956(0.0196)   0.3764(0.0170)   0.3761(0.0160)
                       ndcg@10   0.1367(0.0093)   0.1594(0.0069)   0.1598(0.0068)

By looking at the table, it is possible to observe that LTN outperforms MF in all five tasks. In particular, for the dataset with 20% of the training ratings, the improvement is drastic, with a 27.33% increase in hit@10. We want to emphasize that the two models differ only in the loss function. This demonstrates that the LTN loss, based on fuzzy logic semantics, is beneficial in dealing with data sparsity. Then, with the addition of knowledge regarding the users' tastes across the movie genres, it is possible to further improve the results, as shown in the last column of the table. LTN$_{genres}$ outperforms the other models on almost all the tasks. For the dataset with 20% of the ratings, the hit@10 of LTN$_{genres}$ is slightly worse than that of LTN. This could be related to the quality of the training ratings sampled from the original dataset, as also suggested by the higher standard deviations associated with the sparser datasets.

6.1 Training Time

A comparison of the training times required by the models on the different datasets is presented in Table 2. The models have been trained for 200 epochs with a learning rate of 0.001, a batch size of 64, one latent factor (i.e., $k = 1$), without bias terms, and without early stopping. The other hyper-parameters do not affect the training time. In particular, LTN$_{genres}$ increases the time complexity considerably. This is due to Axiom (4), which has to be evaluated for each possible combination of users, items, and genres. This drawback can limit the applicability of LTN$_{genres}$ to datasets with a higher number of users and items, since more groundings of the formula have to be evaluated. Generally, when the number of groundings becomes huge, Logic Tensor Networks have scalability issues. However, it is possible to mitigate this problem by designing logical axioms which make use of diagonal quantification. This special quantification allows one to considerably reduce the number of evaluated groundings by explicitly specifying them. Eventually, by looking at the results in Sect. 6, it is possible to observe that the improvements of LTN$_{genres}$ w.r.t. LTN are marginal. This indicates that LTN can implicitly learn user preferences among movie genres without direct supervision. This finding suggests avoiding LTN$_{genres}$ in this particular scenario, since the underlying MF model is powerful enough while also being more efficient. We believe that LTN$_{genres}$ is best suited for extremely sparse datasets and cold-start scenarios.
We leave this investigation for future work.

Table 2. Training time in seconds.

% of training ratings  MF      LTN     LTN_genres
100%                   26.99   50.87   247.30
80%                    22.52   37.79   213.62
60%                    18.31   28.97   145.86
40%                    15.60   20.09   –
20%                     8.12   10.68   –

In this paper, we proposed to use Logic Tensor Networks to tackle the top-n recommendation task. We showed how, by design, LTN permits an easy integration of side information inside a recommendation model. We compared our LTN models with a standard MF model, in a variety of tasks with different sparsity, showing the benefits provided by the background knowledge, especially when the task is challenging due to data scarcity.

References

1. Aiolli, F.: Efficient top-n recommendation for very large scale binary rated datasets. In: Proceedings of the 7th ACM Conference on Recommender Systems, RecSys 2013, pp. 273–280. Association for Computing Machinery, New York (2013). https://doi.org/10.1145/2507157.2507189
2. Badreddine, S., d'Avila Garcez, A., Serafini, L., Spranger, M.: Logic tensor networks. Artif. Intell. 303, 103649 (2022). https://doi.org/10.1016/j.artint.2021.103649
3. Besold, T.R., et al.: Neural-symbolic learning and reasoning: a survey and interpretation (2017). https://doi.org/10.48550/ARXIV.1711.03902
4. Bhargava, P., Phan, T., Zhou, J., Lee, J.: Who, what, when, and where: multi-dimensional collaborative recommendations using tensor factorization on sparse user-generated data. In: Proceedings of the 24th International Conference on World Wide Web, WWW 2015, pp. 130–140. International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, CHE (2015). https://doi.org/10.1145/2736277.2741077
5. Brams, A.H., Jakobsen, A.L., Jendal, T.E., Lissandrini, M., Dolog, P., Hose, K.: MindReader: recommendation over knowledge graph entities with explicit user ratings. In: CIKM 2020, pp. 2975–2982. Association for Computing Machinery, New York (2020). https://doi.org/10.1145/3340531.3412759
6. Carraro, T., Polato, M., Aiolli, F.: A look inside the black-box: towards the interpretability of conditioned variational autoencoder for collaborative filtering. In: Adjunct Publication of the 28th ACM Conference on User Modeling, Adaptation and Personalization, UMAP 2020 Adjunct, pp. 233–236. Association for Computing Machinery, New York (2020). https://doi.org/10.1145/3386392.3399305
7. Carraro, T., Polato, M., Bergamin, L., Aiolli, F.: Conditioned variational autoencoder for top-n item recommendation. In: Pimenidis, E., Angelov, P., Jayne, C., Papaleonidas, A., Aydin, M. (eds.) ICANN 2022. LNCS, vol. 13530, pp. 785–796. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-15931-2_64
8. Catherine, R., Cohen, W.: Personalized recommendations using knowledge graphs: a probabilistic logic programming approach. In: Proceedings of the 10th ACM Conference on Recommender Systems, RecSys 2016, pp. 325–332. Association for Computing Machinery, New York (2016). https://doi.org/10.1145/2959100.2959131
9. Chen, H., Li, Y., Shi, S., Liu, S., Zhu, H., Zhang, Y.: Graph collaborative reasoning. In: Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining, WSDM 2022, pp. 75–84. Association for Computing Machinery, New York (2022). https://doi.org/10.1145/3488560.3498410
10. Chen, H., Shi, S., Li, Y., Zhang, Y.: Neural collaborative reasoning. In: Proceedings of the Web Conference 2021. ACM, April 2021. https://doi.org/10.1145/3442381.3449973
11. Daniele, A., Serafini, L.: Neural networks enhancement with logical knowledge (2020).
https://doi.org/10.48550/ARXIV.2009.06087
12. Gridach, M.: Hybrid deep neural networks for recommender systems. Neurocomputing 413, 23–30 (2020). https://doi.org/10.1016/j.neucom.2020.06.025. https://www.sciencedirect.com/science/article/pii/S0925231220309966
13. He, X., Liao, L., Zhang, H., Nie, L., Hu, X., Chua, T.S.: Neural collaborative filtering. In: Proceedings of the 26th International Conference on World Wide Web, WWW 2017, pp. 173–182. International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, CHE (2017). https://doi.org/10.1145/3038912.3052569
14. Hu, Y., Koren, Y., Volinsky, C.: Collaborative filtering for implicit feedback datasets. In: 2008 Eighth IEEE International Conference on Data Mining, pp. 263–272 (2008). https://doi.org/10.1109/ICDM.2008.22
15. Kimmig, A., Bach, S., Broecheler, M., Huang, B., Getoor, L., Mansinghka, V.: A short introduction to probabilistic soft logic, pp. 1–4 (2012). https://lirias.kuleuven.be/retrieve/204697
16. Koren, Y., Bell, R.: Advances in collaborative filtering. In: Ricci, F., Rokach, L., Shapira, B., Kantor, P. (eds.) Recommender Systems Handbook, pp. 145–186. Springer, Boston (2011). https://doi.org/10.1007/978-0-387-85820-3_5
17. Kouki, P., Fakhraei, S., Foulds, J., Eirinaki, M., Getoor, L.: HyPER: a flexible and extensible probabilistic framework for hybrid recommender systems. In: RecSys 2015, pp. 99–106. Association for Computing Machinery, New York (2015). https://doi.org/10.1145/2792838.2800175
18. van Krieken, E., Acar, E., van Harmelen, F.: Analyzing differentiable fuzzy logic operators. Artif. Intell. 302, 103602 (2022). https://doi.org/10.1016/j.artint.2021
19. Liang, D., Krishnan, R.G., Hoffman, M.D., Jebara, T.: Variational autoencoders for collaborative filtering. In: Proceedings of the 2018 World Wide Web Conference, WWW 2018, pp. 689–698. International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, CHE (2018). https://doi.org/10.1145/3178876.3186150
20. Ning, X., Karypis, G.: SLIM: sparse linear methods for top-n recommender systems. In: 2011 IEEE 11th International Conference on Data Mining, pp. 497–506 (2011). https://doi.org/10.1109/ICDM.2011.134
21. Polato, M., Aiolli, F.: Exploiting sparsity to build efficient kernel based collaborative filtering for top-n item recommendation. Neurocomputing 268, 17–26 (2017). Advances in Artificial Neural Networks, Machine Learning and Computational Intelligence. https://doi.org/10.1016/j.neucom.2016.12.090. https://www.sciencedirect.com/science/article/pii/S0925231217307592
22. Polato, M., Aiolli, F.: Boolean kernels for collaborative filtering in top-n item recommendation. Neurocomput. 286(C), 214–225 (2018). https://doi.org/10.1016/j.neucom.2018.01.057
23. Raedt, L.D., Kersting, K.: Statistical relational learning. In: Sammut, C., Webb, G.I. (eds.) Encyclopedia of Machine Learning, pp. 916–924. Springer, Boston (2010). https://doi.org/10.1007/978-0-387-30164-8_786
24. Rendle, S.: Factorization machines. In: 2010 IEEE International Conference on Data Mining, pp. 995–1000 (2010). https://doi.org/10.1109/ICDM.2010.127
25. Ricci, F., Rokach, L., Shapira, B.: Recommender systems: introduction and challenges. In: Ricci, F., Rokach, L., Shapira, B. (eds.) Recommender Systems Handbook, pp. 1–34. Springer, Boston (2015). https://doi.org/10.1007/978-1-4899-7637-6_1
26. Shenbin, I., Alekseev, A., Tutubalina, E., Malykh, V., Nikolenko, S.I.: RecVAE: a new variational autoencoder for top-n recommendations with implicit feedback. In: Proceedings of the 13th International Conference on Web Search and Data Mining. ACM, January 2020. https://doi.org/10.1145/3336191.3371831
27. Steck, H.: Embarrassingly shallow autoencoders for sparse data. In: The World Wide Web Conference, WWW 2019, pp. 3251–3257. Association for Computing Machinery, New York (2019). https://doi.org/10.1145/3308558.3313710
28. Su, X., Khoshgoftaar, T.M.: A survey of collaborative filtering techniques. Adv. Artif. Intell. (2009). https://doi.org/10.1155/2009/421425
29. Xian, Y., et al.: CAFE: coarse-to-fine neural symbolic reasoning for explainable recommendation. In: Proceedings of the 29th ACM International Conference on Information & Knowledge Management, CIKM 2020, pp. 1645–1654. Association for Computing Machinery, New York (2020). https://doi.org/10.1145/3340531.3412038
30. Xin, X., Chen, B., He, X., Wang, D., Ding, Y., Jose, J.: CFM: convolutional factorization machines for context-aware recommendation. In: Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, pp. 3926–3932. International Joint Conferences on Artificial Intelligence Organization, July 2019. https://doi.org/10.24963/ijcai.2019/545
31. Zhang, Y., Chen, X.: Explainable recommendation: a survey and new perspectives. Found. Trends Inf. Retrieval 14(1), 1–101 (2020). https://doi.org/10.1561/1500000066

Multiagent Systems

A Review of the Muddy Children Problem

Yusuf Izmirlioglu, Loc Pham, Tran Cao Son, and Enrico Pontelli

New Mexico State University, Las Cruces, NM 88003, USA
{yizmir,locpham}@nmsu.edu

Abstract. The "Muddy Children" puzzle is a well-known problem in the multi-agent epistemic reasoning literature; however, it has not been studied in other fields of Artificial Intelligence. In this paper, we present the "Muddy Children" problem as a challenge to the Artificial Intelligence and Computer Science community. The interesting aspect of this problem is that agents have asymmetric and incomplete information, and each agent needs to reason about his own knowledge as well as the knowledge of other agents. The existing solutions use Kripke structures and possible world semantics, which are not scalable for large problem sizes. Hence, we call for alternative solution methodologies and for discovering its relation to other problems in the applied sciences. We go over several variations of the Muddy Children puzzle and discuss the challenges for future research.

Keywords: Muddy children · Multi-agent systems · Epistemic reasoning · Analytical puzzles

In this paper, we present the "Muddy Children" problem as a challenge to the general Artificial Intelligence and Computer Science community. The Muddy Children is a well-known puzzle in the multi-agent epistemic reasoning literature; however, it has not been studied in other fields of Artificial Intelligence. This problem was originally introduced by [2]; it also appears in the literature under different names, such as "Three Wise Men" [12] and "Coloured Hat" [14]. The interesting aspect of this problem is that agents have asymmetric and incomplete information and they cannot directly disclose their knowledge to the other agents. Rather, an agent can only learn partial knowledge of others through their actions. Thus, agents need to perform sophisticated reasoning on the available information to reach the actual state.
In particular, this puzzle requires not only reasoning of an agent about himself, but also reasoning about other agents. That is, an agent needs to put himself "in the shoes of others" to infer their knowledge about the world. There are several existing solutions to this puzzle using possible world semantics and epistemic reasoning. These solutions employ Kripke structures as the representation of agents' knowledge, which have an exponential number of worlds in the number of agents. As such, they are not scalable to larger problem sizes. Furthermore, the existing methods cannot offer complete solutions to the variations of the problem which we present in this paper. Our objective in introducing the Muddy Children problem is to suggest research challenges and inspire new solution methodologies. We believe that the Muddy Children puzzle may have alternative or more efficient solutions using methodologies from Game Theory, Dynamic Programming, Constraint Programming, or other fields. In the rest of the paper, we first explain the problem and its formal definition, followed by possible variations, and then briefly go over the existing solutions.

The Muddy Children Problem

For understandability, let us first illustrate a particular instance of the Muddy Children problem with 3 children. We assume that all agents are truthful and that they are perfect reasoners. Each child can hear the announcements and observe the actions of the other agents. The children have played outside and then returned home together. Their father looks at the children and tells them that at least one of them has mud on his forehead. Each child can see the foreheads of the other children but not his own. Consequently, a child can observe whether the other children are muddy or not, but cannot identify his own status. The father asks the following question to the children: "Those of you who know whether you have mud on your own head or not, please raise your hand". But no child raises his hand. The father asks the same question a second time, and again no child raises his hand. However, when the father asks the same question a third time, all children raise their hands. How is this outcome possible, and how did the children understand their status the third time?

The resolution of the puzzle with 3 children is as follows. After the father's initial announcement, it is common knowledge that the number of muddy children is 1 or more. Similarly, at each round, a child's raising or not raising his hand is also a public announcement action which reveals his knowledge to the other agents. After child $i$ executes the (not) raising hand action, it is common knowledge that $i$ does (not) know whether he is muddy or not. At round 1, child 1 does not raise his hand, so it must be the case that at least one of child 2 or child 3 is muddy. Otherwise child 1 would infer that he is muddy, since there is at least one muddy child. Children 2 and 3 do not raise their hands either, hence at least one of child 1 or child 3 must be muddy, and at least one of child 1 or child 2 must be muddy. In sum, at the end of round 1, we understand that at least two children are muddy. Still, no child knows whether he is muddy or not, since none of them raised their hands at round 2. Now suppose that exactly two children are muddy, say children 1 and 2.
If this were the case, the two muddy children would raise their hands at round 2. The reasoning is as follows. At the end of round 1, child 1 would realize that he is muddy. Child 1 can observe that child 2 is muddy and child 3 is not muddy. He will think: "If I were not muddy, then child 2 would have raised his hand at round 1. Therefore I must be muddy". The situation is symmetric for the other muddy child 2, so he would also understand that he is muddy at the end of round 1. Therefore the number of muddy children cannot be two, and hence all children must be muddy.

Until now, we have made an analysis as an outsider who only reads the narrative but does not know the status of the children beforehand. Let us now look at the puzzle from the viewpoint of the individual children. We first examine the case of child 1. At the beginning of round 0, child 1 can observe that children 2 and 3 are muddy, hence the announcement action of his father does not change his beliefs. At the beginning of round 1, child 1 does not know whether he is muddy or not, and does not raise his hand. Child 1 also knows that child 2 observes that child 3 is muddy and vice versa, so the other children did not raise their hands in the first round, as he expected. After round 1, child 1 still does not know whether he is muddy or not, so he does not raise his hand in round 2. The other children did not raise their hands either. Then, after the actions in round 2, child 1 performs the following reasoning: "If I were not muddy, then children 2 and 3 would have raised their hands in round 2, because in round 1 no one raised their hands. Assuming that I am not muddy, at the beginning of round 2, child 2 would think that if he were not muddy, child 3 would have raised his hand in round 1. Hence child 2 would understand that he is muddy and raise his hand at round 2. Since this did not happen, I must be muddy!" Therefore, at the beginning of round 3, child 1 realizes that he is muddy and he raises his hand at this round. Since all children are muddy and their actions are the same at every round, the analysis is symmetric for children 2 and 3.

In the previous section, we have made an analysis for a specific instance of the problem with 3 children, all of them muddy. What would be the outcome if the number of children or muddy children were different? We now provide some results about the game with different parameters.

Theorem 1. In the muddy children problem, suppose that there are $n$ children and $\ell$ of them are muddy, $1 \leq \ell \leq n$. At round 0, the father announces that at least one child is muddy, and in the consecutive rounds he asks every child whether he knows whether he is muddy or not. Then, at round $\ell$, all muddy children will raise their hands, and at round $\ell + 1$ all non-muddy children (if any) will raise their hands.

Theorem 1 and its proof have been developed by [1,5]. Using a similar induction technique, we can establish the next theorem.

Theorem 2. In the muddy children problem, suppose that there are $n$ children and $\ell$ of them are muddy, $1 \leq \ell \leq n$. If the father announces that at least $q$ children are muddy at round 0, then all muddy children will raise their hands at round $\ell - q + 1$ and all non-muddy children (if any) will raise their hands at round $\ell - q + 2$.

Possible Worlds Semantics

This section provides background information about possible world semantics. A Kripke structure represents the agents' own beliefs and their beliefs about other agents using possible worlds.
Properties of the world are represented by binary-valued atomic propositions called fluents. A world is a complete interpretation of the fluents. Beliefs of the agents are encoded by accessibility relations between possible worlds. Let us now provide the formal definition of the Kripke structure and the semantics of belief formulae.

A multi-agent domain $\langle \mathcal{AG}, \mathcal{F} \rangle$ includes a finite and non-empty set of agents $\mathcal{AG}$ and a finite set of fluents $\mathcal{F}$ encoding the properties of the world. Belief formulae over $\langle \mathcal{AG}, \mathcal{F} \rangle$ are defined by the BNF:

$$\varphi ::= p \mid \neg\varphi \mid (\varphi \wedge \varphi) \mid (\varphi \vee \varphi) \mid \mathbf{B}_i \varphi \mid \mathbf{E}_\alpha \varphi \mid \mathbf{C}_\alpha \varphi$$

where $p \in \mathcal{F}$ is a fluent, $i \in \mathcal{AG}$, and $\emptyset \neq \alpha \subseteq \mathcal{AG}$. $\mathbf{B}_i$ is the belief operator, and $\mathbf{B}_i \varphi$ stands for "agent $i$ believes in formula $\varphi$". $\mathbf{E}_\alpha \varphi$ and $\mathbf{C}_\alpha \varphi$ denote group belief formulae and their semantics are defined below; intuitively, $\mathbf{E}_\alpha \varphi$ indicates that all agents in $\alpha$ believe $\varphi$, while $\mathbf{C}_\alpha \varphi$ indicates that $\varphi$ is common belief among $\alpha$. We refer to a belief formula which does not contain any occurrence of $\mathbf{B}_i$, $\mathbf{E}_\alpha$, $\mathbf{C}_\alpha$ as a fluent formula. Let $\mathcal{L}_{\mathcal{AG}}$ denote the set of belief formulae over $\langle \mathcal{AG}, \mathcal{F} \rangle$. To exemplify, in the muddy children domain, the fluents $\mathcal{F} = \{m_1, \ldots, m_n\}$ denote whether each child is muddy. The fluent formula $m_1 \vee m_2 \vee \ldots \vee m_n$ states that at least one child is muddy; the belief formula $\neg \mathbf{B}_2 m_1 \wedge \neg \mathbf{B}_2 \neg m_1$ states that agent 2 does not know whether agent 1 is muddy or not.

A Kripke structure $M$ is a tuple $\langle S, \pi, \mathcal{B}_1, \ldots, \mathcal{B}_n \rangle$, where $S$ is a set of worlds (denoted by $M[S]$), $\pi : S \rightarrow 2^{\mathcal{F}}$ is a function that associates an interpretation of $\mathcal{F}$ to each element of $S$ (denoted by $M[\pi]$), and, for $i \in \mathcal{AG}$, $\mathcal{B}_i \subseteq S \times S$ is a binary relation over $S$ (denoted by $M[i]$). For convenience, we will often draw a Kripke structure $M$ as a directed labeled graph, whose set of labeled nodes represents $S$ and whose set of labeled edges contains $s \xrightarrow{i} t$ iff $(s, t) \in \mathcal{B}_i$. The label of each node has two parts: the name of the world followed by the associated interpretation. For $u \in S$ and a fluent formula $\varphi$, $M[\pi](u)$ and $M[\pi](u)(\varphi)$ denote the interpretation associated to $u$ via $\pi$ and the truth value of $\varphi$ with respect to $M[\pi](u)$, respectively. For a world $u \in M[S]$, $(M, u)$ is a pointed Kripke structure, also called a state hereafter. The accessibility relations of an agent in the Kripke structure express the uncertainty in his beliefs. That is, if the agent considers multiple worlds with different valuations, then his beliefs involve uncertainty. Satisfaction of belief formulae is defined over pointed Kripke structures [7]. Given a belief formula $\varphi$, a Kripke structure $M = \langle S, \pi, \mathcal{B}_1, \ldots, \mathcal{B}_n \rangle$, and a state $u \in S$:

(i) $(M, u) \models p$ if $p \in \mathcal{F}$ and $M[\pi](u) \models p$;
(ii) $(M, u) \models \neg\varphi$ if $(M, u) \not\models \varphi$;
(iii) $(M, u) \models \varphi_1 \vee \varphi_2$ if $(M, u) \models \varphi_1$ or $(M, u) \models \varphi_2$;
(iv) $(M, u) \models \varphi_1 \wedge \varphi_2$ if $(M, u) \models \varphi_1$ and $(M, u) \models \varphi_2$;
(v) $(M, u) \models \mathbf{B}_i \varphi$ if $(M, t) \models \varphi$ for every $t$ such that $(u, t) \in \mathcal{B}_i$;
(vi) $(M, u) \models \mathbf{E}_\alpha \varphi$ if $(M, u) \models \mathbf{B}_i \varphi$ for every $i \in \alpha$;
(vii) $(M, u) \models \mathbf{C}_\alpha \varphi$ if $(M, u) \models \mathbf{E}_\alpha^k \varphi$ for every $k \geq 0$, where $\mathbf{E}_\alpha^0 \varphi = \varphi$ and $\mathbf{E}_\alpha^{k+1} \varphi = \mathbf{E}_\alpha(\mathbf{E}_\alpha^k \varphi)$.

Formal Definition of the Muddy Children Problem

We describe the Muddy Children problem as $\langle \mathcal{AG}, I, A \rangle$, where $\mathcal{AG} = \{f, 1, \ldots, n\}$ is the set of agents, $I$ is the set of children who are muddy, and $A = \{announce\_atleast\_one, raise\_hand_i, not\_raise\_hand_i\}$ is the set of possible actions, $i \in \{1, \ldots, n\}$. Here $n$ is the number of children and $\ell = |I|$ is the number of muddy children. The action $announce\_atleast\_one$ denotes the announcement of the father that at least one child is muddy.
$raise\_hand_i$ denotes the raising hand action of child $i$, and $not\_raise\_hand_i$ denotes child $i$ not raising his hand. For the particular problem instance in the introduction, $n = 3$ and $I = \{1, 2, 3\}$. The game proceeds as follows. At round 0, the father executes the action $announce\_atleast\_one$. Then, at round $j \geq 1$, every child $i$ executes exactly one of the $raise\_hand_i$ or $not\_raise\_hand_i$ actions, $i \in \{1, \ldots, n\}$. The actions of the children are simultaneous. We assume that all agents announce truthfully. At each round, all agents observe the actions of the other agents and update their beliefs accordingly. The game ends at round $k$ when all children raise their hands.

Related Literature and Existing Solutions

The existing solutions to the Muddy Children puzzle employ possible world semantics and epistemic reasoning methods. The state of the world is represented by a Kripke structure with an external view. Namely, the Kripke structure shows the unique actual world and the agents' beliefs by accessibility relations to other possible worlds. The Kripke structure is updated by a state transition function upon an action. If the actions of the children are simultaneous, the state transition function treats them as a single action and updates the Kripke structure once. We now provide the details of the solutions in the literature.

6.1 Eliminating Possible Worlds

Let $D = \langle \mathcal{AG}, I, A, \mathcal{F} \rangle$ be the Muddy Children domain. We illustrate the solution of [5,10] with $n = 3$ children of which $\ell = 2$ are muddy, where the father announces that at least $q = 1$ child is muddy. Their method also works for other values of $n$, $\ell$, $q$. For this instance, $\mathcal{AG} = \{f, 1, 2, 3\}$, $I = \{1, 2\}$, $\mathcal{F} = \{m_1, m_2, m_3\}$, and $A = \{atleast\_one\_muddy, know\_muddy_i, not\_know\_muddy_i\}$ for $i \in \{1, 2, 3\}$. The actions are modelled as epistemic actions, i.e., announcements of belief formulae. For example, the action $know\_muddy_i$ announces the belief formula $\mathbf{B}_i m_i \vee \mathbf{B}_i \neg m_i$.

The formal definition of the transition function is as follows. Suppose that the initial state and the agents' beliefs are represented by a pointed Kripke structure $(M, s)$, where $s$ is the actual world (i.e., the "real" state of affairs). Consider the occurrence of an action $a$ which announces the belief formula $\gamma$ to all agents, and an agent $i$ who observes the action occurrence. In the next state $(M', s)$, the set of worlds, their valuations, and the actual world remain the same, but agent $i$ revises his accessibility relations such that $(u, v) \in M'[i]$ iff $(u, v) \in M[i]$ and $(M, v) \models \gamma$. The Kripke structure showing the actual world and the beliefs of the agents at the beginning of the problem is depicted in Fig. 1(a). There are $2^3 = 8$ possible worlds encoding the different combinations of $m_1, m_2, m_3$. To make the figure easy to read, each world is represented by its valuation of the fluents; e.g., in the world 100, child 1 is muddy (i.e., $m_1$ is true) but children 2 and 3 are not (i.e., $m_2, m_3$ are false). In the actual world, only children 1 and 2 are muddy (denoted by a double circle in the figure). The accessibility relations of the agents show the worlds that they consider possible, and the uncertainty in their beliefs. In the actual world, child 1 considers both 110 and 010 possible, since he cannot distinguish these two worlds based on his knowledge. Child 2 considers the worlds 110 and 100 possible. As another example, in the world 100, child 3 considers 100 and 101 possible.
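As a concrete illustration of these definitions, the following minimal Python sketch (illustrative only, not the encoding used in [5,10] or [1]) represents the worlds and initial accessibility relations of the 3-children instance and checks the satisfaction of B_i formulae as in item (v) of the satisfaction relation. The names of the helper functions are assumptions made for the example.

```python
from itertools import product

AGENTS = [1, 2, 3]
WORLDS = ["".join(bits) for bits in product("01", repeat=3)]   # "000", ..., "111"

def muddy(world, i):
    """Truth value of fluent m_i in a world encoded as a bit string."""
    return world[i - 1] == "1"

def initial_relations():
    # Child i cannot see his own forehead, so in any world he considers possible
    # every world that agrees with it on the other children's statuses.
    return {i: {(u, v) for u in WORLDS for v in WORLDS
                if all(muddy(u, j) == muddy(v, j) for j in AGENTS if j != i)}
            for i in AGENTS}

def believes(relations, world, i, phi):
    # (M, world) |= B_i phi iff phi holds in every world accessible for i from `world`.
    return all(phi(v) for (u, v) in relations[i] if u == world)

relations = initial_relations()
actual = "110"
# Child 1 does not know whether he is muddy: neither B_1 m_1 nor B_1 (not m_1) holds.
print(believes(relations, actual, 1, lambda w: muddy(w, 1)))        # False
print(believes(relations, actual, 1, lambda w: not muddy(w, 1)))    # False
# But child 1 does believe that child 2 is muddy.
print(believes(relations, actual, 1, lambda w: muddy(w, 2)))        # True
```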
By the nature of the Kripke structure, the accessibility relations capture the belief/knowledge of an agent about the beliefs of other agents (higher-order beliefs). According to the semantics of entailment explained in Sect. 4, in the actual world 110, child 1 believes that child 2 believes that child 3 is not muddy, child 1 believes that child 2 does not know whether he is muddy or not, and child 1 believes that child 2 knows whether child 1 is muddy or not. In reality, child 1 does not know his own status, but he knows that child 2 knows the status of child 1.

Fig. 1. (a) The initial state (b) At the end of round 0

The method of [5,10] eliminates the accessibility relations to those worlds which do not satisfy the announced belief formula. After the father's announcement, the children update their beliefs by removing their accessibility relations to the world 000, which does not satisfy the announced belief formula $\gamma_1 = m_1 \vee m_2 \vee m_3$. Namely, since all children hear the announcement, they stop considering this world possible. The updated Kripke structure representing the agents' beliefs at the end of round 0 is shown in Fig. 1(b). After round 0, in the actual world, none of the children knows whether they are muddy or not, i.e., $(M, 110)$ entails the belief formulae $\neg \mathbf{B}_1 m_1 \wedge \neg \mathbf{B}_1 \neg m_1$, $\neg \mathbf{B}_2 m_2 \wedge \neg \mathbf{B}_2 \neg m_2$, $\neg \mathbf{B}_3 m_3 \wedge \neg \mathbf{B}_3 \neg m_3$. Thus, at round 1, the children do not raise their hands. The children's actions are simultaneous and all of them can observe each other's actions. We can consider the three simultaneous actions as a single epistemic action announcing the belief formula $\gamma_2 = (\neg \mathbf{B}_1 m_1 \wedge \neg \mathbf{B}_1 \neg m_1) \wedge (\neg \mathbf{B}_2 m_2 \wedge \neg \mathbf{B}_2 \neg m_2) \wedge (\neg \mathbf{B}_3 m_3 \wedge \neg \mathbf{B}_3 \neg m_3)$. Upon this action, the agents update their beliefs of Fig. 1(b) by removing their accessibility relations to the worlds which do not satisfy $\gamma_2$. Note that the world 100 does not satisfy child 1 not knowing his status, the world 010 does not satisfy child 2 not knowing his status, and the world 001 does not satisfy child 3 not knowing his status. Hence the agents remove their accessibility relations to the worlds 100, 010, 001. The new Kripke structure at the end of round 1 is depicted in Fig. 2(a). In the actual world of the updated structure, children 1 and 2 now know that they are muddy, while child 3 does not know whether he is muddy or not. In round 2, children 1 and 2 raise their hands but child 3 does not. Similarly to round 1, we can consider this as a single epistemic action which announces the belief formula $\gamma_3 = (\mathbf{B}_1 m_1 \vee \mathbf{B}_1 \neg m_1) \wedge (\mathbf{B}_2 m_2 \vee \mathbf{B}_2 \neg m_2) \wedge (\neg \mathbf{B}_3 m_3 \wedge \neg \mathbf{B}_3 \neg m_3)$. Since all children observe this action, each of them removes the edges to the worlds which do not satisfy $\gamma_3$. The worlds 011, 111, 101 in Fig. 2(a) do not satisfy $\gamma_3$, because in 011 and 111 child 1 does not know whether he is muddy or not, and in 101 child 2 does not know whether he is muddy or not. Consequently, the agents remove the edges to the worlds 011, 111, 101. The updated Kripke structure at the end of round 2 is shown in Fig. 2(b). Now child 3 also knows his status: he is not muddy. In fact, all children know the actual state of the world at the end of round 2. Therefore all children raise their hands at round 3 and the game ends.

6.2 Logic Programming

Baral et al. [1] use Answer Set Programming (ASP) to solve the Muddy Children problem. They develop an ASP program, i.e., a set of logical rules, to encode the beliefs, the actions, and the state transition.
The advantage of Answer Set Programming is that the state transition and the entailment of belief formulae can be computed by simple logical rules. In the ASP formulation, the possible worlds and the accessibility relations of the Kripke structure are represented by propositional atoms. The initial beliefs of the agents are given as input to the ASP program, and the initial state and the initial accessibility relations are nondeterministically generated so as to satisfy the given beliefs. In their model, the father tells the children that at least one of them is muddy at step 0. This action is encoded as an epistemic action which announces a belief formula. In step 1, and in the odd-indexed steps, the father executes the ask action. In step 2, and in the even-indexed steps, the children reply "Yes" or "No" simultaneously. The occurrences of the ask, Reply Yes, and Reply No actions are represented by the logical atoms occ(ask,T), occ(announce_k(A,true),T), and occ(announce_k(A,false),T), respectively, where A is the agent and T is the time step. The ask action does not change the beliefs of the agents, hence the same Kripke structure carries over to the next step. However, announcement actions alter the agents' beliefs and change the Kripke structure. Their state transition works as follows. At every step, the entailment of belief formulae at each world is computed by a set of ASP rules. After the father or the children announce a belief formula, the worlds which do not satisfy the announced formula are identified, and the accessibility relations of the agents to these worlds are removed from the structure. Hence the children commonly observe the effect of every action and update their beliefs. The game continues until all children answer Yes. The authors give an example with 3 children (children 1 and 2 are muddy) and show that the ASP program yields an answer set in which all children respond Yes at step 6, as expected. They also prove a general proposition which states that if there are $\ell$ muddy children, then the father must ask $\ell$ questions before all children answer Yes.

6.3 Other Potential Solutions

Alternative solutions to the Muddy Children problem may be developed in the future using Game Theory, Mathematical Programming, or other fields. One potential solution may be modeling it as an incomplete information game: agents' strategies depend on the history of actions, and they update their beliefs accordingly. Another solution to the puzzle could be based on Mathematical Programming. With constraint programming, we can impose constraints on the number of muddy children and on the possible configurations of the children. Constraint rules can then eliminate some configurations based on the actions in the previous rounds. Dynamic programming can be used to memoize the possible configurations that agents consider at every round. Then the children's actions are computed from their beliefs, and some configurations can be eliminated at the next state.

Variations of the Muddy Children Puzzle

There are several variations of the Muddy Children puzzle with respect to the father's announcement, the order of the children's announcements, the abilities of the children, and the mistake factor.

Father's Announcement: In one variation, also discussed in [8], the father can make an announcement of the form "Q of you have mud on their foreheads", where Q can be substituted by quantifiers such as "At least q", "At most q", "Exactly q", "An even number", etc.
We assume that the father can see the foreheads of all children and always tells the truth.

Order of Children: In the original formulation, at every round the actions of the children are simultaneous. In another setting, the children can take actions in a sequential manner. This is equivalent to a single child taking an action at every round. The children can act in a predetermined fixed order (i.e., a permutation of (1, ..., n)) or in a random order. When child i makes his announcement action, the other children update their beliefs, and the process goes on with the next child in the sequence. We represent the order of the children by O. If the actions of the children are sequential, O is a permutation of (1, 2, ..., n); if their actions are simultaneous, O = ∅.

Agent Abilities: We can also imagine an alternative scenario where some children lack a subset of the sensing or action abilities. For example, some children cannot see the foreheads of the other children and/or cannot observe when other children raise their hands. Moreover, some of the children may not be able to raise their own hands. Let W = (X, Y, Z) denote the abilities of the children, where X, Y, Z are the sets containing the indices of the children who cannot see the foreheads of others, the children who cannot observe the actions of other children, and the children who cannot raise their hands, respectively. Note that a child may lack multiple abilities (i.e., the three sets may not be disjoint). We assume that the sensing and action abilities of the children are common knowledge among all agents. The children who cannot see the foreheads of other children will consider the actions of other children to update their beliefs and reach the actual state. The children who cannot observe the actions of other children still know the number of rounds of the problem from the father's announcements, and hence may infer the number of muddy children. Consequently, each child needs to take into account the abilities of the others while reasoning and updating his beliefs.

Rationality and Mistakes: Another feasible case is that not all children are perfect reasoners. Some of them are boundedly rational and can sometimes make mistakes in their reasoning. These agents are not able to process all the available information, and therefore their beliefs might differ from those of a perfectly reasoning agent. Hence the announcement action of these children might be incorrect. If the identities of the boundedly rational children are common knowledge, this case can be handled simply by disregarding the actions of the boundedly rational children. Another case is that the perfectly rational children commonly know that there are exactly b (or at least b, or at most b) boundedly rational children but do not know their identities. Then the children need to carry out more sophisticated reasoning to resolve this case. We denote the information about boundedly rational children by U. In the former case, U is a set which includes the indices of the boundedly rational children; in the latter case, U = [b, b].

Considering all the above variations, we describe the general Muddy Children problem by D = ⟨AG, I, A, O, W, U⟩. The set of actions A includes the father's various announcement actions with different cardinality Q. Namely, A = {number_muddy_Q, know_muddy_i, not_know_muddy_i}, where i ∈ {1, .., n} and Q is an identifier like "at least q", "odd", "prime".
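For concreteness, the general description D = ⟨AG, I, A, O, W, U⟩ can be packaged as a small data structure. The field names, types, and the example instance below are illustrative assumptions mirroring the tuple just introduced, not a formalisation taken from the cited works.

```python
from dataclasses import dataclass
from typing import FrozenSet, Optional, Tuple, Union

@dataclass(frozen=True)
class Abilities:
    """W = (X, Y, Z): children who cannot see others' foreheads, cannot
    observe others' actions, and cannot raise their own hands."""
    cannot_see: FrozenSet[int] = frozenset()
    cannot_observe: FrozenSet[int] = frozenset()
    cannot_raise: FrozenSet[int] = frozenset()

@dataclass(frozen=True)
class MuddyChildrenProblem:
    """General problem description D = <AG, I, A, O, W, U>."""
    agents: Tuple[int, ...]                    # AG
    initial_state: FrozenSet[str]              # I: literals and belief formulae
    actions: FrozenSet[str]                    # A: announcement action identifiers
    order: Optional[Tuple[int, ...]] = None    # O: None encodes simultaneous actions
    abilities: Abilities = Abilities()         # W
    bounded: Union[FrozenSet[int], Tuple[int, int], None] = None  # U

# The instance of Fig. 1 (children 1 and 2 muddy), simultaneous actions,
# full abilities, and all agents perfectly rational.
classic = MuddyChildrenProblem(
    agents=(1, 2, 3),
    initial_state=frozenset({"m1", "m2", "not_m3"}),
    actions=frozenset({"number_muddy_at_least_1", "know_muddy_i", "not_know_muddy_i"}),
)
```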
The Active Muddy Child Problem: The Active Muddy Child [10] is another version of the Muddy Children problem in which a particular child, with index k, needs to find out whether he is muddy or not by asking questions. There are n children and of them are non-muddy. The father makes an announcement action at round 0, as before. The active child asks an individual child at each round whether he is muddy or not. The requested child answers the question truthfully and all agents listen to his response. The problem is to find the optimal strategy for the active child to achieve his goal in the smallest number of time steps. Note that a strategy is a conditional plan which specifies the index of the next child to ask depending on a history of children responses. Now we describe the current challenges in the Muddy Children problem and its variations, which need to be addressed for future research. Representation of the State: The initial state H of an epistemic problem is generally given as a set of the literals for the actual world and the agents’ beliefs (including their beliefs about other agents). For the muddy children instance in Sect. 2, the initial state is1 H = {m1 , m2 , m3 , C(¬B1 m1 ∧ ¬B1 ¬m1 ), C B1 m2 , C B1 m3 , ..., C B3 m1 , C B3 m2 , C(¬B3 m3 ∧ ¬B3 ¬m3 )}. However, in epistemic reasoning, state transition functions and entailment of belief formulae are defined over pointed Kripke structures. Thus, we need to determine the Kripke structure(s) which corresponds to the initial state of the epistemic problem; unfortunately, in general, the resulting Kripke structure may not be unique [13]. This is indeed the issue in epistemic reasoning and planning 1 When we omit the set of agents in the formulae Cα , we assume α = AG. Muddy Children Problem methods. The state should be represented as a set of belief formulae B, but it is represented by a Kripke structure (Mt , st ) where t is the time point and st is the actual world. Then the state transition function Φ applies to the current Kripke structure to obtain the next structure i.e. (Mt+1 , st+1 ) = Φ((Mt , st ), a). Some authors [9,11] have developed transition functions which operate on the set of beliefs. The belief set is revised in order to incorporate the incoming belief formulae. However [9] allows only propositional common knowledge, and the method of [11] requires prespecification of agents’ beliefs at the next time step for each possible belief formula in the current time step, in the action description. Ideally, agents’ beliefs at the next time step should arise endogenously as an outcome of the model, instead of being given as an input. Individual View: The existing solutions look at the problem from an external view. However, in reality, agents observe the world from their own private individual perspective. Each child has his own Kripke structure representing his beliefs and does not know the actual world. An example of an individual Kripke structure of child 2 is shown in Fig. 3(a). The actual world is 100 but child 2 considers two worlds 100 and 110 possible. Fig. 3. (a) A private view (b) A contingency As in external view, removing accessibility relations to the worlds which do not satisfy announced formula also works for the individual view of the Muddy Children problem. However this edge removal method is found to be problematic for other epistemic problems which involve multiple possible worlds [3,4]. The reason is that after removing edges, an agent might end up considering no world possible. 
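A two-world toy case, using the same illustrative edge-removal update sketched earlier rather than a method from the cited works, makes this failure mode concrete: an agent whose only accessible world contradicts a truthful announcement is left with no accessible world at all.

```python
# Worlds: p is true in w1 and false in w2; the actual world is w1, but the
# agent (wrongly) considers only w2 possible.
worlds = {"w1": {"p": True}, "w2": {"p": False}}
edges = {("w1", "w2"), ("w2", "w2")}        # the agent's accessibility relation

def announce_p(edges):
    """Edge removal for the truthful public announcement of p."""
    return {(u, v) for (u, v) in edges if worlds[v]["p"]}

print(announce_p(edges))   # set(): no world is considered possible any more,
                           # so the agent vacuously believes every formula
```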
As an alternative approach, researchers apply the action sequence to each possible world in the initial structure as a contingency [3,4,6]. The intuition is that the agent considers each of those worlds as if it is the actual world Y. Izmirlioglu et al. and examines the outcome upon a sequence of actions. Then the state transition function yields branching on contingencies. But this method might produce counter-intuitive results for the Muddy Children problem as in the following example: In the Kripke structure in Fig. 3(b), child 2 considers as if world 110 is the actual world. He updates the structure upon every action using the same edge removal method and obtains its final form in Fig. 4. However this Kripke structure is not realistic for the individual view of child 2: He believes that the actual world is 110 but he believes in another world 100! Thus how to solve the Muddy Children problem using a distributed setting and how to make the state transition for an individual agent is a challenge for future research. Fig. 4. The outcome if the actual world is 110 Variations: Whether variations of Muddy Children can be solved by the existing methods or other potential methods (discussed in Sect. 6.3) is an open problem. The cardinality of the muddy children in the father’s announcement can be represented by possible worlds in the Kripke structure or Constraint Programming. The state transition function in the Kripke structure or Dynamic programming can be modified to incorporate agents’ abilities. If the number (or range) of the boundedly rational children is known, this case can be handled by considering all possible candidate subsets of children as boundedly rational. Then a perfectly rational agent will revise the possible worlds he considers, by pooling those candidate subsets of boundedly rational agents. Implementing these methods is a direction for future research. The Muddy Children is a famous problem in epistemic reasoning literature but is not widely known in other fields of Artificial Intelligence, Computer Science, Game Theory. The challenge of this problem is that it is a repeated game and requires sophisticated reasoning about other agents’ beliefs at every round. Besides, the agents cannot directly reveal their knowledge to other agents but they need to infer other agents’ knowledge from their actions. This paper have introduced the Muddy Children puzzle and variations to the general AI and Computer Science community. We have provided some theorems about the outcome for some variations of the problem. We have illustrated the existing solutions of the puzzle which use epistemic reasoning methods and stressed that they are not scalable and cannot solve all variations. In our opinion, the Muddy Children problem may be related to other problems in AI and Game Theory, and may stimulate further research ideas and solution methodologies in other fields. Muddy Children Problem References 1. Baral, C., Gelfond, G., Son, T.C., Pontelli, E.: Using answer set programming to model multi-agent scenarios involving agents’ knowledge about other’s knowledge. In: Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems, vol. 1, pp. 259–266 (2010) 2. Barwise, J.: Scenes and other situations. J. Philos. 78(7), 369–397 (1981) 3. Bolander, T., Andersen, M.: Epistemic planning for single and multi-agent systems. J. Appl. Non-Classical Logics 21(1) (2011) 4. Bolander, T.: A gentle introduction to epistemic planning: the del approach. 
arXiv preprint arXiv:1703.02192 (2017) 5. Van Ditmarsch, H., van der Hoek, W., Kooi, B.: Dynamic Epistemic Logic, 1st edn. Springer, Heidelberg (2007) 6. Engesser, T., Bolander, T., Mattmüller, R., Nebel, B.: Cooperative epistemic multiagent planning for implicit coordination. In: Ghosh, S., Ramanujam, R. (eds.) Proceedings of the Ninth Workshop on Methods for Modalities, M4M@ICLA 2017, Indian Institute of Technology, Kanpur, India, 8th to 10th January 2017. EPTCS, vol. 243, pp. 75–90 (2017) 7. Fagin, R., Halpern, J., Moses, Y., Vardi, M.: Reasoning About Knowledge. MIT Press, Cambridge (1995) 8. Gierasimczuk, N., Szymanik, J.: A note on a generalization of the muddy children puzzle. In: Proceedings of the 13th Conference on Theoretical Aspects of Rationality and Knowledge, pp. 257–264 (2011) 9. Huang, X., Fang, B., Wan, H., Liu, Y.: A general multi-agent epistemic planner based on higher-order belief change. In: Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI 2017) (2017) 10. Kominis, F., Geffner, H.: Beliefs in multiagent planning: from one agent to many. In: Proceedings of the Twenty-Fifth International Conference on Automated Planning and Scheduling, ICAPS 2015, Jerusalem, Israel, 7–11 June 2015, pp. 147–155 (2015) 11. Liu, Q., Liu, Y.: Multi-agent epistemic planning with common knowledge. In: Lang, J. (ed.) Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, Stockholm, Sweden, 13–19 July 2018, pp. 1912–1920. ijcai.org (2018) 12. McCarthy, J.: Formalization of two puzzles involving knowledge. Formalizing Common Sense: Papers by John McCarthy, pp. 158–166 (1990) 13. Son, T.C., Pontelli, E., Baral, C., Gelfond, G.: Finitary S5-theories. In: Fermé, E., Leite, J. (eds.) JELIA 2014. LNCS (LNAI), vol. 8761, pp. 239–252. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-11558-0_17 14. van Tilburg, G.: Doe wel en zie niet om (do well and don’t look back). Katholieke Illustratie (Catholic Illustrated Journal) 90(32), 47 (1956) Multi-agent Cooperative Argumentation in Arg2P Giuseppe Pisano1(B) , Roberta Calegari2 , and Andrea Omicini2 1 Alma AI – Alma Mater Research Institute for Human-Centered Artificial Intelligence, Alma Mater Studiorum, Università di Bologna, Bologna, Italy [emailprotected] Dipartimento di Informatica – Scienza e Ingegneria (DISI), Alma Mater Studiorum, Università di Bologna, Bologna, Italy {roberta.calegari,andrea.omicini}@unibo.it, http://giuseppepisano.apice.unibo.it, http://robertacalegari.apice.unibo.it, Abstract. This work focuses on cooperative argumentation and conversation in multi-agent systems by introducing an extension of the Arg2P technology that enables parallelisation and distribution of the argumentation process. The computational model and the implementation underpinning the Arg2P technology are presented and discussed. Keywords: Argumentation · Arg2P · Cooperative argumentation Multi-agent systems · Cooperative reasoning Human-centred intelligent systems are densely populated by agents (either software or human) capable of understanding, arguing about, and reporting, via factual assertions and arguments, what is happening and what they could make happen [19]. 
A multi-agent system (MAS) based on argumentation, dialogue, and conversation can then work as the basis for designing human-centred intelligent systems: through argumentation, dialogue, and adherence to social justice, the behaviour of the intelligent system can be reached, shaped, and controlled [1,25], and conflict can be resolved by adopting a cooperative argumentation approach [10]. There, the purpose of multi-agent argumentative dialogues is to let agents reach an agreement on (i) the evaluation of goals and corresponding actions (or plans), and (ii) the adoption of a decentralised strategy for reaching a goal, by allowing agents to refine or revise other agents’ goals and defend one’s proposals. In this scenario, intelligent behaviours are likely to become associated with the capability of arguing about situations as well as the current state and circumstances, by reaching a consensus on what is happening around and what is c The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 A. Dovier et al. (Eds.): AIxIA 2022, LNAI 13796, pp. 140–153, 2023. https://doi.org/10.1007/ Multi-agent Cooperative Argumentation in Arg2P needed, and by triggering and orchestrating proper decentralised semantic conversations so as to determine how to collectively act in order to reach a future desirable state [8]. Thus, argumentation [14] and related technologies become a fundamental building block for the design of these systems, thanks to their potential to be an effective communication medium for heterogeneous intelligent agents while enabling a natural form of interaction between users and computational systems, towards explainability features. However, for argumentation tools to be able to meet the aforementioned expectations, a huge effort is required from a software engineering perspective. The last decades’ continuous improvement in the design and development of technologies for human-centred intelligent systems has not been matched by an analogous improvement of argumentation technologies, where the technological landscape is nowadays populated by very few systems—and most of them are mere prototypes [6]. A key problem in existing argumentation technology is that a widely-acknowledged well-founded computational model for argumentation is currently missing: this makes it difficult to investigate convergence and scalability of argumentation techniques in highly-distributed environments [10,18]. At the same time, the field has seen a constant flow of theoretical contributions [17,20]. Arg2P [9] is a logic-based technology, offering a thorough instantiation of the ASPIC+ framework [21] for structured argumentation. The purpose of this paper is to effectively distribute the argumentation process (evaluation of arguments) so as to enable the exploitation of Arg2P in the context of cooperative argumentation, according to the aforementioned perspective. Accordingly, the work is structured as follows. Section 2 contains a brief introduction to structured argumentation. Section 3 presents the core contribution of this work, i.e., the distribution of the argumentation process and its implementation. Finally, Sect. 4 concludes the work. Background Notion: Structured Argumentation Let us start by defining a generic structured argumentation framework. This introduction has two purposes: (i) to give the reader with no specific knowledge in the formal argumentation field an idea of its main concepts and notions, (ii) to serve as a basis for the analysis contained in subsequent sections. 
For a more complete introduction we invite the reader to consult the vast amount of available literature on the topic [3,4]. We first introduce the notion of argumentation language. In the argumentation language, a literal is either an atomic proposition or its negation.

Notation 1. For any literal φ, its complement is denoted as φ̄. That is, if φ is a proposition p, then φ̄ = ¬p, while if φ is ¬p, then φ̄ is p.

Literals are brought into relation through rules.

Definition 1 (Rules). A defeasible rule r has the form: ρ : φ1, ..., φn ⇒ ψ with 0 ≤ n, and where
– ρ is the unique identifier for r;
– each φ1, ..., φn, ψ is a literal;
– the set {φ1, ..., φn} is denoted by Antecedent(r) and ψ by Consequent(r).

Defeasible rules – denoted by DefRules – are rules that can be defeated by contrary evidence. Pragmatically, a defeasible rule is used to represent defeasible knowledge, i.e., tentative information that may be used if nothing could be posed against it. For the sake of simplicity, we define non-axiom premises via defeasible rules with an empty Antecedent. A theory consists of a set of rules.

Definition 2 (Theory). A defeasible theory is a set Rules ⊆ DefRules.

Arguments are built from defeasible rules. Given a defeasible theory, arguments can be constructed by chaining rules from the theory, as specified in the definition below—cf. [21].

Definition 3 (Argument). An argument A constructed from a defeasible theory Rules is a finite construct of the form: A : A1, ..., An ⇒r φ with 0 ≤ n, where
– r is the top rule of A, denoted by TopRule(A);
– A is the argument's unique identifier;
– Sub(A) denotes the entire set of subarguments of A, i.e., Sub(A) = Sub(A1) ∪ ... ∪ Sub(An) ∪ {A};
– φ is the conclusion of the argument, denoted by Conc(A).

Arguments can be in conflict, according to two kinds of attack: rebutting and undercutting, here defined as in [21].

Definition 4 (Attack). An argument A attacks an argument B (i.e., A is an attacker of B) at B′ ∈ Sub(B) iff A undercuts or rebuts B (at B′), where:
– A undercuts B (at B′) iff Conc(A) = ρ̄, where ρ is the identifier of TopRule(B′);
– A rebuts B (at B′) iff Conc(A) = φ and Conc(B′) = φ̄.

Then, an abstract argumentation framework can be defined by exploiting arguments and attacks.

Definition 5 (Argumentation Framework). An argumentation framework constructed from a defeasible theory T is a tuple ⟨A, ⇝⟩, where A is the set of all arguments constructed from T, and ⇝ is the attack relation over A. The corresponding argumentation graph is a directed graph whose arcs are attacks and whose nodes are arguments.

Notation 2. Given an argumentation framework G = ⟨A, ⇝⟩, we write A_G and ⇝_G to denote the framework's arguments and attacks, respectively.

Given an argumentation framework, we rely on labelling semantics [2,14] to compute the sets of arguments that are accepted or rejected. Accordingly, each argument is associated with one label, which is either IN, OUT, or UND—meaning that the argument is accepted, rejected, or undecided, respectively. Given a labelling for a framework, an IN, OUT, UND labelling for the statements claimed by the arguments in the graph can also be derived.

Distributed Argumentation in Arg2P

Arg2P is a logic-based technology, an easily-deployable argumentation tool built to meet the requirements of intelligent software systems.1 It is built upon 2P-Kt—a reboot of the tuProlog [11,13] project offering a general, extensible, and interoperable ecosystem for logic programming and symbolic AI.
Whereas a complete overview of the features of this specific implementation is out of the scope of this paper, we refer the reader to [7,9,24] for more details. In this section we focus on how to effectively distribute its argumentation process (the evaluation of arguments). A first version of a message-based distributed argumentation algorithm is discussed here as the basic pillar of a computational model for cooperative argumentation in MAS. We ignore issues such as agent autonomy and MAS coordination artefacts [22,23], and focus instead on the distribution issues of cooperative argumentation, which enables agent dialogue and defeasible reasoning in MAS.

The first issue when facing the computational aspects of cooperative argumentation is the parallelisation of the argumentation process. Parallelisation needs to be tackled under two distinct perspectives: (i) the algorithmic perspective and (ii) the data perspective. Under the algorithmic perspective, we divide the argument evaluation (w.r.t. a given semantics) into smaller sub-tasks to be executed in parallel. Under the data perspective, instead, we split the data used by the algorithm—i.e., the argumentation defeasible theory. Action here is therefore at the data level, looking for possible data partitionings on which the argumentation process can be run in parallel.

As a premise, we first introduce the algorithm that served as a starting point in the parallelisation of the argumentation process. Among the available libraries, Arg2P includes a query-based mode, which allows for single-query evaluation according to the selected semantics.2 The feature is accessible in the default instance of the Arg2P framework through the predicate answerQuery(+Goal, -Yes, -No, -Und), which requests the evaluation of the given Goal and returns the set of facts matching the goal, distributed over the three sets IN, OUT, and UND. The algorithm used to evaluate a single claim (or query) according to grounded semantics is inspired by the DeLP dialectical trees evaluation [15]. Listing 1.1 shows the pseudo-code – AnswerQuery(Goal) – for the answerQuery/4 predicate: given a claim (Goal) as input, the function first builds all the arguments sustaining that claim (buildSustainingArguments(Goal)), and then requires their evaluation via the Evaluate(A, Chain) function. In order to assess the status (acceptability or rejection) of A1, ..., An, three conditions are evaluated:

(Cond1) if a conflicting argument labelled as IN exists, then A1 is OUT;
(Cond2) if a cycle in the route from the root to the leaves (Chain) exists, then the A1 argument is UND;
(Cond3) if a conflicting argument labelled as UND exists, then the A1 argument is also UND.

If none of the above conditions is met, then the argument can be accepted.

Listing 1.1. Structured argumentation, Arg2P answer query algorithm for grounded semantics (pseudo-code).

AnswerQuery(Goal):
    A1, ..., An = buildSustainingArguments(Goal)
    Res = ∅
    for A in A1, ..., An:
        Res = Res ∪ Evaluate(A, ∅)
    return Res.

Evaluate(A, Chain):
    if (∃ B ∈ Attacker(A): Evaluate(B, A ∪ Chain) = IN) return OUT
    if (∃ B ∈ Attacker(A): B ∈ Chain) return UND
    if (∃ B ∈ Attacker(A): Evaluate(B, A ∪ Chain) = UND) return UND
    return IN.

1 http://arg2p.apice.unibo.it.
2 At the time of writing, only grounded semantics is fully implemented.

Example 1. Let us consider the following theory and the corresponding arguments (depicted in Fig. 1).
r1 : ⇒ a        A0 : ⇒r1 a
r2 : a ⇒ b      A1 : A0 ⇒r2 b
r3 : ⇒ ¬b       A2 : ⇒r3 ¬b
r4 : b ⇒ c      A3 : A1 ⇒r4 c

According to the grounded semantics, A0 is IN – there are no arguments contending its claim or undercutting its inferences – whereas A1, A2 and A3 are UND—A1 and A2 have opposite conclusions and thus attack each other; the conflict is then propagated to the derived argument A3.

Fig. 1. Argumentation graph for the arguments of Example 1, in which nodes are arguments and edges are attacks between arguments.

Let us suppose we require the evaluation of claim b via the AnswerQuery(Goal) function in Listing 1.1. First, the arguments sustaining b are created, in this case only A1. Then the evaluation conditions on A1's attackers – only A2 in this case – are assessed. However, A2's admissibility depends, in turn, on A1—as can be seen in Fig. 1, A1 also attacks A2. There is a cycle in the graph (Cond2), and no other attackers matching (Cond1). As a consequence, A2 is UND, and so is A1 (Cond3). Accordingly, claim b is labelled UND, as expected.

Let us now consider the algorithm in Listing 1.1 to analyse the requirements and implications of its parallelisation. The algorithm structure is simple: the argument evaluation leverages the evaluations obtained from its attackers—i.e., the attackers are recursively evaluated using the same algorithm and the result is exploited to determine the state of the target argument. Intuitively, a first point of parallelisation can be found in the search and evaluation of the attackers. Indeed, every condition exploited by the algorithm – (Cond1), (Cond2), and (Cond3) – to evaluate an argument requires one and only one attacker to match the constraint. Those conditions directly suggest a parallelisation in the search and evaluation of the attackers: we could evaluate the arguments simultaneously under different branches, and success in one of the branches would lead to the success of the entire search.

However, the algorithm exposes another point of parallelisation. The order in the evaluation of the conditions is essential for the soundness of the algorithm—as illustrated by the following example.

Example 2. Let us consider argument A and its two attackers B and C. Suppose we know B's and C's labelling: IN for the former and UND for the latter. If we do not respect the order dictated by the algorithm, A's labelling is either UND (Cond3) or OUT (Cond1). Of course, the first result would be in contrast with the original grounded semantics requirements, for which every argument having an IN attacker should be definitively OUT. Conversely, if we respect the evaluation order, A's labelling would be OUT in every scenario.

Although the evaluation order is strict, we can evaluate all the conditions simultaneously and consider the ordering only while providing the labelling for the target argument. In other words, the three conditions are evaluated in parallel, but the result is given according to the defined priorities. If (Cond1) is met, the argument is labelled as OUT. Conversely, even if (Cond2) or (Cond3) are met, one should first verify that (Cond1) does not hold; only then can the argument be labelled as UND. Listing 1.2 contains the version of the algorithm taking into account both points of parallelisation. The three conditions – (Cond1), (Cond2) and (Cond3) – are evaluated at the same time. Then the results of the three sub-tasks are combined to provide the final solution according to the conditions' priority.
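The evaluation of Listing 1.1, applied to the theory of Example 1, can be rendered as a short and purely illustrative Python sketch. The names (RULES, build_arguments, evaluate, and so on) are not Arg2P's API; undercutting attacks never arise in this theory, so only rebuts are computed. The three existential checks inside evaluate correspond to (Cond1), (Cond2) and (Cond3), and are exactly the sub-tasks that Listing 1.2 dispatches in parallel.

```python
from itertools import product as iproduct

# Defeasible theory of Example 1 (an empty antecedent encodes a premise).
RULES = {
    "r1": ([], "a"),
    "r2": (["a"], "b"),
    "r3": ([], "neg_b"),
    "r4": (["b"], "c"),
}

def neg(lit):
    return lit[4:] if lit.startswith("neg_") else "neg_" + lit

def sub_combinations(body, arguments):
    """All ways of supporting every antecedent literal with existing arguments."""
    pools = [[a for a in arguments if a["conc"] == lit] for lit in body]
    if any(not pool for pool in pools):
        return []
    return [list(combo) for combo in iproduct(*pools)]

def build_arguments(rules):
    """Chain rules into arguments (Definition 3): each argument records its
    top rule, its direct sub-arguments and its conclusion."""
    arguments, changed = [], True
    while changed:
        changed = False
        for name, (body, head) in rules.items():
            for subs in sub_combinations(body, arguments):
                candidate = {"top": name, "subs": subs, "conc": head}
                if candidate not in arguments:
                    arguments.append(candidate)
                    changed = True
    return arguments

def sub_args(arg):
    out = [arg]
    for s in arg["subs"]:
        out.extend(sub_args(s))
    return out

def attackers(target, arguments):
    """Rebutting attackers (Definition 4): arguments whose conclusion is the
    complement of the conclusion of some subargument of the target."""
    return [a for a in arguments
            if any(a["conc"] == neg(s["conc"]) for s in sub_args(target))]

def evaluate(arg, chain, arguments):
    """Grounded labelling following the three conditions of Listing 1.1;
    attackers already on the evaluation chain are not re-expanded, they
    trigger (Cond2) instead."""
    atts = attackers(arg, arguments)
    fresh = [b for b in atts if b not in chain]
    labels = [evaluate(b, chain + [arg], arguments) for b in fresh]
    if any(lab == "IN" for lab in labels):        # (Cond1)
        return "OUT"
    if any(b in chain for b in atts):             # (Cond2)
        return "UND"
    if any(lab == "UND" for lab in labels):       # (Cond3)
        return "UND"
    return "IN"

def answer_query(claim, rules):
    arguments = build_arguments(rules)
    return {a["conc"]: evaluate(a, [], arguments)
            for a in arguments if a["conc"] == claim}

print(answer_query("b", RULES))   # {'b': 'UND'}, as in Example 1
print(answer_query("a", RULES))   # {'a': 'IN'}
```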
Of course, if we consider a scenario where only the first condition (Cond1) is required to determine the status of the argument in input, the parallel evaluation of all three conditions would lead to a waste of computational resources. However, this problem is easily mitigated by evaluating the sub-task results as soon as they are individually available—i.e., if we receive a positive result from a single sub-task, and it is enough to compute the argument status, we can cut the superfluous computational branches and return the final solution.

Listing 1.2. Evaluate predicate with both parallel conditions evaluation and parallel attackers.

Evaluate(A, Chain):
    PARALLEL {
        Cond1 = PARALLEL { ∃ B ∈ Attacker(A): Evaluate(B, A ∪ Chain) = IN }
        Cond2 = PARALLEL { ∃ B ∈ Attacker(A): B ∈ Chain }
        Cond3 = PARALLEL { ∃ B ∈ Attacker(A): Evaluate(B, A ∪ Chain) = UND }
    }
    if (Cond1) return OUT
    if (Cond2 AND NOT Cond1) return UND
    if (Cond3 AND NOT Cond1) return UND
    if (NOT Cond1 AND NOT Cond2 AND NOT Cond3) return IN

In the first part of our analysis we focused on the parallelisation problem from a purely computational perspective, by discussing whether the evaluation task could be split into a group of sub-tasks to be executed simultaneously. However, there is another perspective to take into account when parallelising: the one concerning the data.

Example 3. For instance, let us consider a job computing the sum and the product of a set of numbers. Using the sub-task approach, we could have two subroutines running in parallel, one computing the sum and the other computing the product of the numbers. However, leveraging the associative property of addition and multiplication, we can also split the problem into a series of tasks computing both sum and product on a subset of the original data. The final result would then be obtained by summing and multiplying the tasks' partial results.

Let us apply the same principle to the argumentation task. We build arguments from a base theory according to the relations illustrated in Sect. 2. The logic theory is, for all intents and purposes, the input data of our algorithm (the argumentation task). Now, the question is whether we can effectively split the data into sub-portions to be evaluated in parallel without affecting the global soundness of the original algorithm. Let us consider a splitting principle based on rule dependency – i.e., if two rules can be chained, they must stay together – and the algorithm in Listing 1.2. According to the algorithm, the search and evaluation of the attackers are performed in a distinct subtask (concurrent evaluation). Then, we can split the knowledge concerning attacked and attacking arguments into separate sets, since the subtasks evaluating an attacker require only the knowledge needed to infer that attacker—i.e., the dependency principle must be respected. Indeed, there is no task that needs to know how to build both an argument and its attackers, since the search is delegated to another process. In other words, a single subprocess in charge of evaluating an argument needs only the portion of the theory needed to infer the argument itself—i.e., the chainable rules concluding the target claim.

3.1 Computational Model: The Master-Slave Actor Model

We can now provide a complete and sound mechanism for the admissibility task in a fully-concurrent way, exploiting the insights from Sect. 3 and applying them to an actor-based model [16].
In short, the actor model is based on a set of computational entities – the actors – communicating with each other through messages. The interaction between actors is the key to computation. Actors are pure reactive entities that, only in response to a message, can: – create new actors; – send messages to other actors; – change their internal state through a predefined behaviour. Actors work in a fully-concurrent way – asynchronous communication and message passing are fundamental to this end – making the actor model suited to concurrent applications and scenarios. We choose this model for its simplicity: it presents very few abstractions making it easy to study both how to model a concurrent system and its properties. The final goal is to provide a sound model for agents’ cooperative argumentation in MAS, enabling concurrent evaluation of the argumentation algorithms (focusing on distribution). The actor paradigm is a straightforward choice for an analysis of this sort. Since the actor model focuses on actors and their communication, the following design will review the structure and behaviour of the actors involved. Although a fully-distributed version of the model is possible, we choose to adopt a master-slave approach in order to simplify the functioning of the system as much as possible. Accordingly, two main sorts of actors are conceived in the system: master and worker. Master actors coordinate the knowledge-base distribution phase, while the workers hold a portion of the theory, concurring with the evaluation of a claim through their interaction. Let us start with the knowledge distribution. Since actors are reactive entities, in order to completely adhere to the actor model the master knowledge base can be changed from outside the actor system. If the master receives the order to add a new element to the theory, three possible scenarios can be configured: 1. none of the workers contains a compatible knowledge base (kb) – i.e., it is not possible to chain the new rule to the knowledge base – and consequently, the master creates a new worker containing the portion of the theory; 2. one or more workers have a compatible knowledge base, and they add the element to their kb; 3. a set of workers possess overlapping knowledge bases – i.e. the union set of workers’ knowledge bases can be used to create a unique inference chain –, and, as a consequence, we merge their knowledge bases and destroy the extra workers; G. Pisano et al. Iterating this procedure for all the elements of an input knowledge base, as a result, we should obtain a set of workers each of them containing a portion of the theory in accordance with the dependency splitting principle. Once the knowledge has been correctly split between workers, we can proceed with the actor-based evaluation of an argument. Each actor is responsible for evaluating those arguments that can be built using his portion of the theory. When the actor receives an evaluation request, it first checks if attackers exist, w.r.t. its portion of the knowledge base. Then, the actor can: (i) register the impossibility to evaluate the argument – only if a cycle through the evaluation chain is detected –, (ii) require the attacker arguments evaluation to all the other actors. In the latter case, the actor shall answer the original evaluation request only after receiving a response from others actors. 
The conditions to match while evaluating an argument are the same as the original algorithm in Listing 1.1: – if one counterargument is admissible, we evaluate the argument as OUT; – if any number of actors decide for the argument undecidability with none advancing its rejection, we mark the argument as UND; – if all the actors agree that no counterarguments can be provided as acceptable, we evaluate the argument as IN; Actors provide their suggestions on the state of the requested argument according to all the labels of their counterarguments. We can describe the interactions between the system’s actors as a sequence diagram (Fig. 2) of messages exchanged between masters and workers, where: – Add, sent from the master to a worker, through which the master sends the new theory member to be stored in the workers’ kb; the decision on which is the right worker to send the data to is the responsibility of the master that knows the entire state of the system and how data has been divided; – RequireEvaluation, sent from outside the system to the master to require the evaluation of a claim; – Eval, sent from the master to all workers to require the evaluation of a claim – FindAttacker, sent from a worker to master to require the broadcasting of a request for counterarguments to all the available workers; – ExpectedResponses, sent from master to a worker to communicate the number of expected responses to a request for counterarguments; – AttackerResponse, sent from a worker to a worker in response to a request for counterarguments; the message contains the state of the counterargument obtained through a new FindAttacker evaluation; – EvalResponse, sent from workers to the master to communicate their decision on a claim; the decision is taken after all the AttackerResponse containing the state of possible counterarguments have been received; – EvaluationResponse, message sent from master containing the system decision on the state of a claim. Note that the Add and RequireEvaluation messages come from outside the actor system and start the distribution and evaluation process. This interaction Multi-agent Cooperative Argumentation in Arg2P Fig. 2. Master-slave interaction for argument evaluation. model implements both the parallelisation strategies described in Listing 1.2: the search for counterarguments is executed concurrently by all the worker nodes, as also the evaluation of the admissibility of arguments. Example 4. Let us consider again the theory in Example 1. Let us assume a single MasterActor and the following order in the inclusion of the rules in the system: r1, r3, r4, r2.3 As for the first three rules, the behaviour is the same. Since the rules are not chainable, it creates three distinct workers and sends a single rule to every one of them via the Add message. We now have Worker 1, Worker 2, and Worker 3 with respectively r1, r3, and r4 in their knowledge bases. Then the inclusion of rule r2 is required, and both workers 1 and 3 results in having a chainable knowledge base. Rule r2 is, in fact, the missing link in the inference chain of r1 and r4. As a consequence, the Master stops the two workers, creates a new one, and then requires to it the inclusion of rules r1, r4, r2 via three Add messages. At the end of the distribution phase, we have two workers, one containing r1, r2, r4, and the other just r3. The dependency principle is thus respected. 
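As a sanity check, the distribution phase just traced can be reproduced with a small sketch. The rule encoding and the chainability test below are simplifying assumptions (two rules are considered chainable when the consequent of one occurs in the antecedent of the other), and the helper names are hypothetical rather than part of the Arg2P implementation.

```python
# Rules of Example 1, in the insertion order of Example 4:
# (identifier, antecedent literals, consequent literal).
RULES = [
    ("r1", [], "a"),
    ("r3", [], "neg_b"),
    ("r4", ["b"], "c"),
    ("r2", ["a"], "b"),
]

def chainable(rule, kb):
    """Assumed test: the rule extends an inference chain of the knowledge
    base when a consequent on one side occurs in an antecedent on the other."""
    _, body, head = rule
    for _, kb_body, kb_head in kb:
        if kb_head in body or head in kb_body:
            return True
    return False

def distribute(rules):
    """Master-side distribution over worker knowledge bases, following the
    three scenarios above: spawn a new worker, add to a compatible worker,
    or merge overlapping workers (adding and merging are handled uniformly)."""
    workers = []                            # each worker is a list of rules
    for rule in rules:
        compatible = [w for w in workers if chainable(rule, w)]
        if not compatible:                  # scenario 1: create a new worker
            workers.append([rule])
        else:                               # scenarios 2 and 3
            merged = [r for w in compatible for r in w] + [rule]
            workers = [w for w in workers if w not in compatible] + [merged]
    return workers

for worker in distribute(RULES):
    print(sorted(name for name, _, _ in worker))
# ['r3']
# ['r1', 'r2', 'r4']
```

With the insertion order of Example 4, the sketch ends up with exactly the two workers described above, one holding r1, r2, r4 and the other holding r3.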
Going on with the example, we require the evaluation of claim b via the RequireEvaluation message: so, the Master sends an Eval 3 The order of inclusion affects the steps required to converge, not the final state of the system. G. Pisano et al. message to all the actors. Worker 1 succeeds in building an argument (A1) and sends to all the other Workers – also Worker 1 is included in the list – a FindAttacker message requiring attackers evaluation—the broadcasting of the message is done by the Master actor. The master also communicates the number of responses that are expected (ExpectedResponses message)—only two in that case. Worker 1 answers with a AttackerResponse communicating that there are no attacking arguments according to its knowledge, while Worker 2 sends back a AttackerResponse with an Und result. Indeed, Worker 2 is able to create a valid counterargument (A2), but a cycle is detected in the inference chain. According to the evaluation algorithm, receiving an Und response, Worker 1 can finally label A1 as UND and let the master know that via a EvalResponse message. 3.2 Implementation: The Parallel Library The model in Subsect. 3.1 has been implemented as a library – the Parallel library – for the Arg2P framework.4 The goal of the implementation is twofold: (i) providing a mechanism for the concurrent evaluation of a claim by a single Arg2P instance – actors in execution on a single machine can achieve real parallelisation thanks to multicore hardware architectures – (ii) enabling cooperative argumentation by allowing different Arg2P instances to create a single actor system, thus sharing their knowledge base or their hardware resources. Among the available technologies for the implementation, we selected Akka.5 [12] Akka is an open source middleware for programming concurrent and distributed actor systems based on the original Actor model by Hewitt [16]. Built upon the JVM platform, the framework offers an easy way of deploying network distributed systems observant of the original actor principles—e.g. reactivity, asynchronous communications, and absence of states of shared memory between actors. All these features made the Akka framework one of the reference technologies in the distributed landscape. The final implementation makes use of the Akka Clustering features to enable the collaboration of different Arg2P instances. In particular, we rely on Cluster Singletons 6 to handle the Master actor lifecycle, and Cluster Sharding 7 for Worker nodes. The Parallel library makes available five directives: – join(Port), requesting the creation of an actor system on the local machine exposed on port Port; – join(Port, Address), to join an actor system on the machine at the given Address, exposed on port Port; – load, requesting the distribution of the rules contained in the knowledge base of the local instance between all the members of the actor systems; – reset, requesting the deletion of the data previously distributed in the actor system via the load directive; 4 5 6 7 Sources available at https://github.com/tuProlog/arg2p-kt. https://akka.io/. https://doc.akka.io/docs/akka/current/typed/cluster-singleton.html. https://doc.akka.io/docs/akka/current/typed/ Multi-agent Cooperative Argumentation in Arg2P – solve(Goal, In, Out, Und), requesting the evaluation of the Goal claim to the actor system according to the procedure in Fig. 2. Results are the set of facts matching the goal distributed in the three sets IN, OUT, and UND. 
All the application scenarios can be modelled by using the directives above. We achieve a parallel evaluation of a claim on a single Arg2P instance in three steps: (i) creating a local actor system (join(Port)), (ii) distributing the theory between local actors (load), (iii) requiring the evaluation of a statement through the solve(Goal, In, Out, Und) directive. At the same time we could have others Arg2P instances offering their hardware resources (join(Port, Address)) or also participating in the resolution if they share their own knowledge (load). In this work, given the relevance of issues such as pervasiveness and interconnection in the current technological landscape, we address the problem of distribution of the argumentation workload. We follow some insights from [5] and [22,23]. In [5] the first proposal of a tuProlog-based is presented that exploits a dialogical argumentation mechanism—i.e., argumentation is performed across multiple processes proposing arguments and counterarguments. However, the argumentation algorithm distribution has not been addressed there. Conversely, in [22,23] authors directly address the problem of enabling argumentation techniques in MAS. Nonetheless, their approach just depicts a general-purpose architectural solution for the multi-party argumentation problem in the MAS context, providing for neither an actual technology nor a precise model for the distribution and parallelisation of the argumentation process. Overall, we believe that our approach is a step forward in the direction of a full argumentation-based MAS, and more in general of the diffusion of argumentation theories as a solid foundation for the engineering of complex intelligent systems. Yet, many issues are still to be considered. We should provide a complete analysis of the computational properties of the presented model – e.g., correctness, completeness, termination –, and also consider its relation with alternative distribution schemes (e.g., peer-to-peer). Moreover, an empirical evaluation of the performance of the system compared to traditional solvers should also be provided. Another topic of future investigations is the extension to different argumentation semantics. The main difference would be in the labelling conditions used to classify the arguments according to the different semantics. Moreover, a branching mechanism to allow the coexistence of multiple labellings should be devised in order to support the semantics with multiple extensions. However, most of the ideas behind the presented model should still remain applicable. Acknowledgements. This work was supported by the H2020 ERC Project “CompuLaw” (G.A. 833647). G. Pisano et al. References 1. Andrighetto, G., Governatori, G., Noriega, P., van der Torre, L.W.: Normative multi-agent systems, Dagstuhl Follow-Ups, vol. 4. Schloss Dagstuhl-LeibnizZentrum fuer Informatik (2013). http://www.dagstuhl.de/dagpub/978-3-93989751-4 2. Baroni, P., Caminada, M., Giacomin, M.: An introduction to argumentation semantics. Knowl. Eng. Rev. 26(4), 365–410 (2011). https://doi.org/10.1017/ S0269888911000166 3. Baroni, P., Gabbay, D., Giacomin, M., van der Torre, L.: Handbook of Formal Argumentation. College Publications, London (2018). https://www. collegepublications.co.uk/handbooks/? 00003 4. Besnard, P., et al.: Introduction to structured argumentation. Argument Comput. 5(1), 1–4 (2014). https://doi.org/10.1080/19462166.2013.869764 5. 
Bryant, D., Krause, P.J., Vreeswijk, G.: Argue tuProlog: a lightweight argumentation engine for agent applications. In: Computational Models of Argument. Frontiers in Artificial Intelligence and Applications, vol. 144, pp. 27–32. IOS Press (2006). https://ebooks.iospress.nl/publication/2929 6. Calegari, R., Contissa, G., Lagioia, F., Omicini, A., Sartor, G.: Defeasible systems in legal reasoning: a comparative assessment. In: Araszkiewicz, M., RodríguezDoncel, V. (eds.) Legal Knowledge and Information Systems, JURIX 2019: The Thirty-second Annual Conference, Frontiers in Artificial Intelligence and Applications, vol. 322, pp. 169–174. IOS Press (2019). https://doi.org/10.3233/ FAIA190320 7. Calegari, R., Contissa, G., Pisano, G., Sartor, G., Sartor, G.: Arg-tuProlog: a modular logic argumentation tool for PIL. In: Villata, S., Harašta, J., Křemen, P. (eds.) Legal Knowledge and Information Systems, JURIX 2020: The Thirty-third Annual Conference. Frontiers in Artificial Intelligence and Applications, vol. 334, pp. 265–268 (2020). https://doi.org/10.3233/FAIA200880 8. Calegari, R., Omicini, A., Sartor, G.: Computable law as argumentation-based MAS. In: Calegari, R., Ciatto, G., Denti, E., Omicini, A., Sartor, G. (eds.) WOA 2020–21st Workshop “From Objects to Agents”. CEUR Workshop Proceedings, vol. 2706, pp. 54–68. Sun SITE Central Europe, RWTH Aachen University, Aachen, Germany (2020). http:// ceur-ws.org/Vol-2706/paper10.pdf, 21st Workshop “From Objects to Agents” (WOA 2020), Bologna, Italy, 14–16 September 2020. Proceedings 9. Calegari, R., Pisano, G., Omicini, A., Sartor, G.: Arg2P: an argumentation framework for explainable intelligent systems. J. Logic Comput. 32(2), 369–401 (2022). https://doi.org/10.1093/logcom/exab089, Special Issue from the 35th Italian Conference on Computational Logic (CILC 2020) 10. Carrera, Á., Iglesias, C.A.: A systematic review of argumentation techniques for multi-agent systems research. Artif. Intell. Rev. 44(4), 509–535 (2015). https:// doi.org/10.1007/s10462-015-9435-9 11. Ciatto, G., Calegari, R., Omicini, A.: 2P- KT: a logic-based ecosystem for symbolic AI. SoftwareX 16(100817), 1–7 (2021). https://doi.org/10.1016/j.softx.2021. 100817 12. Cossentino, M., Lopes, S., Nuzzo, A., Renda, G., Sabatucci, L.: A comparison of the basic principles and behavioural aspects of Akka, JaCaMo and Jade development frameworks. In: Proceedings of the 19th Workshop “From Objects to Agents”. CEUR Workshop Proceedings, vol. 2215, pp. 133–141. CEUR-WS.org (2018). http://ceur-ws.org/Vol-2215/paper_21.pdf Multi-agent Cooperative Argumentation in Arg2P 13. Denti, E., Omicini, A., Ricci, A.: Multi-paradigm Java-Prolog integration in tuProlog. Sci. Comput. Program. 57(2), 217–250 (2005). https://doi.org/10.1016/j.scico. 2005.02.001 14. Dung, P.M.: On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artif. Intell. 77(2), 321–358 (1995). https://doi.org/10.1016/0004-3702(94) 00041-X 15. García, A.J., Simari, G.R.: Defeasible logic programming: an argumentative approach. Theory Pract. Logic Program. 4(1–2), 95–138 (2004). https://doi.org/10. 1017/S1471068403001674 16. Hewitt, C., Bishop, P.B., Steiger, R.: A universal modular ACTOR formalism for artificial intelligence. In: 3rd International Joint Conference on Artificial Intelligence, pp. 235–245. William Kaufmann (1973). http://ijcai.org/Proceedings/73/ Papers/027B.pdf 17. 
Hulstijn, J., van der Torre, L.W.: Combining goal generation and planning in an argumentation framework. In: International Workshop on Non-monotonic Reasoning (NMR 2004), pp. 212–218 (2004). https://www.pims.math.ca/science/2004/ NMR/papers/paper28.pdf 18. Jung, H., Tambe, M., Kulkarni, S.: Argumentation as distributed constraint satisfaction: applications and results. In: 5th International Conference on Autonomous Agents (Agents 2001), pp. 324–331 (2001). https://doi.org/10.1145/375735.376322 19. Krippendorff, K.: Intrinsic motivation and human-centred design. Theor. Issues Ergon. Sci. 5(1), 43–72 (2004). https://doi.org/10.1080/1463922031000086717 20. Modgil, S., Caminada, M.: Proof theories and algorithms for abstract argumentation frameworks. In: Simari, G., Rahwan, I. (eds.) Argumentation in Artificial Intelligence, pp. 105–129. Springer, Heidelberg (2009). https://doi.org/10.1007/ 978-0-387-98197-0_6 21. Modgil, S., Prakken, H.: The ASPIC+ framework for structured argumentation: a tutorial. Argument Comput. 5(1), 31–62 (2014). https://doi.org/10.1080/ 19462166.2013.869766 22. Oliva, E., McBurney, P., Omicini, A.: Co-argumentation artifact for agent societies. In: Rahwan, I., Parsons, S., Reed, C. (eds.) ArgMAS 2007. LNCS (LNAI), vol. 4946, pp. 31–46. Springer, Heidelberg (2008). https://doi.org/ 10.1007/978-3-540-789154_3 23. Oliva, E., Viroli, M., Omicini, A., McBurney, P.: Argumentation and artifact for dialogue support. In: Rahwan, I., Moraitis, P. (eds.) ArgMAS 2008. LNCS (LNAI), vol. 5384, pp. 107–121. Springer, Heidelberg (2009). https://doi.org/10.1007/9783-642-00207-6_7 24. Pisano, G., Calegari, R., Omicini, A., Sartor, G.: A mechanism for reasoning over defeasible preferences in Arg2P. In: Monica, S., Bergenti, F. (eds.) CILC 2021 Italian Conference on Computational Logic. Proceedings of the 36th Italian Conference on Computational Logic. CEUR Workshop Proceedings, Parma, Italy, vol. 3002, pp. 16–30. CEUR-WS (2021). http://ceur-ws.org/Vol-3002/paper10.pdf 25. Vasconcelos, W.W., Sabater, J., Sierra, C., Querol, J.: Skeleton-based agent development for electronic institutions. In: 1st International Joint Conference on Autonomous Agents and Multiagent Systems: Part 2 (AAMAS 2002), pp. 696–703. ACM, New York (2002). https://doi.org/10.1145/544862.544911 Ethics by Design for Intelligent and Sustainable Adaptive Systems Luca Squadrone, Danilo Croce(B) , and Roberto Basili(B) Department of Enterprise Engineering, University of Roma, Tor Vergata, Via del Politecnico 1, 00133 Rome, Italy {croce,basili}@info.uniroma2.it Abstract. AI systems are increasingly dependent on the data and information sources they are developed with. In particular, learning machines are highly exposed to undesirable problems due to biased and incomplete coverage of training data. The autonomy exhibited by machines trained on low-quality data raises an ethical concern, as it may infringe on social rules and security constraints. In this paper, we extensively experiment with a learning framework, called Ethics by Design, which aims to ensure a supervised learning policy that can pursue both the satisfaction of ethical constraints and the optimization of task (i.e., business) accuracy. The results obtained on tasks and datasets confirm the positive impact of the method in ensuring ethical compliance. This paves the way for a large set of industrial applications, whose ethical dimension is critical to increasing the trustworthiness with respect to this technology. 
Keywords: Ethical issues of AI · Ethics by design in machine learning · Bias in deep learning · Empirical evaluation of ethical AI systems Machine learning applications are experiencing exponential growth and are now being implemented in high-risk ethical scenarios, such as lending, hiring, or legal decision support [22]. The clear advantages of using machine learning algorithms include the ability to quickly and accurately analyze large amounts of data. However, this paves the way for algorithms to generate discriminatory predictions against individuals or social groups [1,2,6,23]1 , as per the bias inherent in the way historical data are collected. Consider, for example, COMPAS, a system used as a support tool by judges to predict a defendant’s risk of recidivism. African American defendants have 1 The articles for [2] and [23] can be found respectively at https://www.propublica. org/article/machine-bias-risk-assessments-in-criminal-sentencing and https:// www.aclu.org/blog/privacy-technology/ c The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 A. Dovier et al. (Eds.): AIxIA 2022, LNAI 13796, pp. 154–167, 2023. https://doi.org/10.1007/978-3-031-27181-6_11 Ethics by Design for Intelligent and Sustainable Adaptive Systems been found to be exposed to a higher risk of recidivism than Caucasian defendants due to unbalanced representation in historical data. This is further evidenced by recent studies [1,2,6,12,16,23]2 which have shown how machine learning algorithms may emphasize human factors such as prejudices, cliches, and errors of assessment. Since the algorithms are based on mathematical and statistical principles, they cannot independently recognize the ethical values related to the fair treatment of races or genders. This negatively impacts the trust with respect to this class of methods, especially in critical scenarios. Therefore, it becomes critically important that machines are somehow able to make data-driven decisions aligned with human values and expectations, in order to avoid the risk of dangerous drifts in terms of ethics and human values. The framework proposed in [21], while extending the generic applications of AI, focuses primarily on learning ethical behavior by numerical optimization, that is, through a deep neural model. The core idea is to model ethics as automated reasoning over formal descriptions of fair decisions, e.g., ontologies, but making it available during the learning stage. Note that this approach does not induce a set of ethical rules from a set of observable behaviors, but rather does the opposite. This approach takes for granted an explicit formulation of ethical principles (as done, for example, in earlier work [5,24]) and focuses on a form of ethical learning as external alignment (learning from others, [15]). It uses evidence inferred from an ethical ontology to guide model selection during the training process. The resulting deep neural network jointly models the functional and ethical conditions that characterize the underlying decision-making process. In this way, the discovery of latent ethical knowledge (that is, information hidden in the data that is meaningful from an ethical perspective) is enabled and made available to the learning process. Instead of relying on simulation to proceed in ethical decisions [24], the adopted framework integrates the acquisition of high-quality inference abilities that simultaneously reflect ethical expectations. 
In other words, the learning machine is expected to select the “best decision” among those that are also ethically sustainable. In this work, we test the beneficial impact of the above Ethics by Design technology3 on five well-known datasets by (1) adopting ethical principles, that allow the ethical encoding of original instances into a space corresponding to ethical properties, and (2) by reformulating the learning function to favor decisions that better balance operational (i.e., business) efficiency and ethical compliance. The proposed experiments adopt ethical principles in form of task-specific ethical rules that constrain the learning algorithm through the definition of dedicated preferences, the so-called truth-makers, as in [21]. We measured the impact of the Ethics by Design approach by showing the effectiveness of parameterization and “tweaking” of ethical constraint weights. 2 The study referred by [12] is available at: https://www.bloomberg.com/graphics/ 2016-amazon-same-day/. The code is made available at: https://github.com/crux82/nn-ebd. L. Squadrone et al. As a result, we show that in all data sets, i.e., tasks, ethical conditions, and domains, a large improvement in ethical behavior (lower ethical risks) can be achieved at the cost of a small reduction in accuracy. In the remainder of the article, ethical issues in example-based machine learning approaches are first presented in Sect. 2. Section 3 summarizes the Ethics by Design approach, as a neural architecture that applies ethical constraints during the training process. In Sect. 4, experimental results are reported. Finally, in Sect. 5 the conclusions are drawn. Ethics in Inductive Decision Systems Ethics in Different Application Scenarios Regardless of their effectiveness, ethical concerns are raised about the autonomy exhibited by machines trained on (possibly limited) data and their potential to violate social rules and security constraints. A first example involves Amazon’s recruitment algorithm, which is used to automatically screen candidates’ curricula during the selection process. As indicated by the Business Insider report4 , this algorithm was found discriminatory against women, particularly in professions requiring technological skills. This bias was introduced by the data (i.e., real curricula) used in training: these were mostly related to male candidates, so the algorithm overweighted the contribution of candidate gender-related characteristics. In [6], the output of facial recognition algorithms released to the market by three major tech companies showed a significant racial and gender bias: these methods had very low error rates (never more than 0.8%) in determining the sex of light-skinned men, but when applied to dark-skinned women this increased to ranges of 20% and 34%. In automatic recommendation, the analysis presented in [1] suggests that the algorithm adopted by Facebook for recommendation also applies racial and gender biases when offering ads to more than two billion users, based on their demographic information. Similar issues are surveyed in [23]. As a consequence, growing attention is paid to the analysis of “sensitive features” (e.g., gender, ethnicity, and age) to identify and limit undesirable effects of bias, discrimination, or prejudice, as surveyed in [8]. Several studies have shown that the definition and acquisition of a dataset affected by (any kind of) bias significantly affect the quality of a data-driven method trained on it, as discussed below. 
The COMPAS 5 (Correctional Offender Management Profiling for Alternative Sanctions) dataset discussed in [20] was released by ProPublica in 2016 based on the Broward County data. It assigns people a recidivism risk score that is computed using the defendant’s responses to the COMPAS screening survey. This dataset is generally used to train machine learning algorithms that predict 4 www.businessinsider.com/amazon-built-ai-to-hire-people-discriminated-againstwomen-2018-10. https://github.com/propublica/compas-analysis. Ethics by Design for Intelligent and Sustainable Adaptive Systems if an individual will be arrested again within two years after the first arrest. According to ProPublica’s analysis [2], African Americans are more likely than Caucasians to be mislabeled as being at higher risk. The German credit dataset 6 is defined to represent bank account holders and it is used in automatic risk assessment prediction, that is, to determine whether or not it is risky to extend credit to a person. The potential ethical risk of deriving a data-driven model that makes it difficult to lend to women, youth, or foreign workers is generally discussed, as in [20]. The Adult dataset 7 was derived from U.S. Census data in 1994. It includes attributes describing social information about registered citizens (in terms of age, race, sex, or marital status) and is generally used to determine whether a person’s annual income exceeds 50, 000 US dollars. As discussed in [20], this dataset is subject to bias, as automatic classifiers generally overweight information about the sex and race of the individuals being considered. The Default Credit Card Clients 8 dataset investigated the customers’ default payments and contains payment information, demographics, credit data, and payment history. The goal is to predict whether or not a client will default in the next month. However, as suggested by [20], women are penalized compared to men. The Law School Dataset 9 is defined after the survey conducted by the Law School Admission Council (LSAC) across 163 law schools in the United States. The dataset contains the law school admission records and it is generally used to predict whether a candidate would pass the bar exam or to predict a student’s first-year average grade (FYA). As discussed in [20], this prediction is generally biased by features like the gender or the race of the candidates. 2.2 Computational Methods for Fair Inductive Systems When training machine learning methods over potentially unfair datasets, much of the discussion focuses on various solutions to reduce “bias” in algorithms, such as modifying training data or diversifying data sources to reduce disparities between groups [10,11]. However, research such as [17]10 suggests that such approaches may fail when it is difficult to isolate protected attributes from data. As extensively discussed in [8,19] and [18], methods to reduce bias effects fall under three categories: pre-processing, in-processing, and post-processing algorithms. Pre-processing methods manipulate the training dataset before training a model, under the assumption that changing input data can prevent the insurgence of undesirable effects. In-processing methods modify the learning machine 6 7 8 9 10 https://archive.ics.uci.edu/ml/datasets/statlog+(german+credit+data). https://archive.ics.uci.edu/ml/datasets/adult. https://archive.ics.uci.edu/ml/datasets/default+of+credit+card+clients. https:// storage.googleapis.com/lawschool dataset/bar pass prediction.csv. 
The study referred by [17] can be found at: https://hai.stanford.edu/sites/default/ files/2021-12/ Policy%20Brief%20-%20Risks%20of%20AI%20Race%20Detection %20in%20the%20Medical%20System.pdf. L. Squadrone et al. itself, while post-processing techniques modify the decisions made by a given machine. An example of a pre-processing approach is presented in [13], where a classification model is learned on biased training data but works impartially for future data: a ranking function is learned to detect biased data that must be “sanitized”, to learn a non-discriminatory model. In [10] a different approach is defined to modify each attribute, so that the marginal distributions based on subsets of that attribute characterized by a given sensitive value are all the same, without changing the target training labels. On the other hand, [7] and [11] present post-processing methods. Rather than changing a training pipeline, they propose post-processing frameworks for measuring and removing discrimination based on “protected” attributes. In-processing methods do not directly process input/output data, but instead, extend a machine learning algorithm so that (in addition to the originally targeted task) they also consider one or more additional tasks reflecting some sort of ethical principles. Such an extension is generally based on regularization, constraint optimization, or adversarial learning techniques. In [3,4,9,14], authors outline a general framework for empirical risk minimization under fairness constraints, such as the introduction of specific regularizers. Regardless of the type of approach used among those discussed so far, the goal is always to minimize the negative effect of sensitive variables during the training process. In practice, adding constraints generally results in a trade-off: a fairer algorithm at the cost of small drops in accuracy in the original problem. Inspired by [21] and [13], we are interested here in methods that allow controlling this trade-off between system performance (in terms of accuracy on the target task) and ethics. We extensively investigate the method in [21], a neural framework that allows us (i ) to directly control the trade-off between accuracy and ethical principles and (ii ) to explicitly define these principles in terms of truth-makers, described hereafter. 2.3 Ethics, Principles and Truth-Maker s Ethical approaches to data-driven applications aim at minimizing the undesirable effects that learning processes may introduce on the acceptability of the resulting decisions. This “ethical acceptability” is often related to principles that establish norms over the decisions. Violations of principles correspond to an imbalance in the treatment of equal rights among individuals, i.e., ethical risks, or missed opportunities for individuals or social groups, i.e., reduced benefits. The idea in [21] is to introduce the notion of a truth-maker as a model for representing ethical risks and benefits and exploiting them during the training process of a learning algorithm. We promote an ethical approach by assuming that reasoning over ethical ontologies is carried out through principles that apply as truth-makers. As an example, the application of an ethical principle such as “All minorities must be protected and ensured by equal rights” to an inductive classification method C is based on training datasets where some social groups, e.g., women, Ethics by Design for Intelligent and Sustainable Adaptive Systems are somehow disadvantaged. 
Women might be discriminated against by C, such as when they take out a loan: an ethical constraint might be in this case to assume an ethical advantage when the loan is given to a woman. From a computational perspective determining such an advantage requires some explicit “rules”, that work as a constraint for the learning process without any manipulation of the training data. In this way, two aspects are optimized: on the one side, the quality of future decisions should reflect the past ones (as we usually do by optimizing accuracy), and, on the other end, they should also satisfy ethical compliance, i.e., work on minimizing ethical risks and maximizing any potential ethical benefit. In the ProPublica case, as the COMPAS dataset suggests, African Americans are more often mislabeled as being at higher risk than Caucasians. An ethical principle that may be used against this potentially unfair situation could be expressed as “Safeguard the minority of African Americans from discriminatory classification can be achieved by avoiding severe decisions when identifying them as repeat offenders.” This principle suggests a constraint favoring situations in which it is particularly beneficial to protect a minority, such as African Americans. At the same time, decisions about African Americans being repeat offenders are also risky, because of the community’s potential social characterization. In fact, the COMPAS dataset contains the variable race (expressing if the individual is African American or Caucasian), which seems to suggest that African Americans are positively correlated with the repeat offender class on average: however, race should not be linked to such bias and the following principle can be used to counterbalance this trend:“We expect there is a substantial benefit and low risk in classifying an African American as a non-repeat offender”. Rules used to summarize the above principles sentences can be derived for “NON-Repeat offender” decisions: – the Benefit in categorizing an African American as a NON-repeat offender is high; – the Risk in classifying an African American as a NON-repeat offender is low ; as well as “Repeat offender” decisions: – the Benefit in classifying an African American as a repeat offender is very low ; – the Risk in classifying an African American as a repeat offender is very high. The above rules are typical examples of truth-makers, as constraints on the decisions about recidivism based on the race feature. Notice that the adjective low, very high or high are vague but can be easily translated into fuzzy sets, as subjective models of the two meta-variables expressing the Risk and Benefit of any individual decision, as formalized in [21]. Formalizing Principles, Rules and Truth-Makers The core of the adopted approach [21] is to model ethics via automated reasoning over formal descriptions, e.g., ontologies, but by making it available during the L. Squadrone et al. learning stage. We suggest the explicit formulation of ethical principles through truth-makers and let the resulting ethical evidence to guide the model selection of deep learning architecture. This network will jointly model causal as well as ethical conditions that characterize optimal decision-making. 
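To make the rules above concrete, the sketch below encodes them as a small lookup table mapping a (race, decision) pair to fuzzy Benefit and Risk labels, using the five-value scale adopted in [21]. For readability it collapses the probability distributions introduced in the formalisation that follows into single point values; the data structure and function names are illustrative choices, not the representation used in the original implementation.

```python
# Fuzzy labels and the numerical values they are mapped to (as in [21]).
FUZZY = {"very low": 0.1, "low": 0.25, "mild": 0.5, "high": 0.75, "very high": 0.9}

# Truth-maker rules for the COMPAS principle discussed above:
# (race, decision) -> (Benefit label, Risk label).
COMPAS_RULES = {
    ("African-American", "NON-repeat offender"): ("high", "low"),
    ("African-American", "repeat offender"): ("very low", "very high"),
}

def ethical_profile(race, decision):
    """Numeric (benefit, risk) for a decision, or None when no rule is triggered
    (in [21] a uniform distribution over the values is used in that case)."""
    labels = COMPAS_RULES.get((race, decision))
    if labels is None:
        return None
    benefit, risk = labels
    return FUZZY[benefit], FUZZY[risk]

print(ethical_profile("African-American", "repeat offender"))  # (0.1, 0.9)
print(ethical_profile("Caucasian", "repeat offender"))         # None
```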
In this way, rules (i.e., truth-makers) are used to estimate the risks and benefits connected to training cases; then the discovery of latent ethical knowledge, i.e., hidden information in the data that is meaningful from an ethical perspective, is carried out; finally, this evidence is made available when learning the target decision function. This framework results in a learning machine able to select the best decisions among those that are also ethically sustainable. As already exemplified, abstract ethical principles can be enforced through Ethical Rules that constrain individual features (e.g., gender or race) and determine the degree of ethicality of decisions. Ethical Rules usually depend on one or more features and assign values (or better, establish probability distributions) over the domains of some features. These rules are termed truth-makers (TM), as they account for the possibly uncertain ethical state of the world determined by decisions over individual instances. Truth-makers are thus rules of an ethical ontology EO that actively determine the ethical profile of a decision d(i) over an input instance i, e.g., an individual associated with the repeat offender class in the COMPAS dataset. In particular, given a pair ⟨i, d(i)⟩, a truth-maker tm determines a probability distribution over the set of ethical benefit and ethical risk dimensions. For every tm, ethical dimension e_j(i) and possible ethical value v_k ∈ V, e.g. low or high risk^11, the following probability is defined:

  P(e_j(i) = v_k | ⟨i, d(i)⟩, tm)    ∀j, ∀k = 1, . . . , 5

which expresses the evaluation of the truth-maker tm on the representation of an instance i. Here d(i) denotes the decision over the i-th instance and v_k is the k-th value of the j-th ethical dimension (constrained by the truth-maker). A truth-maker thus assigns probabilities to the ethical signature of an individual i for all possible combinations of business characteristics i and decisions d(i); if no truth-maker is triggered by an instance, the uniform distribution u is used over the values v_k and the different ethical features, i.e., P(e_j(i) = v_k | ⟨i, d(i)⟩, tm) = 1/m, ∀j, k. Multiple truth-makers can contribute to a given ethical feature e_j(i) by individually biasing its overall probability P(e_j(i)). When all truth-makers are fired, the resulting ethical signature es(i) over an instance i and its decision d(i) consists, ∀j, k, of:

  es_j(i) = Σ_tm P(tm | EO) · P(e_j(i) = v_k | ⟨i, d(i)⟩, tm)

^11 Consistently with [21], for both benefits and risks we fixed m = 5 and limited values to the [0, 1] range. The following five labels can be adopted: {"very low", "low", "mild", "high", "very high"}, corresponding to the numerical values v_1 = 0.1, v_2 = 0.25, v_3 = 0.5, v_4 = 0.75 and v_5 = 0.9.

The Deep Network Architecture. The network consists of several components, each trained under different constraints expressed by specific loss functions (Fig. 1). In the first component, an Ethics Encoding network is responsible for learning the combinations of input features that capture possible relationships between business observations and (desired or undesired) ethical consequences: this network acts as an encoder of ethical consequences (i.e., further features) for each instance i.
A second component includes two networks: a Business Expert and an Ethics Expert, whose roles are, respectively, to independently estimate the distributions of suitable business decisions and to predict their ethical consequences. In the final component, an Ethical-aware Deep Neural Network (DNN) is responsible for estimating the joint probability of the possible triplets (decision, benefit, risk), which determines the risks and benefits associated with individual decisions for each instance.

Fig. 1. Network architecture proposed in [21]

This last component produces the final business decision of the network by applying a certain decision policy over risks and opportunities. Different policies are possible: from rejecting all decisions that are not ethically adequate (above thresholds imposed on the probability of risks and benefits) to selecting other specific trade-offs between business accuracy and ethical compliance. Policies are designed as different loss functions used to train the specialized sub-networks, i.e., the Business Expert and the Ethics Expert. This architecture formulation thus allows emphasizing the contribution of each triplet in the probability estimation through a factor β (the exponential tweaking factor in [21]): in this way, we can train the overall network by balancing business accuracy and ethical compliance. Emphasis on ethical consequences can be achieved by amplifying the ethics constraints, i.e., by tweaking β toward larger values. Notice that training data usually provide discrete (i.e., crisp) business decisions, which do not give rise to any uncertainty. However, these are not guaranteed to be ethical decisions. Introducing probability distributions over all possible outcomes and smoothing them towards the non-gold decisions allows us to disregard unethical cases and reserve some probability for decisions d_i different from the gold-standard ones. Several policies exist to derive the final decisions: in this work, the decision is derived only from the probability triplets that respect the ethical constraints. For more details, refer to [21].

Evaluating Ethical Compliance in Inductive Learning

The effectiveness of the investigated method is evaluated using five well-known datasets, always preserving the architecture across them, while defining task-specific truth-makers reflecting different ethical principles. To verify the effectiveness of the "tweaking" parameter in controlling the trade-off between task-specific accuracy and sensitivity to ethical principles, we systematically measure the system over the range β ∈ {0.001, 0.03, 0.05, 0.07, 0.1, 0.12, 0.14}, where higher values of β correspond to more influential ethical losses during training. Each dataset is divided into three parts: a test set (10% of the data); the remaining 90% is further split into a validation set (10%) and a training set (90%). To assess whether or not the decisions made by our model also respect the ethical ontology in use, a measure, namely Ethical Compliance (EthCompl), is computed as D⁺/(D⁺ + D⁻), where D⁺ represents the number of ethically compliant instances and D⁻ the number of non-compliant ones. Finally, as in [25], we adopted disparate mistreatment to measure the change in bias. A decision-making process suffers from disparate mistreatment with respect to a given sensitive attribute (e.g., race) if the misclassification rates differ for groups of people having different values of that sensitive attribute (e.g., African Americans vs. Caucasians).
The disparate mistreatment incurred by a classifier is quantified by the following equations:

  DFPR = P(ŷ ≠ y | z = 0, y = −1) − P(ŷ ≠ y | z = 1, y = −1)
  DFNR = P(ŷ ≠ y | z = 0, y = 1) − P(ŷ ≠ y | z = 1, y = 1)

where the closer the values of DFPR and DFNR are to 0, the lower the degree of disparate mistreatment.

4.1 Use Cases

We now describe the different investigated datasets, emphasizing the targeted sensitive features and the adopted truth-makers.

The COMPAS Use Case. We selected the subset of instances completely defined in COMPAS, obtaining 6,908 samples. The target variable recid indicates whether a defendant committed a crime in the two years after being scored. The definition of the truth-maker focused on the sensitive attribute race: it assigns a high benefit in classifying African Americans as not recidivists and a high risk in classifying them as recidivists, while not acting on the other subpopulations, such as Caucasians (no benefits or risks assigned to the other subpopulations).

The German Credit Use Case. This dataset contains 1,000 examples, each described by 21 attributes, where the target variable default indicates good or bad customers. The truth-maker focused on the sex attribute (derived from personal-status-and-sex), assigning a high benefit in classifying females as good customers and a high risk in classifying them as bad customers, while not acting on males.

The Adult Use Case. The Adult dataset consists of 48,842 instances, each described via 15 attributes. The target boolean variable y indicates whether the annual income of a person exceeds 50,000 US dollars. The truth-maker focused on the sex attribute, assigning low benefits in classifying females as "under 50,000 US dollars" and low risk in classifying them as "over 50,000 US dollars", while not acting on males.

The Default Credit Card Use Case. The dataset includes 30,000 customers described by 24 attributes. The target variable default indicates whether a customer will suffer a default payment in the next month (1) or not (0). The truth-maker focused on the sex attribute, assigning high benefits in classifying males as "NOT default" and a high risk in classifying them as "default", while not acting on females.

The Law School Use Case. The Law School dataset has 26,553 instances, where the target variable pass bar indicates whether a person passes the bar exam. The truth-maker focused on the race attribute, assigning low benefits in classifying other races as "NOT passed the exam" and low risk in classifying them as "passed", while not acting on white candidates.

4.2 Discussion of the Results

Table 1 reports the experimental results. Cross-validation has been applied to study the behavior of the Accuracy and EthCompl scores according to different values of β. The first line for each dataset in the table shows the performance of a Multi-Layer Perceptron (MLP) whose loss does not depend on any ethical dimension of the problem. This is compared with the proposed ethical networks obtained with different settings of the β parameter, whose role is to increase the impact of the ethical constraints. The overall experimental outcome strongly confirms the ability of the network to learn ethical constraints. In fact, in every targeted dataset, the ethical compliance measure EthCompl grows as β (which emphasizes the impact of the ethical component of the network on the loss) increases.
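A minimal sketch of how the two evaluation quantities used here, EthCompl and the disparate mistreatment differences DFPR and DFNR, can be computed from model predictions. Labels are assumed to be in {−1, +1} and the sensitive attribute z binary; all names are illustrative and not part of the original implementation.

```python
import numpy as np

def ethical_compliance(compliant, non_compliant):
    """EthCompl = D+ / (D+ + D-), the fraction of ethically compliant decisions."""
    return compliant / (compliant + non_compliant)

def disparate_mistreatment(y_true, y_pred, z):
    """DFPR and DFNR: differences of false positive / false negative rates
    between the two groups identified by the binary sensitive attribute z."""
    y_true, y_pred, z = map(np.asarray, (y_true, y_pred, z))

    def misclass_rate(group, label):
        mask = (z == group) & (y_true == label)
        return float(np.mean(y_pred[mask] != y_true[mask])) if mask.any() else 0.0

    dfpr = misclass_rate(0, -1) - misclass_rate(1, -1)  # among true negatives (y = -1)
    dfnr = misclass_rate(0, +1) - misclass_rate(1, +1)  # among true positives (y = +1)
    return dfpr, dfnr
```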
At the same time, disparate mistreatment also seems to be reduced: this is shown by the last pairs of columns in Table 1 (namely Disp. mistr.) and by the false positive rates on protected groups, which are comparable to the corresponding rates on non-protected groups (e.g., African Americans vs. other races in COMPAS). This is exactly the effect on unfair decisions that we expect. The fact that this effect is systematic across all the analyzed datasets is strong evidence that the proposed method is an effective and reliable in-process approach to fairness. These datasets in fact represent quite different tasks and domains, characterized by different sensitive features as well as by different data distributions.

Table 1. Results by varying the parameter β on the COMPAS, German Credit, Adult, Default and Law school datasets (per-dataset columns report β and Disp. mistr., i.e., DFPR and DFNR). Values express the average over 5 different runs.

As already noticed, the proposed Ethics by Design approach inevitably faces some drop in (business) accuracy, in order to adjust for unfair training data (i.e., gold decisions to be neglected for the sake of fairness). However, such a small loss in accuracy corresponds to more balanced (i.e., ethical) decisions: for example, an increase in ethical compliance from 0.682 to 0.814 on the COMPAS dataset corresponds to a small accuracy loss, from 0.681 to 0.640. Tweaking the ethical sensitivity of the method is thus effective: it allows identifying the optimal balance, as an operationally cost-effective compromise, between the business and the ethical performance of the system. The injection of ethical rules within neural learning seems to be effective in balancing biases that arise within datasets. Biased human judgments are the main cause of errors, as statistical surveys suggest. The ethical rules we have defined reduce this distortion, leading to more ethically effective outcomes. Although not conclusive, this approach results in an improvement. The suggested framework allows managing incoming data from an ethical perspective. When operational decisions are monitored across time, further adjustments through training are possible and incremental ethical optimization is enabled.

In this work, we experimented with the Ethics by Design framework discussed in [21] against quite different biased datasets, such as COMPAS. The tests confirm the method's ability to strongly foster fairness, in order to ensure responsibility and accountability of AI systems' behavior. For example, in COMPAS the result is much better decisions over African Americans, without costs, i.e., with basically no change for any other social group. This result is systematically achieved across the different datasets adopted, at the expense of a more than acceptable loss of (business) performance, which in our view is a very significant result. This confirms the wide applicability of the Ethics-by-Design framework [21]. As a future extension, the automatic identification of sensitive features and of the strategies the model can adopt to propose truth-makers against the corresponding "unfair" decisions is under investigation.
The possibility of cross-validating the role of different features through quantitative assessment (i.e., the fairness measures proposed) makes it possible to assume an autonomous behavior for auditing the system in search of ethical balancing between social groups. L. Squadrone et al. References 1. Ali, M., Sapiezynski, P., Bogen, M., Korolova, A., Mislove, A., Rieke, A.: Discrimination through optimization: how Facebook’s ad delivery can lead to biased outcomes. Proc. ACM Hum.-Comput. Interact. 3(CSCW), 1–30 (2019) 2. Angwin, J., et al.: Machine bias (2016) 3. Bechavod, Y., Ligett, K.: Penalizing unfairness in binary classification. arXiv preprint arXiv:1707.00044 (2017) 4. Berk, R., et al.: A convex framework for fair regression. arXiv preprint arXiv:1706.02409 (2017) 5. Bonnemains, V., Saurel, C., Tessier, C.: Embedded ethics: some technical and ethical challenges. Ethics Inf. Technol. 20(1), 41–58 (2018). https://doi.org/10. 1007/s10676-018-9444-x 6. Buolamwini, J., Gebru, T.: Gender shades: intersectional accuracy disparities in commercial gender classification. In: Conference on Fairness, Accountability and Transparency, pp. 77–91. PMLR (2018) 7. Calders, T., Verwer, S.: Three Naive Bayes approaches for discrimination-free classification. Data Min. Knowl. Disc. 21(2), 277–292 (2010) 8. Caton, S., Haas, C.: Fairness in machine learning: a survey. arXiv preprint arXiv:2010.04053 (2020) 9. Donini, M., Oneto, L., Ben-David, S., Shawe-Taylor, J., Pontil, M.: Empirical risk minimization under fairness constraints. arXiv preprint arXiv:1802.08626 (2018) 10. Feldman, M., Friedler, S.A., Moeller, J., Scheidegger, C., Venkatasubramanian, S.: Certifying and removing disparate impact. In: Proceedings of the KDD 2015, pp. 259–268 (2015) 11. Hardt, M., Price, E., Srebro, N.: Equality of opportunity in supervised learning. In: Advances in Neural Information Processing Systems, vol. 29, pp. 3315–3323 (2016) 12. Ingold, D., Soper, S.: Amazon doesn’t consider the race of its customers. Should it? (2016) 13. Kamiran, F., Calders, T.: Classifying without discriminating. In: 2009 2nd International Conference on Computer, Control and Communication, pp. 1–6. IEEE (2009) 14. Kamishima, T., Akaho, S., Asoh, H., Sakuma, J.: Fairness-aware classifier with prejudice remover regularizer. In: Flach, P.A., De Bie, T., Cristianini, N. (eds.) ECML PKDD 2012. LNCS (LNAI), vol. 7524, pp. 35–50. Springer, Heidelberg (2012). https: //doi.org/10.1007/978-3-642-33486-3 3 15. Kleiman-Weiner, M., Saxe, R., Tenenbaum, J.B.: Learning a commonsense moral theory. Cognition 167, 107–123 (2017). Moral Learning 16. Lambrecht, A., Tucker, C.: Algorithmic bias? An empirical study of apparent gender-based discrimination in the display of stem career ads. Manag. Sci. 65(7), 2966–2981 (2019) 17. Lungren, M.: Risks of AI race detection in the medical system (2021) 18. Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., Galstyan, A.: A survey on bias and fairness in machine learning. ACM Comput. Surv. (CSUR) 54(6), 1–35 (2021) 19. Pessach, D., Shmueli, E.: Algorithmic fairness. arXiv preprint arXiv:2001.09784 (2020) 20. Quy, T.L., Roy, A., Iosifidis, V., Ntoutsi, E.: A survey on datasets for fairness-aware machine learning. arXiv preprint arXiv:2110.00530 (2021) Ethics by Design for Intelligent and Sustainable Adaptive Systems 21. Rossini, D., Croce, D., Mancini, S., Pellegrino, M., Basili, R.: Actionable ethics through neural learning. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 5537–5544 (2020) 22. 
Savani, Y., White, C., Govindarajulu, N.S.: Intra-processing methods for debiasing neural networks. arXiv preprint arXiv:2006.08564, vol. 33, pp. 2798–2810 (2020) 23. Snow, J.: Amazon’s face recognition falsely matched 28 members of congress with mugshots (2018) 24. Vanderelst, D., Winfield, A.: An architecture for ethical robots inspired by the simulation theory of cognition. Cogn. Syst. Res. 48, 56–66 (2018). Cognitive Architectures for Artificial Minds 25. Zafar, M.B., Valera, I., Gomez Rodriguez, M., Gummadi, K.P.: Fairness beyond disparate treatment & disparate impact: learning classification without disparate mistreatment. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1171–1180 (2017) Automated Planning and Scheduling Verification of Numeric Planning Problems Through Domain Dynamic Consistency Enrico Scala1 , Thomas L. McCluskey2 , and Mauro Vallati2(B) 1 Universit` a degli Studi di Brescia, Brescia, Italy 2 University of Huddersfield, Huddersfield, UK [emailprotected] Abstract. Verification of the development of complex problem models is an open problem in real-world applications of automated planning. To facilitate the verification task, this paper introduces the notion of Domain Dynamic Consistency for planning problems expressed in PDDL. This notion is aimed at signalling suspicious inputs arising at the intersection between the abstract description of the model and its concrete instantiation. Together with the notion we present an approximation based approach that is devoted to automatically solve the problem of deciding when a PDDL numeric planning problem is not Domain Dynamic Consistent. The paper terminates with an example of application of this notion and its related technique within a Urban Traffic Control scenario. Keywords: Automated planning · Numeric planning · Verification AI Planning is an important research area of Artificial Intelligence that deals with the problem of finding a sequence of actions whose application in an initial state of the environment leads to a desired goal state [12]. Automated planning is exploited in many real-world applications as it is a common capability requirement for intelligent autonomous agents [18]. Example application domains include drilling [11], smart grid [28], machine tool calibration [20], and mining [16]. Modelling AI planning problems is a challenging and error-prone tasks, as even small mistakes can compromise the validity of a representation. In realworld planning applications, where knowledge is acquired from different sources, the verification of the problem model is crucial. This may be caused both by some erroneous input done by the user, or by some automatic tool that does not work properly. For instance, one can simply forget to mention the initial value of a variable and this may indirectly cause some other variable to be not changeable anymore. Syntactic errors are easily recognised, whilst more profound interactions among the variables are difficult to intercept. c The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 A. Dovier et al. (Eds.): AIxIA 2022, LNAI 13796, pp. 171–183, 2023. https://doi.org/10.1007/978-3-031-27181-6_12 E. Scala et al. Verification of a problem model means demonstrating that it is a correct implementation of the abstract or conceptual model. One important aspect of this is checking that the implementation does not introduce errors or behaviours inconsistent with the conceptual model. 
To help address this problem, we propose the notion of Domain Dynamic Consistency (DDC) of a planning problem, and illustrate its use in problems expressed in the Planning Domain Definition Language (PDDL), the standard de-facto language used by the AI Planning community. Intuitively, we say that a planning problem is DDC if each variable that is present in the initial state is fluent in the same way in which it is fluent in the model of domain dynamics. Consider the problem involving a robot that can move in a metric uni-dimensional space. Assume that variable x is used to model its position, and that the movement of such a robot is modelled through a single move-right PDDL action, whose precondition requires the fuel to be at least of one unit. Further assume that the effects simply state that the position of the robot is increased by 1 unit anytime the action is applied. Now consider a state where the position of the robot is such that x = 1, and the fuel is equal to 1. The initial state, and therefore the planning problem, is DDC in that the only fluent variable that we are modelling can indeed be increased by 1 unit. Let us consider another situation. This time, assume a state has variable x set to 1 (as before) but the fuel is instead equal to 0. According to our definition, this state is not DDC in that the variable can never be increased. Although this does not represent an issue from a semantics perspective in that it is perfectly possible given the domain and the problem instance, this is somewhat a suspicious situation for an initial state; why would a state like this one make any sense at all if we cannot even model the movement of the robot? Why did we bother modelling its position and its modification, if this position cannot actually be changed? Though the illustration above is simple, in reality, when initial states are complex and/or auto-generated, this property helps to uncover errors in the verification and validation process. Other works have looked into the problem of verification and validation of planning problems, e.g., [3,8,22,27]. Yet, to the best of our knowledge, none has investigated the problem through the lens of planning with numeric information [9] and without the need to express some additional explicit knowledge (e.g., through Linear Temporal Logic [6,21]). In this study we formally characterise the notion of DDC focusing on PDDL 2.1, the extension of PDDL that lets the user explicitly express numeric conditions and numeric changes. First, we discuss the general difficulty of the apparently simple problem of checking problems for DDC, observing that in general terms deciding when an initial state is DDC is as hard as solving a planning problem. To overcome this barrier we present an approximation schema and show how this can be used to verify whether a planning problem is not DDC. Finally, we show an example of the use of the DDC in strategy generation for Urban Traffic Control. The remainder of this paper is organised as follows. Section 2 provides the necessary background. Then, the Domain Dynamic Consistency notion is intro- Domain Dynamic Consistency for Numeric Planning Problems duced, and Sect. 4 presents an approach to test the DDC property. The usefulness of the notion is then assessed using a case study, that is presented in Sect. 5. Finally, conclusions are given. This section provides the necessary background on numeric planning, the corresponding definition of a numeric planning problem, and on the additive intervalbased relaxation. 
2.1 Numeric Planning We consider planning problems [12] as those that can be expressed in PDDL2.1 level 2 [9]. These problems are called numeric planning problems [25], but we will in the rest simply refer to planning problems. Without loss of generality, we restrict our attention to the case with untyped objects, and with only numeric fluents1 (see below). A full treatment of the syntax and semantics of the language is beyond the scope of this work; the full details can be found in [9]. Next, we provide only those aspects necessary to understand our proposal. A planning problem consists of two elements: a domain model and a problem model. Following the PDDL terminology, the domain model contains the definition of the predicates, the numeric fluents, and a set of actions. In particular, numeric fluents indicate properties of lists of objects; mathematically, they define mappings between lists of objects to numeric values. The domain model defines them in an abstract way: it specifies a name, a string label for each such mapping, and a list of variables. Variables specify the order and the number of objects to be mapped. An action a is defined by means of a name (which we will often omit in the interest of space), a list of variables (called the parameters of the actions), a precondition formula (i.e., pre(a)) and a set of effects (i.e., eff(a)). The precondition formula is a first-order logic proposition having equalities or inequalities involving numeric fluents as terms (e.g., (> (battery ?r1) 4) ∧ (> (battery ?r2) 5)). Each formula can make use of the standard logical connectives from propositional logic, i.e., ∧, ∨, ¬, together with arbitrary nesting of universal (∀) and existential (∃) quantifier over the objects of the problem. Effects are triplets of the form {inc, dec, ass}, x, ξ, where the first term is the modifier, and can either be an increase (i.e., inc), a decrease (i.e., dec) or an assignment (i.e., ass), the second term is a numeric fluent, and the third term is a numeric expression that together with the modifier determines the state of the numeric fluent if the action is applied. Each numeric fluent in the action structure can have its parameters expressed as concrete objects (i.e., actual objects of the problem to be solved) or variables. When all parameters are concrete objects, a numeric fluent is said to be ground. 1 A Boolean fluent can be mapped into a {0, 1} numeric fluent. E. Scala et al. Similarly, an action with all parameters and free variables substituted with concrete objects is said to be ground. This also requires to eliminate all quantifiers in the preconditions using standard quantifier elimination techniques. In this work we focus on actions whose effects can increase, decrease or assign the value to a numeric fluent by means of a constant (e.g., (increase (battery ?r1) 5.4)). A domain model is a tuple X , A where X is the numeric fluents set as above, and A the set of actions. Let O be a set of objects and x a numeric fluent from X . The grounding of x is the set of numeric fluents each having the same name of x but the list of variables replaced with concrete objects from some subset of O. The set of ground numeric fluents given O is denoted by X [O]. Finally, we use abs(x) to denote the abstraction of an object x into a variable, i.e., the ungrounded version of the numeric fluent. A state s gives a value to each numeric fluent in X [O]. 
The domain of each numeric fluent is the set of rational number plus the special term ⊥; ⊥ is used to state that a given numeric fluent is undefined. Let x ∈ X and s be a state, we denote with [x]s the value of numeric fluent x in state s. Then, we use succ(s) for the set of states reachable by s through actions from A. For more information on what a ground action is, and how actions can be grounded automatically, look at [14] and [26]. A ground action is applicable in state s iff its precondition is satisfied in s. A precondition is satisfied iff, by assigning all numeric fluents their values as for state s, the evaluation of the formula returns true. The application of a ground action in a state s generates a new state s = s[a] such that all numeric fluents conform with the effects of the action. For instance, if an action features a numeric effect inc, x, 1 and the state is such that x = 1, then the successor state will be such that x = 2. A problem model is given by a set of objects, a state, called the initial state, and a goal. The goal is structured as the precondition of an action, with the difference that any component which is not quantified only involves ground numeric fluents. A problem model is formally expressed as a tuple O, I, G. The combination of a domain and a problem instance is a planning problem P = D, P . A plan for a planning problem is a sequence of actions τ such that τ can be iteratively applied starting from the initial state I, and the last produced state is one where the goal G is satisfied. 2.2 Problem Relaxation and Heuristics A popular technique to finding plans for planning problems is that of performing a search over the state space induced by the problem. In order to make such a search effective, planners usually employ heuristics devised directly from the description of the problem, and a very solid approach to make that happen is to extract such a heuristic from a proper relaxation of the problem itself [5]. State space planners use these two facilities during search by avoiding the exploration of dead-ends states, and by steering the search only towards the most promising paths. Heuristics that well approximate the cost to reach the goal can lead the Domain Dynamic Consistency for Numeric Planning Problems search to only explore a linear number of states on the length of the optimal path. The Additive Interval-Based Relaxation (AIBR) of a numeric planning problem is a relaxation specifically designed to support problems involving complex numeric expressions. As many other relaxations (e.g., [5,7,25]), the AIBR serves two purposes in state-space planners: the former is to prune states and the latter is providing the basis for computing heuristic estimates [2,15,24]. Pruning is given by the ability of the AIBR to correctly identify when a state does not allow the planner to reach the goal. Heuristic estimates can be computed by finding concrete relaxed plan, that is, plans that solve the problem at a relaxation level. As hinted at above, the additive interval-based relaxation belongs to the family of frameworks that tries to exploit as much as possible the structure of the problem expressed in some language, in our case PDDL. This means that the user can take advantage of induced heuristics without the need of providing them manually. The relaxation at the basis of the AIBR grounds on a reinterpretation of the semantics of the numeric planning problem. Such a reinterpretation guarantees to over-approximate whether some goal or subgoal is reachable. 
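Before turning to how the relaxation reinterprets them, the concrete (non-relaxed) semantics of Sect. 2.1 can be summarised in a few lines of code. The representation used below (dictionaries for states, comparison tuples for preconditions, ⟨modifier, fluent, constant⟩ triplets for effects) is an illustrative assumption, not the actual PDDL machinery.

```python
# A ground state maps each ground numeric fluent to a rational value.
state = {"(battery r1)": 5.0, "(x r1)": 1.0}

# A ground action: preconditions as (fluent, operator, constant) comparisons,
# effects as (modifier, fluent, constant) triplets with modifier in {inc, dec, ass}.
move_right = {
    "pre": [("(battery r1)", ">=", 1.0)],
    "eff": [("inc", "(x r1)", 1.0), ("dec", "(battery r1)", 1.0)],
}

OPS = {">=": lambda a, b: a >= b, "<=": lambda a, b: a <= b,
       ">": lambda a, b: a > b, "<": lambda a, b: a < b, "=": lambda a, b: a == b}

def applicable(s, action):
    """An action is applicable iff all its (numeric) preconditions hold in s."""
    return all(OPS[op](s[x], k) for x, op, k in action["pre"])

def apply_action(s, action):
    """Successor state: copy s and apply every effect triplet."""
    s2 = dict(s)
    for mod, x, k in action["eff"]:
        if mod == "inc":
            s2[x] += k
        elif mod == "dec":
            s2[x] -= k
        else:  # "ass"
            s2[x] = k
    return s2

if applicable(state, move_right):
    print(apply_action(state, move_right))  # {'(battery r1)': 4.0, '(x r1)': 2.0}
```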
Indeed AIBR is able to correctly identify unsolvable problems with an algorithm that is polynomial on the size of the problem. It does so with the following expedients. First, under AIBR, a planning state is not a single valuation for each numeric fluent. Rather, each numeric fluent x is mapped into an interval (x− , x+ ) defining the minimum (i.e., x− ) and the maximum (i.e., x+ ) value for x; this way, a AIBR planning relaxed state approximates a concrete state with a number of intervals. Each such interval approximates all values that can ever be attained by a single numeric fluent. Second, the AIBR changes the way satisfiability of a formula is evaluated. Instead of operating using standard arithmetic operations, it uses interval analysis [19]. That is, let s be some state, an inequality in some formula is evaluated using interval enclosures of the possible evaluation of the numeric fluents it encompasses. Then a generic propositional formula is evaluated by combining the evaluated terms recursively navigating a tree-shaped formula up to the root. Finally, whenever an action is applied in the AIBR, the result is given by the convex union of the interval for each variable associated with the state in which the action is applied, and the interval associated to the state obtained by applying the effects of the action. This way, the successor state monotonically accepts the values of the state in which the action is applied, and the new values that can be obtained by the execution of the action. Because of this, all formulas that are satisfied before the execution of the action are also satisfied after its application. To make this process run for a finite number of times, the AIBR makes use of the notion of asymptotic supporters. Intuitively, each asymptotic supporter makes the effect of an action idempotent, therefore limiting the number of iterations needed to estimate the relaxed reachability of a condition. The AIBR is not the only heuristic seen in the literature. For instance, [25] defines subgoaling-based relaxations that work with a different principle. Albeit E. Scala et al. such relaxations can provide more guidance, they are focused more on improving on the performances of state-space planners. The AIBR on the other hand aims at handling general numeric planning problems, which is what we target in this paper. Domain Dynamic Consistency Modelling planning problems using abstract, parametrized actions (also known as lifted actions) is very convenient. Indeed, one may encode compactly several actual transitions by just declaring the types of the variables the actions depend on. However, the plans that are going to be executed are composed by ground actions only. That is, actions where all variables are substituted with concrete objects from some particular problem model. While the modelling of abstract actions make things much more elegant, it may introduce some false expectations too. We argue that when one model an action at an abstract level, it is very likely that if some set of objects compatible with that action have most but not all object relevant conditions in the action preconditions satisfiable, some modelling bug may have occurred at the level of the problem formulation. And this may be related to the fact that one condition that we were expecting to be satisfiable at some point, it is actually not satisfiable because it does not follow the dynamics that we were expecting at an abstract level. 
To capture situations as this one, we formalise the notion of Dynamic Domain Consistency. Roughly speaking we say that a problem is dynamic domain consistent if and only if, whenever we have some object fluent that is expected to be dynamic at an abstract level, this object is dynamic at a ground level too. Though, we focus our attention on numeric fluents only, as we expect these can be the main source of domain inconsistencies. In what follows we formalise the notion of Domain Dynamic Consistency (DDC). DDC is a property that is desired by some particular state. Such a notion makes sense when the state is evaluated in a planning problem context. Definition 1 (Domain Dynamic Consistency). Let P = D, P be a planning problem such that D = X , A and P = O, I, G. We say that P is Domain Dynamic Consistent (DDC) iff ∀x ∈ X [O] it holds that – if ∃inc, y, k ∈ eff(a) for some a ∈ A with k > 0 s.t. y = x or y = abs(x) then ∃s ∈ succ(I) s.t. [x]I < [x]s – if ∃dec, y, k ∈ eff(a) for some a ∈ A with k > 0 s.t. y = x or y = abs(x) then ∃s ∈ succ(I) s.t. [x]I > [x]s – if ∃ass, y, k ∈ eff(a) for some a ∈ A with k = [x]I s.t. y = x or y = abs(x) then ∃s ∈ succ(I) s.t. [x]s = k Intuitively, the notion establishes that a planning problem is DDC if each numeric fluent mentioned in the initial state is dynamic, i.e. if actions in the domain model enable the numeric fluent to dynamically change, at an abstract level. We are interested in determining if that is the case. To understand Domain Dynamic Consistency for Numeric Planning Problems this property is generally true for well formed and operational planning problems, we considered a range of well known numeric benchmark instances [23]. The set includes the following domains: Counters, Plant-watering, Block-grouping, Sailing, and Farmland. We manually checked all the instances of the benchmarks, and observed that all of them are DDC. In all the considered instances, all the numeric fluents that can be modified via actions are indeed initially set to be modifiable. This empirical evidence gives a solid ground to support our intuition, and suggests that it can provide a meaningful way to verify initial states. Of course, the considered instances are very easy to be checked, given their simple structure. Yet, and that is also where the DDC notion can be helpful, real-world planning applications can lead to problem models that are complex and large. An example will be given in Sect. 5. It can be proven that, in general, checking the DDC is indeed much more involved. Proposition 1. Deciding whether a planning problem is DDC is undecidable. Proof (Sketch). Observe that deciding whether a planning problem is DDC is as hard as finding a solution plan for it. Indeed, we can emulate a planning problem by encoding the goal into the precondition of a dummy action having a single numeric effect. Then we make sure that this action is necessary to solve the problem. To do so we introduce a fresh numeric fluent initially set to a random number, say 0, and model a numeric effect for this action to set the fresh numeric variable to 1. Checking whether this problem is DDC necessitates making sure that the precondition of this action is achievable. Therefore, this is possible iff the original problem admits a solution. As numeric planning is undecidable [13], so is the problem of verifying whether a planning problem is DDC. 
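For illustration only, the sketch below checks the three conditions of Definition 1 directly against an explicitly enumerated set of reachable states, using the ground representation assumed in the earlier sketch. By Proposition 1 such a direct check cannot be a general procedure (the set succ(I) cannot be enumerated in general), which is exactly why the next section resorts to the interval-based relaxation.

```python
def is_ddc(initial_state, reachable_states, actions):
    """Check Definition 1 on a finite, externally supplied enumeration of succ(I):
    every fluent that some (ground) action can increase, decrease or assign must
    actually change accordingly in at least one reachable state."""
    for x, v0 in initial_state.items():
        can_inc = any(m == "inc" and f == x and k > 0
                      for a in actions for m, f, k in a["eff"])
        can_dec = any(m == "dec" and f == x and k > 0
                      for a in actions for m, f, k in a["eff"])
        assigned = {k for a in actions for m, f, k in a["eff"]
                    if m == "ass" and f == x and k != v0}
        if can_inc and not any(s[x] > v0 for s in reachable_states):
            return False
        if can_dec and not any(s[x] < v0 for s in reachable_states):
            return False
        for k in assigned:
            if not any(s[x] == k for s in reachable_states):
                return False
    return True
```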
Approximating Domain Dynamic Consistency

To overcome the complexity of determining if a planning problem P is DDC, we approximate the DDC check through the additive interval-based relaxation [1,24]. We make use of the AIBR for a different purpose than that employed in state-space planners (e.g., [15,24]): our objective is not to provide a heuristic estimate or to do pruning. Instead, we aim at evaluating the DDC of a problem. As a very first step, we run the AIBR up to its fix point – note that such a fix point does exist and can be computed efficiently because of the use of asymptotic supporters. This gives us an interval for each associated variable. Then we use such intervals to predict whether the conditions of Definition 1 are satisfied. More precisely, for each variable for which we know that there exists an action that can change its value abstractly, we check whether this may also happen at the ground level. We do so for each of the conditions that we want to evaluate. Algorithm 1 reports the AIBR reachability algorithm [24], slightly modified to return the last relaxed state obtained after fix-point computation. Algorithm 2 describes how to use Algorithm 1 to approximate the DDC of a problem w.r.t. a domain.

Algorithm 1: AIBR (slightly revisited from Scala et al. 2016)
  Input: P++
  Output: The set of intervals at the asymptotic fix-point
  Ω = supporters of A
  s+ = s+_0
  S = {a ∈ Ω : s+ |= pre(a)}
  while S ≠ ∅ do
    s+ = succ+(s+, S)
    Ω = Ω \ S
    S = {a ∈ Ω : s+ |= pre(a)}
  return s+

Algorithm 2: DDC Approximation
  Input: P = ⟨D, P⟩
  Output: Is P Domain Dynamic Consistent?
  Pg = grounding(P)
  s+ = AIBR(Pg++)
  foreach x ∈ XP do
    foreach a ∈ AD such that ∃⟨inc, x', k⟩ ∈ eff(a) . x' = abs(x) ∧ k > 0 do
      if up([x]s+) = [x]PI then return False
    foreach a ∈ AD such that ∃⟨dec, x', k⟩ ∈ eff(a) . x' = abs(x) ∧ k > 0 do
      if lo([x]s+) = [x]PI then return False
    foreach a ∈ AD such that ∃⟨ass, x', k⟩ ∈ eff(a) . x' = abs(x) ∧ k ≠ [x]PI do
      if k ∉ [x]s+ then return False
  return True

For any fluent x, [x]s+ is used to denote the interval of values for x in s+; lo([x]s+) and up([x]s+) denote its minimum and maximum value, respectively. Algorithm 2 works as follows. First, it grounds the planning problem, obtaining Pg; AIBR indeed is defined for fully grounded problems only. Then it calls the AIBR specified by Algorithm 1, which returns the fix-point AIBR planning state. Then, we iterate over all the variables that are expressed in the initial state of P. This set is denoted by XP. For each action that abstractly modifies the variable under iteration, we distinguish the three possible effects of an action on the variable: an increase, a decrease and an assignment. If the action abstractly increases (decreases) the value of a numeric fluent x, then we check whether the interval for x at the fix point s+ has actually increased (decreased) the variable. This is done by inspecting the lower and the upper bound of the interval (functions lo and up in the code), and determining whether the fix-point value admits an increase, a decrease, or the foreseen assignment; for the assignment it suffices to check whether the interval at the fix point does not include the value k. For instance, if we have a variable x with an initial value of 0 and an effect ⟨inc, x', 5⟩, where x' is the abstracted version of x, a fix point [x]s+ = [−∞, 0] implies that x is never going to be increased, even if it was supposed to do so at an abstract level.
If at least one of these cases is not satisfied, the algorithm returns that the problem is not DDC. Otherwise it carries on and explores the next variable from Xp . Algorithm 2 correctly identifies whether a problem is not DDC and can thus be used to signal suspicious situations. Proposition 2. If Algorithm 2 returns False for a problem P, then P is not DDC. Proof (Sketch). Observe that the algorithm terminates with True only for those cases where the relaxation proves that one variable violates Definition 1. AIBR overestimates all values that can ever be obtained. If some value is not reached under AIBR, it is not reachable in real semantics either. The Case of Urban Traffic Control Urban traffic control (UTC) aims at optimising traffic flows in urban areas by reducing travel time delays and avoiding congestion of road links. One possibility, which is usually considered by traffic authorities, is configuring traffic lights on the intersections [17,29]. A traffic signal configuration of an intersection is defined by a sequence of green light phases, each with its specified duration, that, in consequence, affects the traffic movement through the intersection. Traffic movements are described in terms of Passenger Car Units (PCUs) that on average can move from incoming to outgoing links of the intersection. Traffic signal configurations operate in cycles, i.e. the sequences of green phases they define are being repeated (until the configuration changes). When specifying a configuration, we need to keep in mind any rules governing minimum and maximum green phase length. In addition, we also need to respect the constraints on minimum and maximum duration of entire cycles as well. Intergreens typically have specified durations which we are not allowed to change. This section shows an UTC instance where the notion of DDC can be used to capture when the PDDL encoding of the UTC is faulty because of some erroneous input in defining the problem. A UTC problem includes the definition of two actions modelling extension and reduction of the length of the default green time for a stage s in a junction j. The PDDL abstract model for such two actions is reported in Fig. 1. To change the default green time for a phase, several conditions have to be satisfied; focusing on numeric conditions, time needs to be less than the maximum green time or higher than the minimum green time. It is important, therefore, that both the minimum green time and the maximum green time E. Scala et al. Fig. 1. Snippet of PDDL UTC model. All blocks find a direct correspondence to the more mathematical formalisation provided in Sect. 2. properly set in order to give room to the planner to modify the value of the default green time if necessary. Figure 2 shows an excerpt of a problem specification. Notably, UTC problem specifications include knowledge pulled from a range of different data sources, that may therefore be inconsistent or noisy and need to be carefully verified [4]. Further, the models are large, composed by thousands of lines, making manual verification unfeasible. Run over the problem of Fig. 2, Algorithm 2 yields a fix-point interval state where (defaultgreentime wrac1_stage1) is any value between −∞ and ∞. Instead, in the considered excerpt, the value of (defaultgreentime wrac1_ stage2) will never change through time. Indeed, neither reduceStage nor extendStage can be applied. The default green time is not within the minimum and maximum green time. 
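A compact illustration of why the second stage is flagged: with the values used below, neither an extension nor a reduction of the default green time is applicable, so the fluent can never change even though the lifted actions suggest it should. The numbers are made up for illustration (an erroneous input where the minimum exceeds the maximum) and do not come from the original problem file.

```python
# Hypothetical green-time values (seconds) for two stages of a junction.
stages = {
    "wrac1_stage1": {"default": 30, "min": 10, "max": 60},
    "wrac1_stage2": {"default": 35, "min": 40, "max": 30},  # inconsistent bounds
}

for name, g in stages.items():
    can_extend = g["default"] < g["max"]  # numeric precondition of extendStage
    can_reduce = g["default"] > g["min"]  # numeric precondition of reduceStage
    if not (can_extend or can_reduce):
        print(f"{name}: defaultgreentime can never change -> suspicious (not DDC)")
```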
Although this is not a problem modelling wise, the notion of DDC detects this as a suspicious situation. The abstract version of default green time is non static due to the actions of Fig. 1. Yet, there is a concrete specialisation, (defaultgreentime wrac1_stage2) that is static, and this makes the problem to be non consistent w.r.t. the domain. Because such a problem is deemed as non Domain Dynamic Consistent, the user can be alarmed and fix the problem accordingly, i.e., modifying the minimum green time variable for wrac1_stage2 to a consistent value. Using a prototype implementation of the presented algorithm on real-world data, we were able to quickly identify a dozen of issues and inconsistencies on automatically generated UTC initial states, effectively addressing the issues rais- Domain Dynamic Consistency for Numeric Planning Problems Fig. 2. Snippet of a UTC problem, presenting some elements of a single junction with two stages. In PDDL syntax, the block ‘’:init‘’ is the initial state; ‘’:objects‘’ define the universe of objects. ing from pulling data from different sources. The use of DDC also allowed to identify unforeseen failure points of the knowledge acquisition process. For instance, we identified a case where one junction went offline and did not communicate its status (missing defaultgreentime value). The use of automated planning in real-world applications, particularly when instances are generating by including data pulled together from a range of sources, comes with the challenge of verifying that the resulting instances are consistent. In this paper, to address the above-mentioned challenge, we introduced the notion of Domain Dynamic Consistency (DDC) to identify instances that may not behave as expected. The notion of DDC can be used as a means to verify the knowledge acquisition process of a planning problem initial state, and the fact that pulled data provide a consistent overall figure. The DDC notion has been captured in PDDL, a well known formalism used by the planning community. This notion can be useful in contexts where one wants to have an automatic mechanism to inspect suspicious input. The idea being that DDC does not necessarily identify mistakes, but can flag aspects that are suspicious and deserve in-depth investigation. We then presented a sound technique to prove when a problem is not DDC, that leverages on existing E. Scala et al. numeric relaxation-based heuristics. Finally, we provided an example application where the use of DDC helped in catching a number of issues in large PDDL models. We see several avenues for future work. First, we are interested in extending the DDC notion to more complex planning formalisms, for instance PDDL+ [10]. Second, we plan to develop a suitable interface to allow non-planning experts to take advantage of this technique. Finally, we are interested in exploiting the DDC notion also to suggest potential issues of the domain models, to provide a tool that can also help in revising and improving the planning models used. Acknowledgements. Mauro Vallati was supported by a UKRI Future Leaders Fellowship [grant number MR/T041196/1]. Enrico Scala has been partially supported by AIPlan4EU, a project funded by EU Horizon 2020 research and innovation programme under GA n. 101016442, and by the Italian MUR programme PRIN 2020, Prot.20203FFYLK (RIPER – Resilient AI-Based Self-Programming and Strategic Reasoning). References 1. Aldinger, J., Mattm¨ uller, R., G¨ obelbecker, M.: Complexity of interval relaxed numeric planning. 
In: Hölldobler, S., Krötzsch, M., Peñaloza, R., Rudolph, S. (eds.) KI 2015. LNCS (LNAI), vol. 9324, pp. 19–31. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-24489-1_2
2. Aldinger, J., Nebel, B.: Interval based relaxation heuristics for numeric planning with action costs. In: Kern-Isberner, G., Fürnkranz, J., Thimm, M. (eds.) KI 2017. LNCS (LNAI), vol. 10505, pp. 15–28. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-67190-1_2
3. Bensalem, S., Havelund, K., Orlandini, A.: Verification and validation meet planning and scheduling. Int. J. Softw. Tools Technol. Transf. 16(1), 1–12 (2014)
4. Bhatnagar, S., Mund, S., Scala, E., McCabe, K., McCluskey, L., Vallati, M.: On the challenges of on-the-fly knowledge acquisition for automated planning applications. In: 14th International Conference on Agents and Artificial Intelligence (2022)
5. Bonet, B., Geffner, H.: Planning as heuristic search. Artif. Intell. 129(1–2), 5–33 (2001)
6. De Giacomo, G., Vardi, M.: Synthesis for LTL and LDL on finite traces. In: Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), pp. 1558–1564. AAAI Press (2015)
7. Edelkamp, S., Kissmann, P.: Partial symbolic pattern databases for optimal sequential planning. In: Dengel, A.R., Berns, K., Breuel, T.M., Bomarius, F., Roth-Berghofer, T.R. (eds.) KI 2008. LNCS (LNAI), vol. 5243, pp. 193–200. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-85845-4_24
8. Fourati, F., Bhiri, M.T., Robbana, R.: Verification and validation of PDDL descriptions using Event-B formal method. In: Proceedings of the 5th International Conference on Multimedia Computing and Systems (ICMCS), pp. 770–776 (2016)
9. Fox, M., Long, D.: PDDL2.1: an extension to PDDL for expressing temporal planning domains. J. Artif. Intell. Res. 20, 61–124 (2003)
10. Fox, M., Long, D.: Modelling mixed discrete-continuous domains for planning. CoRR abs/1110.2200 (2011)
11. Fox, M., Long, D., Tamboise, G., Isangulov, R.: Creating and executing a well construction/operation plan. US Patent App. 15/541,381 (2018)
12. Ghallab, M., Nau, D.S., Traverso, P.: Automated Planning and Acting. Cambridge University Press, Cambridge (2016)
13. Helmert, M.: Decidability and undecidability results for planning with numerical state variables. In: Proceedings of the Sixth International Conference on Artificial Intelligence Planning Systems (AIPS), pp. 44–53. AAAI (2002)
14. Helmert, M.: Concise finite-domain representations for PDDL planning tasks. Artif. Intell. 173(5–6), 503–535 (2009)
15. Hoffmann, J.: The Metric-FF planning system: translating "ignoring delete lists" to numeric state variables. J. Artif. Intell. Res. 20, 291–341 (2003)
16. Lipovetzky, N., Burt, C.N., Pearce, A.R., Stuckey, P.J.: Planning for mining operations with time and resource constraints. In: Proceedings of the International Conference on Automated Planning and Scheduling (2014)
17. McCluskey, T.L., Vallati, M., Franco, S.: Automated planning for urban traffic management. In: Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), pp. 5238–5240 (2017)
18. McCluskey, T.L., Vaquero, T.S., Vallati, M.: Engineering knowledge for automated planning: towards a notion of quality. In: Proceedings of the Knowledge Capture Conference, K-CAP, pp. 14:1–14:8 (2017)
19. Moore, R.E., Kearfott, R.B., Cloud, M.J.: Introduction to Interval Analysis. SIAM (2009)
20. Parkinson, S., Longstaff, A., Fletcher, S.: Automated planning to minimise uncertainty of machine tool calibration. Eng. Appl. Artif. Intell. 30, 63–72 (2014)
21. Pnueli, A.: The temporal semantics of concurrent programs. In: Proceedings of Semantics of Concurrent Computation, pp. 1–20 (1979)
22. Raimondi, F., Pecheur, C., Brat, G.: PDVer, a tool to verify PDDL planning domains. In: Proceedings of the Workshop on Verification and Validation of Planning and Scheduling Systems, ICAPS (2009)
23. Scala, E., Haslum, P., Thiébaux, S.: Heuristics for numeric planning via subgoaling. In: Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), pp. 3228–3234. IJCAI/AAAI Press (2016)
24. Scala, E., Haslum, P., Thiébaux, S., Ramírez, M.: Interval-based relaxation for general numeric planning. In: Proceedings of the 22nd European Conference on Artificial Intelligence (ECAI), pp. 655–663 (2016)
25. Scala, E., Haslum, P., Thiébaux, S., Ramírez, M.: Subgoaling techniques for satisficing and optimal numeric planning. J. Artif. Intell. Res. 68, 691–752 (2020)
26. Scala, E., Vallati, M.: Exploiting classical planning grounding in hybrid PDDL+ planning engines. In: Proceedings of the 32nd IEEE International Conference on Tools with Artificial Intelligence (ICTAI), pp. 85–92 (2020)
27. Shrinah, A., Eder, K.: Goal-constrained planning domain model verification of safety properties. In: STAIRS@ECAI (2020)
28. Thiébaux, S., Coffrin, C., Hijazi, H., Slaney, J.: Planning with MIP for supply restoration in power distribution systems. In: Proceedings of the International Joint Conference on Artificial Intelligence (2013)
29. Vallati, M., Magazzeni, D., Schutter, B.D., Chrpa, L., McCluskey, T.L.: Efficient macroscopic urban traffic models for reducing congestion: a PDDL+ planning approach. In: Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, pp. 3188–3194 (2016)

Comparing Multi-Agent Path Finding Algorithms in a Real Industrial Scenario
Enrico Saccon, Luigi Palopoli, and Marco Roveri
University of Trento, Trento, Italy

Abstract. There is an increasing trend towards automating warehouses and factories by leveraging teams of autonomous agents. The orchestration problem for a fleet of autonomous, cooperating robotic agents has been tackled in the literature as Multi-Agent Path Finding (MAPF), for which several algorithms have been proposed. However, these algorithms have only been applied to synthetic, randomly generated scenarios. The application in real scenarios demands scalability (being able to deal with realistic-size warehouses) and efficiency (being able to quickly adapt to changes in the problems, e.g., new orders or changes in their priorities). In this work we analyse the MAPF literature, select the most effective algorithms, implement them, and carry out an experimental analysis on a real, scalable warehouse of a large distribution company to evaluate their applicability in such scenarios. The results show that a) no algorithm prevails over the others; b) there are difficult (realistic) cases out of the reach of all the algorithms.

Robots are becoming a familiar presence in the daily life of people, helping them in different application domains: industry, warehouses, healthcare, search and rescue, and office automation. Among these, industry is the domain in which automated machines have had the most successful applications.
Indeed, the Industry 4.0 revolution has meant for many workers an increased level of interaction with the machines present in the factory [4], with a significant impact on productivity [29]. Robotics has proved to ease logistics and manufacturing problems, allowing for a better use of the industrial space [12]. Over the last decade, robots have been used with great profit in the healthcare sector. For example, they have been successfully used in precise surgical procedures to help surgeons reach difficult anatomical compartments and perform operations that would otherwise be impossible [5]. Also, robotics has been applied to help elderly and impaired people move more freely, besides being used to assist during rehabilitation [10]. Robots have also been successfully utilized in search and rescue missions in challenging environments [1]. Finally, robots can be used to help in the day-to-day life of an office, speeding up routine affairs and simplifying the general workday [27]. (The work of M. Roveri was partially funded by the Italian MUR programme PRIN 2020, Prot. 20203FFYLK – RIPER, Resilient AI-Based Self-Programming and Strategic Reasoning.)

The majority of the above applications involve multiple robots that need to cooperate while moving in a shared environment (e.g. a warehouse) without interfering with each other, in order to complete one or multiple tasks in the most efficient way possible, and requiring a prompt response to contingencies (e.g., the arrival of a new task). This can be achieved by the automatic synthesis of a plan (i.e., a sequence of movements for each agent) to fulfill the full set of tasks. For the automatic synthesis to be applied in real industrial scenarios, it is required that i) the solution plan can be generated for real-size industrial scenarios; ii) the solution plan is generated quickly (e.g., in at most one minute) to adapt to contingencies (e.g., a new order, a change of priority, an order cancellation).

The problem of finding a plan for coordinating a fleet of autonomous, cooperating robotic agents aiming to complete assigned tasks has been tackled in the literature as Multi-Agent Path Finding (MAPF). Several algorithms have been proposed to solve the MAPF problem, e.g., Kornhauser's algorithm [13], Extended A* [11], the Increasing Cost Tree Search (ICTS) [22], several variants of Conflict-Based Search (CBS) [21], and the Constraint Programming (CP) and Mixed Integer Linear Programming (MILP) approaches. However, to the best of our knowledge, these algorithms have only been applied to synthetic, randomly generated graphs, and their application in real scenarios has not been studied.

In this work we make the following contributions. First, we perform a detailed analysis of the MAPF literature, from which we select the most effective algorithms, and we implement them as efficiently as possible. Second, we carry out an experimental analysis on a real warehouse of a large distribution company. To evaluate the performance and applicability of the considered algorithms we decompose the whole warehouse into sub-areas of increasing size (from a smaller area to the whole warehouse).
For each scenario we considered different numbers of agents and several randomly generated tasks for each agent. The results show that the CP approach is able to solve very small cases but does not scale to large, real-size scenarios, although it generates optimal solutions and is also able to solve small, critically hard problems. The algorithms that perform best are the two variants of CBS, although neither of them is able to solve many cases. This work also contributed to identifying some situations for which none of the considered algorithms is able to find a solution in a set amount of time. These results pave the way for investigating new heuristics to solve the hard problems that appear in real scenarios.

This paper is organized as follows. In Sect. 2, we review the literature on MAPF. In Sect. 3, we formally define the problem we aim to solve, and we provide a high-level description of the most relevant approaches studied in the literature. In Sect. 4, we describe the most relevant implementation details and the considered warehouse, and we critically discuss the results. Finally, in Sect. 5, we draw the conclusions and outline future work.

Related Works

In this work, we focus on the aspect of motion planning, considering the equally important problem of mission planning as completed before the motion planning task starts. While the former focuses on the best path to follow starting from a position, executing the intermediate objectives and reaching the final destination [14], the latter focuses on the best way of organizing the goals for each robot in the environment [6]. Mission planning is not considered here because warehouses usually use specialized software to handle their internal structures, and such software is typically responsible for generating an ordered set of goals. The aspect of motion planning is particularly important in a populated environment because it needs to guarantee people's safety.

2.1 Single-Agent Path Finding

The Single-Agent Path Finding (SAPF) problem is the problem of finding the best path on a graph between two given nodes or vertexes. Such a problem is of great importance in various scenarios. Indeed, one of the main algorithms used to solve the SAPF problem, A*, has been successfully applied to GPS localization in order to improve the waypoint accuracy for remote-controlled agents [15]. Nevertheless, the field in which single-agent path finding has found the most importance is the field of robot routing and planning, as the problem name also suggests. SAPF algorithms have been successfully implemented in robot routing, where they have been used to search a graph constructed from environmental data in order to avoid obstacles and to explore possible routes [2]. This work focuses on the path planning problem, which can be defined as follows:

Definition 1 (Single-Agent Path Finding). Given an undirected graph G = (V, E), where V is the set of the vertexes (that correspond to possible locations for the agent) and E the set of edges joining two vertexes (representing the possible transitions between two locations), the Single-Agent Path Finding (SAPF) problem consists in finding the shortest feasible plan π between a starting vertex vS ∈ V and a final one vF ∈ V. A plan π is the sequence of N actions αi, i ∈ {1, . . . , N}, that take the agent from the starting position vS ∈ V to the final position vF in N steps by following the graph edges: π = [α1, . . . , αN] : π(vS) = αN(. . . α2(α1(vS)) . . .) = vF,
where with αi(vs) we denote the movement to the vertex ve ∈ V from vs ∈ V, such that (vs, ve) ∈ E. We denote with π[h], h ≤ N, the h-th action of the plan π = [α1, . . . , αN], i.e. π[h] = αh. We also denote with |π| = N the length of the plan π = [α1, . . . , αN].

Due to its definition, the SAPF problem can be reduced to the problem of finding the shortest path on a graph. What follows is a brief description of the main algorithms that can be applied to single-agent path finding, which can be divided into deterministic algorithms (e.g. Dijkstra's) and heuristic ones (e.g. A*).

Dijkstra's Algorithm. Dijkstra's algorithm [9] aims to find the shortest path between two nodes on a graph whose edges have only positive values. Note that the graph needs to be strongly connected, i.e., there must be at least one path between any two nodes. While this seems quite a strong limitation, industrial scenarios usually provide such a graph: no node can be a sink, since it must be possible for an agent to come back from each location; that is, graphs modeled on warehouses are usually either undirected, and hence strongly connected, or directed but with no sink nodes. The work of Dijkstra published in 1959 [9] presents two possible algorithms, one to find the shortest path from one node to another and one to find a tree of minimum length starting from a node and reaching all the other nodes. We focus on the shortest path problem. The complexity of the algorithm depends on the number of vertexes and edges; different and improved versions of the algorithm have different worst-case performance, with typical priority-queue implementations running in time O((|V| + |E|) log |V|). Finally, the algorithm has been successfully used in robot path planning [7,16,28].

A* Algorithm. A* is a heuristic best-first search algorithm for finding the shortest path on a graph [25]. It is also an admissible algorithm, that is, it is guaranteed to find an optimal path from the starting node to the arrival one [11]. The idea of A* is to direct the search over the nodes towards the arrival node without necessarily having to examine all the vertexes. To do so, A* keeps a set of nodes to be visited, which is initialized with only the starting node and is then enlarged with the neighbors that the algorithm deems worth expanding. A node is said to be expanded when it is added to the set to be analyzed later on. The choice of which nodes should be expanded and which should not is given by the heuristic function. Indeed, when examining the neighbors u ∈ neigh(n) of the considered node, A* uses a heuristic h(u) to estimate the distance to the arrival vertex. Let h*(u) be the perfect heuristic, that is, a function that returns the correct distance from the node u to the arrival vertex; if h*(u) were known for all the nodes, the best path would be obtained simply by always moving to the neighbor with the lowest heuristic distance. It has been proved that if h(n) ≤ h*(n) for all nodes n, then the heuristic is admissible and A* is optimal [11].
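Since the section stays at the level of prose, the following is a minimal, hedged Python sketch of the A* procedure just described; the 4-connected grid, the Manhattan heuristic and all identifiers are illustrative assumptions, not part of the original work.

```python
import heapq

def astar(neighbors, h, start, goal):
    """Minimal A* over a graph given by a neighbors(v) function and an admissible
    heuristic h(v) estimating the distance from v to goal. Returns a shortest path
    as a list of vertexes, or None if goal is unreachable. Unit edge costs assumed."""
    open_heap = [(h(start), 0, start, None)]   # (f = g + h, g, vertex, parent)
    parents, g_cost = {}, {start: 0}
    while open_heap:
        f, g, v, parent = heapq.heappop(open_heap)
        if v in parents:                       # already expanded with a better (or equal) g
            continue
        parents[v] = parent
        if v == goal:                          # reconstruct the path by walking parents back
            path = [v]
            while parents[path[-1]] is not None:
                path.append(parents[path[-1]])
            return list(reversed(path))
        for u in neighbors(v):
            if u not in parents and g + 1 < g_cost.get(u, float("inf")):
                g_cost[u] = g + 1
                heapq.heappush(open_heap, (g + 1 + h(u), g + 1, u, v))
    return None

# Illustrative 4-connected grid: vertexes are (x, y) cells, Manhattan distance as heuristic.
W, H = 5, 4
def neighbors(c):
    x, y = c
    cand = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(a, b) for a, b in cand if 0 <= a < W and 0 <= b < H]

goal = (4, 3)
h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])
print(astar(neighbors, h, (0, 0), goal))       # a shortest 7-move path from (0, 0) to (4, 3)
```

With an admissible heuristic such as the Manhattan distance on a grid, the sketch returns a shortest path, in line with the optimality condition h(n) ≤ h*(n) recalled above.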
Problem Statement

The Multi-Agent Path Finding (MAPF) problem is the problem of planning feasible movements for multiple agents [18] such that each one can reach its final location from its respective initial one.

Definition 2 (Multi-Agent Path Finding). Given a finite set A = {a1, . . . , ak} of k agents, given an undirected graph G = (V, E), where V is the set of the vertexes (that correspond to possible locations for the agents) and E the set of edges joining two vertexes (representing the possible transitions between two locations), and given an initial start location vS^ai ∈ V and a final location vF^ai for each agent ai, the Multi-Agent Path Finding (MAPF) problem consists in finding a joint feasible plan Π = {πa1, . . . , πak} such that for each πai, πai(vS^ai) = vF^ai, and such that it minimizes a given cost function C(Π).

Fig. 1. The different kinds of conflicts.

In this work, we focus on edges with unitary cost (i.e. all edges have cost 1); extensions in which edges have non-unitary costs are left for future work. We say that a joint plan Π is feasible if no conflict happens between any two different agents. In the literature, the most widely used notions of conflict are the following [18]:
– Vertex conflict: two agents ai, aj ∈ A with i ≠ j occupy the same vertex at the same time. We say that the two agents have a vertex conflict iff ∃ 1 ≤ h ≤ N such that πai[h](vS^ai) = πaj[h](vS^aj).
– Edge conflict: two agents ai, aj ∈ A with i ≠ j aim to use the same edge in the same direction at the same time. We say that the two agents have an edge conflict iff ∃ 1 ≤ h < N such that πai[h](vS^ai) = πaj[h](vS^aj) ∧ πai[h+1](vS^ai) = πaj[h+1](vS^aj).
– Swap conflict: two agents ai, aj ∈ A with i ≠ j aim to use the same edge but in opposite directions at the same time. We say that the two agents have a swap conflict iff ∃ 1 ≤ h < N such that πai[h](vS^ai) = πaj[h+1](vS^aj) ∧ πai[h+1](vS^ai) = πaj[h](vS^aj).
– Follow conflict: two agents ai, aj ∈ A with i ≠ j are such that agent ai wants to occupy, at a given time h, a position that was occupied by agent aj at time h − 1. We say that the two agents have a follow conflict iff ∃ 1 < h ≤ N such that πai[h](vS^ai) = πaj[h−1](vS^aj).

In Fig. 1, we provide a pictorial representation of the vertex, swap, and follow conflicts. The edge conflict is pictorially similar to the swap conflict, where the two agents are in the same location and want to take the same edge. It should be noted that avoiding vertex conflicts avoids edge conflicts by definition. In the literature, two different kinds of cost function C(Π) have been considered: the makespan and the sum of costs (we refer to [18] for a more thorough discussion).
– The makespan is a function that returns the length of the longest plan πai ∈ Π, i.e. C(Π) = MKS(Π) = max_{πai ∈ Π} |πai|. Thus, minimizing the makespan means finding the joint plan whose longest individual plan is as short as possible.
– The sum of costs is a function that returns the sum of the costs of the individual plans πai ∈ Π, i.e. C(Π) = SIC(Π) = Σ_{πai ∈ Π} |πai|. Here we assume that each action costs 1. If a cost ce is associated to each edge e ∈ E, then instead of the length of the plan one has to consider the sum of the costs of the actions in the plan.

The classical multi-agent path finding problem has been proved to be NP-hard, i.e., unless P = NP it is not possible to find an optimal solution in polynomial time [17,26,30]. Notice that the problem is NP-hard when seeking an optimal solution, i.e., a solution that minimizes the objective function, be it the makespan or the sum of individual costs. A small illustrative sketch of the conflict checks and cost functions above is given below.
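As a hedged illustration (not code from the paper), the checks above can be phrased over plans represented as vertex sequences; the padding convention (agents wait at their last vertex once their plan is over) and all names are assumptions made for the sketch.

```python
# Hedged sketch: plans are given as the sequence of occupied vertexes over time.
# Agents are assumed to wait at their last vertex once their plan is over (a padding
# convention chosen for the sketch, not prescribed by the paper).

def padded(plan, horizon):
    return plan + [plan[-1]] * (horizon - len(plan))

def conflicts(pi, pj):
    """Return the kinds of conflict occurring between two single-agent plans."""
    n = max(len(pi), len(pj))
    a, b = padded(pi, n), padded(pj, n)
    found = set()
    for h in range(n):
        if a[h] == b[h]:
            found.add("vertex")
        if h + 1 < n:
            if a[h] == b[h] and a[h + 1] == b[h + 1]:
                found.add("edge")                      # same edge, same direction
            if a[h] == b[h + 1] and a[h + 1] == b[h]:
                found.add("swap")                      # same edge, opposite directions
        if h >= 1 and (a[h] == b[h - 1] or b[h] == a[h - 1]):
            found.add("follow")                        # checked in both directions
    return found

def makespan(joint_plan):
    return max(len(p) - 1 for p in joint_plan)         # actions of the longest plan

def sum_of_costs(joint_plan):
    return sum(len(p) - 1 for p in joint_plan)         # unit action costs assumed

# Two agents traversing the line graph 0-1-2-3 in opposite directions: they must swap.
p1, p2 = [0, 1, 2, 3], [3, 2, 1, 0]
print(conflicts(p1, p2))                               # {'swap', 'follow'} (in some order)
print(makespan([p1, p2]), sum_of_costs([p1, p2]))      # 3 6
```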
3.1

In the literature, several algorithms to solve the MAPF problem have been proposed. These algorithms can be correct and complete (i.e. if they terminate with a solution, then the computed solution is a solution to the given MAPF problem, and if the problem admits no solution the algorithm reports that no solution exists); and they can compute an optimal solution, if it minimizes the given cost function, a bounded-optimal one, if the computed solution minimizes the cost function within a given bound (i.e. there is some degree of freedom), or a non-optimal one, if there is no guarantee of optimality for the computed solution. In the following description, we consider these approaches and the corresponding algorithms: Kornhauser's algorithm [13], Extended A* [24], the Increasing Cost Tree Search (ICTS) [22], Conflict-Based Search (CBS) [21], and the Constraint Programming (CP) and Mixed Integer Linear Programming (MILP) approaches.

Kornhauser's algorithm [13] is a complete but not optimal algorithm that solves the MAPF problem in O(|V|^3). This algorithm considers all the agents in their positions and tries to move one single agent at a time to a free neighboring location, with the aim of finding a way to move all the agents from one arrangement to another. The solution is obtained by decomposing the problem into sub-problems, each one composed of the agents that can reach the same set of nodes and of the sub-graph made of these nodes [19]. This algorithm has been considered very hard to implement efficiently [25].

The Extended A* algorithm considers moving all the agents at the same time, each from its location to a free neighboring one. This results in a search space of |V|^k and a branching factor of (|E|/|V|)^k, which are both exponential in the number of agents and hence intractable [25]. Two extensions were proposed to solve the MAPF problem [24]: Operator Decomposition (OD) and Independence Detection (ID). The first aims at reducing the exponential branching factor, while the other tries to decouple the problem of k agents into smaller problems with fewer agents. The two extensions can also be combined. This algorithm is correct, complete and optimal.

The Increasing Cost Tree Search (ICTS) algorithm is a two-stage search in which a high-level search aims at finding the lengths of the paths for the different agents, while the low-level search carries out the creation of the paths for the various agents under the cost constraints given by the high-level search [22,25]. This algorithm creates a tree called the Increasing Cost Tree (ICT), in which each node contains a vector of the costs Ci of the individual path of each agent ai. The total cost C of the node is given by the result of the objective function applied to the joint plan, and all the nodes at the same level in the tree have the same total cost. The root of the tree is initialized with the costs of the individual paths of the agents as if each were considered in a SAPF problem. If there are no conflicts, then the solution is fine as it is and the algorithm stops. If instead a conflict is found, then k new nodes are created, one for each agent: the i-th node is composed of the solution of the parent, with only the cost of the i-th agent increased by one unit. The idea is the following: if it was not possible to find a conflict-free solution with the given cost vector, it may be possible to find one by allowing the path of one agent to be longer by one; this high-level expansion is sketched below.
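The following is a hedged sketch of that high-level expansion only; the conflict_free function stands in for the low-level search and is an assumption introduced for the example.

```python
from collections import deque

# Hedged sketch of the ICT high-level expansion. `conflict_free(costs)` abstracts the
# low-level search (e.g., the MDD-based procedure described in the text): it must answer
# whether a conflict-free joint plan exists in which agent i's path has cost costs[i].

def icts_high_level(root_costs, conflict_free, max_nodes=100_000):
    """Breadth-first expansion of ICT cost vectors. All nodes at the same depth share the
    same total cost, so the first conflict-free vector found minimises the sum of costs."""
    root = tuple(root_costs)
    seen, frontier = {root}, deque([root])
    while frontier and len(seen) <= max_nodes:
        costs = frontier.popleft()
        if conflict_free(costs):
            return costs
        for i in range(len(costs)):            # k children: agent i's cost increased by one
            child = costs[:i] + (costs[i] + 1,) + costs[i + 1:]
            if child not in seen:
                seen.add(child)
                frontier.append(child)
    return None

# Dummy low-level check used only to exercise the loop: pretend a conflict-free joint
# plan exists as soon as the two agents are allowed a total of at least 7 steps.
print(icts_high_level((3, 3), lambda costs: sum(costs) >= 7))   # (4, 3)
```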
The algorithm continues until a solution is found. The ICT nodes not containing conflicts are called goal nodes. The low-level search is instead the part of the algorithm that has to find a path for the i-th agent of cost Ci such that it reaches its final destination. There may be different implementations for this part of the algorithm: the most trivial would be to start from the initial node, enumerate all the possible paths of length Ci, and check which reach the final node. This, though, may become very expensive, as the number of possible paths of cost Ci may be exponential. The solution proposed in [22] uses a Multi-valued Decision Diagram (MDD) [23], a generalization of binary decision diagrams in the sense that it allows for more than two choices at every node. Basically, the MDD has a single source node, which corresponds to the starting node of the agent. Then, it keeps track of all the neighbors of the source node, adding them only if a path going through them can lead to the final node with cost Ci. This also implies that the MDD has a single sink, which is the final goal of the agent. The problem is then how to choose which path is best to return to the high-level search, since a path may produce more conflicts than another, leading to a bigger and sub-optimal ICT. This is done by taking the cross-product, i.e., merging, of the different MDDs and removing those branches that contain conflicts. We remark that, given the structures of the ICT and of the cross-product of the MDDs, the optimization problem can be reduced to a satisfaction problem: the first ICT node that satisfies the constraint of not having any conflict is also going to be optimal, and the same is true for the paths found in the combination of the MDDs.

The Conflict-Based Search (CBS) algorithm uses two distinct search processes, similarly to ICTS, a high-level and a low-level one, and a tree to solve the MAPF problem. Differently from ICTS, the CBS algorithm builds a Constraint Tree (CT) composed of nodes tracking three elements: i) the joint plan; ii) the cost of the joint plan; iii) a set of constraints associated with the joint plan. The idea is that whenever a joint plan contains a conflict, it is resolved by creating two new nodes with different constraints, which are limitations on an agent's movement. In particular, the original CBS [21] defines a constraint as a negative restriction tuple (ai, n, t), meaning that agent ai is not allowed to be on node n at time t. The protocol works in the following way: the root is built by considering the paths of the agents as in a single-agent path finding (SAPF) problem. Then, the high-level search checks for possible conflicts. Let πi and πj be the plans for agents ai and aj respectively, and suppose that they have a vertex conflict at time t on node n. Then, the high-level search creates two new CT nodes from the parent, one in which agent ai cannot be on node n at time t, and the other in which agent aj cannot be on node n at time t. An improvement to CBS [3] suggests that using two positive constraints and a negative one may produce better results, since the sets of paths that comply with the constraints are disjoint [25].
This means that, instead of having two children from a node, the high-level search creates three children: one in which agent ai must be on node n at time t, one in which agent aj must be on node n at time t, and one in which neither of them is allowed to be on node n at time t. The process of expanding nodes, i.e., creating new children, stops when there are no more conflicts to be solved. Whenever a new node is added, the low-level search is called to find a solution to the problem with the newly added constraints. If a feasible solution can be found, then the node is added to the set of nodes to be further explored. To pick the next node to examine, CBS uses the cost function of the joint plan. Finally, as regards the low-level search, it can be any SAPF algorithm, although it needs to be properly modified to take the constraints into account.

The Constraint Programming (CP) approach leverages a mathematical modeling paradigm in which the problem is encoded as a set of constraints among two kinds of variables: state variables and decision variables. This approach is usually divided into two parts: a modeling part, which shapes the aspects of the problem by introducing variables over specific domains and constraints over such variables; and a solving part, which aims at choosing the values of the decision variables that minimize a given cost function and satisfy the constraints. If the constraints are well-formed, i.e., they correctly cover the variables and their domains, then constraint programming is both optimal and correct. A typical model considers a Boolean variable for each agent, vertex, and time point, together with constraints enforcing that each agent occupies exactly one vertex at each time step and that each vertex hosts at most one agent at a time (thus ensuring no vertex conflict). Agents are positioned on their initial position at the first time step, and must be on their arrival position at the last time step. Agents move along edges towards neighbors of the node on which they are: this ensures the validity of the solution, since an agent cannot jump from one node to another. Once the constraints are fixed, the model can be solved with any off-the-shelf constraint solver, which searches over the possible combinations without violating any constraint.

Experimental Evaluation

In this section, we first provide the high-level details of the implementation of the considered algorithms, and the information about the software and hardware infrastructure used for the experiments. Then we describe the considered industrial scenarios, and we report and critically discuss the results of the experiments.

4.1

For the implementation we have considered only three of the approaches discussed in Sect. 3. We implemented the CP approach and two variants of the CBS family of algorithms, namely the Spanning Tree (ST) and the Time-Dependent Shortest Path (TDSP) variants. CBS ST and CBS TDSP differ in the low-level search used to build the constraint tree. CBS ST builds, in the low-level search, a spanning tree so as to allow the high-level search to choose among different possible paths of the same length. CBS TDSP uses, in the low-level search, a variant of Dijkstra's algorithm [9] to compute shortest paths in which the cost of an edge depends on the time at which the edge is traversed. We do not report here the pseudo-code of the considered algorithms for lack of space, and we refer to [20] for further details; a schematic sketch of the high-level CBS loop described in Sect. 3 is shown below.
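The sketch below is a hedged illustration of that high-level loop with negative constraints only; the low_level and first_vertex_conflict functions abstract the constrained single-agent search and the conflict detection, and are assumptions of the example rather than the CBS ST/TDSP implementations evaluated here.

```python
import heapq
from itertools import count

# Hedged sketch of the high-level CBS loop (negative constraints only).
# `low_level(agent, constraints)` abstracts the constrained single-agent search
# (e.g., a time-dependent Dijkstra) and returns a vertex sequence or None.
# `first_vertex_conflict(plans)` returns (ai, aj, node, t) or None.

def cbs(agents, low_level, first_vertex_conflict):
    tie = count()                                          # tie-breaker for the heap
    root_plans = {a: low_level(a, frozenset()) for a in agents}
    if any(p is None for p in root_plans.values()):
        return None
    cost = lambda plans: sum(len(p) - 1 for p in plans.values())   # sum-of-costs objective
    open_heap = [(cost(root_plans), next(tie), frozenset(), root_plans)]
    while open_heap:
        c, _, constraints, plans = heapq.heappop(open_heap)
        conflict = first_vertex_conflict(plans)
        if conflict is None:
            return plans                                   # conflict-free joint plan
        ai, aj, node, t = conflict
        for agent in (ai, aj):                             # split on the conflicting agents
            new_constraints = constraints | {(agent, node, t)}
            new_plan = low_level(agent, new_constraints)
            if new_plan is not None:                       # keep only solvable CT nodes
                new_plans = dict(plans); new_plans[agent] = new_plan
                heapq.heappush(open_heap,
                               (cost(new_plans), next(tie), new_constraints, new_plans))
    return None
```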
We decided not to implement Kornhauser's algorithm, since it has been considered by the research community to be very hard to implement efficiently [25], and it produces non-optimal solutions. We did not implement the Extended A* algorithm because its large branching factor would make it inapplicable to large industrial scenarios. Finally, we also did not implement the ICTS approach, since it requires knowing a priori possible bounds for the costs of the searched solutions (information that is impractical to obtain for realistic scenarios). All the algorithms have been implemented in C++ using the standard template libraries. For the CP algorithm we have leveraged the latest release of the C++ API of the CPLEX commercial constraint solver [8]. The source code with the implementation of all the algorithms is available at our open repository. We ran all the experiments on an AMD Ryzen 3700X, an 8-core CPU at 3.6 GHz base clock, with 32 GB of RAM, running Linux. We considered run-time timeouts of 1 s, 10 s, and 60 s to mimic the response-time expectations of realistic industrial scenarios.

4.2 Industrial Scenarios

For the experiments we considered a real warehouse taken from a collaboration with a company operating in the field of robot-assisted warehouses. The entire warehouse and its graph representation are depicted in Fig. 2. The topological graph obtained from the map consists of 414 nodes with undirected edges.

Fig. 2. The schema of the real warehouse considered for the experiments.

For the experiments we decomposed the warehouse into sub-problems as follows: i) WH1, which corresponds to the gold rectangle in the top right corner of Fig. 2; ii) WH2, which corresponds to the blue rectangle in the bottom left corner of Fig. 2; iii) WH2_1, which corresponds to the red rectangle in the bottom left corner of Fig. 2; iv) WH2_2, which corresponds to the green rectangle in the bottom left corner of Fig. 2; v) WH2_1_1, which corresponds to the top four rows of the red rectangle; vi) WH2_1_2, which corresponds to the bottom four rows of the red rectangle; vii) WH2_2_1, which corresponds to the top four rows of the green rectangle; viii) WH2_2_2, which corresponds to the bottom four rows of the green rectangle. For each scenario, we considered problems with increasing numbers of robotic agents taken from {2, 5, 10, 20} and increasing numbers of goals taken from {1, 2, 5, 10, 20}. These numbers are the result of discussions with the company owning the reference warehouse. The goals have been generated to resemble typical goals taken from the logistic activities carried out in the considered warehouse. In the results, we report the name of each scenario followed, in parentheses, by the number of problems considered in that scenario (e.g., WH2_2_2 (10) means the scenario WH2_2_2 with ten problems). For each experiment, we report the number of problems solved among those considered, and the average search time in milliseconds (ms) required for the solved problems. We use TO to indicate that the algorithm was not able to find a solution within the given time budget for any of the problems in the scenario.
The results are reported in Table 1: the upper-left table reports the results for CBS with TDSP; the upper-right table reports the results for CBS with ST; the lower table reports the results for CP. For CP we also report the average memory in megabytes (MB) required to find a solution, or used before hitting the timeout.

Table 1. Results for CBS with TDSP (upper left), CBS with ST (upper right), and CP (bottom).

The results clearly show that none of the considered algorithms was able to solve all the problems within the considered budget constraints, and only very few cases were solved (e.g. in WH2_2, WH2_2_2, WH1). In particular, the results show that the CBS algorithms are able to solve slightly more scenarios than CP (which solves only 3 cases within the 60 s time boundary, with the best run completed in 1.1 s). More specifically, the results show that the CBS algorithms are complementary. Indeed, for WH2_2_2 CBS TDSP is slower than CBS ST, whereas for WH1 CBS TDSP is able to solve one instance while CBS ST solves none, always ending in TO. CP always performs worse than the CBS algorithms. As the table with the results for CP shows, this approach consumes a larger amount of memory with respect to the other approaches. Indeed, each time it does not find a solution, it increases the number of time steps by one unit, thus resulting in a much larger complexity due to the variable matrix used.

These results clearly show that, although these algorithms have been thoroughly studied in the literature and experimented on random graphs with random goals, when applied to realistic scenarios they fail to find solutions within typical industrial resource budgets. A more thorough analysis shows that the cases where no solution was found (even with larger resource budgets) are cases where two robotic agents need to follow the same shortest path but in opposite directions, thus being required to swap places on one edge (see Fig. 3).

Fig. 3. A simple scenario not solvable by CBS.

In these cases, a simple strategy would move one of the two agents into a lateral position (if available) to allow the other to pass, and then go back to the previous location (thus taking a longer path that visits the same node more than once). The difficulty in solving such a situation lies in differentiating between a waiting action, which can be performed on the node on which the agent currently is, and the action of exploring the neighbors of the node. Algorithms such as TDSP and ST are not meant to visit the same node multiple times. To solve this problem, both the high-level and low-level searches of CBS should be modified, the former to consider multiple possible nodes for a given time step h in the plan of an agent, and the latter to allow moving over the same node multiple times. Both changes are already planned as future work.

In this paper, we studied the performance of state-of-the-art MAPF algorithms on a set of scalable industrial scenarios, all derived from a real warehouse of a large distribution company. The results show that the CP approach finds optimal solutions, but it is applicable only to very small scenarios. The CBS approaches scale better and allow more problems to be solved within the given resource budgets. However, these approaches fail to find a solution in cases where some agent would be required to move to another location and then come back to the same location to continue its motion, so as to allow other agents to exit from conflicting situations.
This particular case is very likely to happen by construction of the graph: the aisles are long, and they can basically be occupied by just one agent at a time without incurring swap conflicts. The results show that there is no clear winner, and all the approaches have pros and cons. This work paves the way for several future works, ranging from investigating new heuristics for the hard problems that appear in real scenarios, to new algorithms that combine the pros of each approach or that use divide-and-conquer approaches to leverage different low-level search strategies. Moreover, we also aim to extend the work so that each agent considers not only a set of tasks, but also other information such as battery level and the possibility to recharge. Also, while in this work we have taken mission planning for granted, integrating mission planning into the MAPF problem may lead to more effective ways of allocating tasks to the different agents so as to minimize the overall cost of the computed solution. The final goal is an open-source framework containing different MAPF solvers that can be used to tackle the problem and that may be integrated into platforms such as ROS. For this same reason, the algorithms have been re-implemented instead of employing pre-existing code; moreover, any existing code would have had to be adapted to our use case, possibly leading to a loss in performance.

References
1. Arnold, R.D., Yamaguchi, H., Tanaka, T.: Search and rescue with autonomous flying robots through behavior-based cooperative intelligence. J. Int. Humanit. Action 3(1), 1–18 (2018). https://doi.org/10.1186/s41018-018-0045-4
2. Bhattacharya, S., Likhachev, M., Kumar, V.: Topological constraints in search-based robot path planning. Auton. Robots 33, 273–290 (2012). https://doi.org/10.1007/s10514-012-9304-1
3. Boyarski, E., et al.: ICBS: the improved conflict-based search algorithm for multi-agent pathfinding (2015)
4. Bragança, S., Costa, E., Castellucci, I., Arezes, P.M.: A brief overview of the use of collaborative robots in industry 4.0: human role and safety (2019). https://doi.org/10.1007/978-3-030-14730-3_68
5. Brett, P., Taylor, R., Proops, D., Coulson, C., Reid, A., Griffiths, M.: A surgical robot for cochleostomy, pp. 1229–1232. IEEE (2007). https://doi.org/10.1109/IEMBS.2007.4352519
6. Brumitt, B., Stentz, A.: Dynamic mission planning for multiple mobile robots, pp. 2396–2401. IEEE (1996). https://doi.org/10.1109/ROBOT.1996.506522
7. Chen, Y.Z., Shen, S.F., Chen, T., Yang, R.: Path optimization study for vehicles evacuation based on Dijkstra algorithm. Procedia Eng. 71, 159–165 (2014). https://doi.org/10.1016/j.proeng.2014.04.023
8. IBM Corporation: IBM ILOG CPLEX Optimization Studio
9. Dijkstra, E.W.: A note on two problems in connexion with graphs. Numer. Math. 1, 269–271 (1959). https://doi.org/10.1007/BF01386390
10. Ferrari, F., et al.: Human–robot interaction analysis for a smart walker for elderly: the ACANTO interactive guidance system. Int. J. Soc. Robot. 12(2), 479–492 (2019). https://doi.org/10.1007/s12369-019-00572-5
11. Hart, P.E., Nilsson, N.J., Raphael, B.: A formal basis for the heuristic determination of minimum cost paths. IEEE Trans. Syst. Sci. Cybern. 4, 100–107 (1968). https://doi.org/10.1109/TSSC.1968.300136
12. Javaid, M., Haleem, A., Singh, R.P., Suman, R.: Substantial capabilities of robotics in enhancing industry 4.0 implementation. Cogn. Robot. 1, 58–75 (2021). https://doi.org/10.1016/j.cogr.2021.06.001
13. Kornhauser, D., Miller, G., Spirakis, P.: Coordinating pebble motion on graphs, the diameter of permutation groups, and applications, pp. 241–250. IEEE (1984). https://doi.org/10.1109/SFCS.1984.715921
14. Latombe, J.C.: Robot Motion Planning, vol. 124. Springer Science & Business Media, Berlin, Heidelberg (2012). https://doi.org/10.1007/978-1-4615-4022-9
15. Pouke, M.: Using GPS data to control an agent in a realistic 3D environment, pp. 87–92. IEEE, September 2013. https://doi.org/10.1109/NGMAST.2013.24
16. Qing, G., Zheng, Z., Yue, X.: Path-planning of automated guided vehicle based on improved Dijkstra algorithm, pp. 7138–7143. IEEE, May 2017. https://doi.org/10.1109/CCDC.2017.7978471
17. Ratner, D., Warmuth, M.K.: Finding a shortest solution for the n × n extension of the 15-puzzle is intractable (1986)
18. Stern, R., et al.: Multi-agent pathfinding: definitions, variants, and benchmarks. CoRR abs/1906.08291 (2019)
19. Röger, G., Helmert, M.: Non-optimal multi-agent pathfinding is solved (since 1984) (2012)
20. Saccon, E.: Comparison of Multi-Agent Path Finding Algorithms in an Industrial Scenario. Master's thesis, Department of Information Engineering and Computer Science, University of Trento, July 2022. https://www5.unitn.it/Biblioteca/en/Web/RichiestaConsultazioneTesi
21. Sharon, G., Stern, R., Felner, A., Sturtevant, N.R.: Conflict-based search for optimal multi-agent pathfinding. Artif. Intell. 219, 40–66 (2015). https://doi.org/10.1016/j.artint.2014.11.006
22. Sharon, G., Stern, R., Goldenberg, M., Felner, A.: The increasing cost tree search for optimal multi-agent pathfinding. Artif. Intell. 195, 470–495 (2013). https://doi.org/10.1016/j.artint.2012.11.006
23. Srinivasan, A., Ham, T., Malik, S., Brayton, R.: Algorithms for discrete function manipulation, pp. 92–95. IEEE Computer Society Press. https://doi.org/10.1109/ICCAD.1990.129849
24. Standley, T.: Finding optimal solutions to cooperative pathfinding problems, vol. 24, pp. 173–178 (2010)
25. Stern, R.: Multi-agent path finding – an overview (2019). https://doi.org/10.1007/978-3-030-33274-7_6
26. Surynek, P.: An optimization variant of multi-robot path planning is intractable, vol. 2, July 2010
27. Veloso, M.M., Biswas, J., Coltin, B., Rosenthal, S.: CoBots: robust symbiotic autonomous mobile service robots, pp. 4423–4429, July 2015
28. Wang, H., Yu, Y., Yuan, Q.: Application of Dijkstra algorithm in robot path-planning, pp. 1067–1069. IEEE (2011). https://doi.org/10.1109/MACE.2011.5987118
29. Wurman, P.R., D'Andrea, R., Mountz, M.: Coordinating hundreds of cooperative, autonomous vehicles in warehouses. AI Mag. 29, 9 (2008). https://doi.org/10.1609/aimag.v29i1.2082, https://ojs.aaai.org/index.php/aimagazine/article/view/2082
30. Yu, J., LaValle, S.M.: Structure and intractability of optimal multi-robot path planning on graphs, pp. 1443–1449. AAAI Press (2013)

Logic-Based Ethical Planning
Umberto Grandi1, Emiliano Lorini1, Timothy Parker1, and Rachid Alami2
1 IRIT, CNRS, Toulouse University, Toulouse, France
2 LAAS, CNRS, Toulouse, France

Abstract. In this paper we propose a framework for ethical decision-making in the context of planning, with intended application to robotics. We put forward a compact but highly expressive language for ethical planning that combines linear temporal logic with lexicographic preference modelling.
This original combination allows us to assess plans with respect both to an agent's values and to its desires, introducing the novel concept of the morality level of an agent and moving towards multi-goal, multi-value planning. We initiate the study of the computational complexity of planning tasks in our setting, and we discuss potential applications to robotics.

In ethical planning the planning agent has to find a plan for promoting a certain number of ethical values. The latter include both abstract values such as justice, fairness, reciprocity, equity, respect for human integrity, and more concrete ones such as "greenhouse gas emissions are reduced". Unlike classical planning, in which the goal to be achieved is unique, in ethical planning the agent can have multiple and possibly conflicting values, that is, values that cannot be concomitantly satisfied. A problem typical of ethical planning is that of facing a moral struggle, which is "...provoked by inconsistencies between value commitments and information concerning the kinds of decision problems which arise..." [18, p. 8]. Consequently, in ethical planning the agent needs to evaluate and compare the ideality (or goodness) of different plans depending on how many and which values are promoted by each of them.

In this paper our intended application field is that of robotics. Including ethical considerations in robotics planning requires (at least) three steps. First, identify ethically sensitive situations in the robotics realm and how these situations are represented. Planning seems to be the first candidate in which to include ethical considerations, thus we assume that values or ethical judgments are expressed about the results of plans. Second, design a language to express such values, bearing in mind that they can be, and often are, potentially conflicting in multiple ways: among values, between a value and a goal, or between a value and good practices. Such a value representation language needs to be compact and computationally tractable. Third, complete the picture of ethical planning by designing algorithms that compare plans based on the ethical values.

In this paper we put forward a framework for ethical planning based on a simple temporal logic language to express both an agent's values and goals. For ease of exposition we focus on single-agent planning with deterministic sequential actions in a known environment. Our model borrows from the existing literature on planning and combines it in an original way with research on compact representation languages for preferences. The latter is a widely studied topic in knowledge representation, where logical and graphical languages are proposed to compactly represent the preferences of an agent over a combinatorial space of alternatives, often described by means of variables. In particular, we commit to a prioritised, or lexicographic, approach to resolve the possibly arising inconsistencies among goals, desires, and good practices in a unified planning model.

Related Work

There is considerable research in the field of ethics and AI; see Müller [25] for a general overview.
Popular ethical theories for application are consequentialism, deontology, and virtue ethics (see Copp [9] for a philosophical introduction, and Jenkins et al. [15], Powers [27], and Vallor [31] for a discussion of these three theories in robotics). Our approach should be able to work with any notion of "good actions" but is probably a most natural fit for pluralistic consequentialism [30]. While there is a lot of work at the theoretical/abstract level, there is comparatively less that examines how ethical reasoning in artificial agents could actually be done in practice. There are approaches both in terms of formal models [12] and in terms of allowing agents to learn ethical values [2]. Yu et al. [33] provide a recent survey of this research area.

The closest approaches to ours are the recent work on (i) logics for ethical reasoning and (ii) the combination of a compact representation language, such as conditional preference networks, with decision-making in an ethically sensitive domain. The former are based on different methodologies including event calculus (ASP) [6], epistemic logic and preference logic [22,24], the BDI (belief, desire, intention) agent language [11], and classical higher-order logic (HOL) [5]. The latter was presented in "blue sky" papers [21,28], complemented with a technical study of distances between CP-nets [20] and, more recently, with an empirical study on human ethical decision-making [4]. CP-nets are a compact formalism to order states of the world described by variables. We take inspiration from these lines of work, but depart from them in two respects. First, robotics applications are dynamic ones, and ethical principles must be expressed over time. Hence, unlike existing logics for ethical reasoning, our focus is on a specification language for values based on linear temporal logic. Second, ethical decision-making in robotic applications requires mixing potentially conflicting values with the desires of the agent and expressing the notion of a plan, and CP-nets alone are not sufficient for this.

In the field of robotics, there are approaches to enabling artificial agents to compute ethical plans. The evaluative component, which consists in assessing the "goodness" of an action or a plan in relation to the robot's values, is made explicit by Arkin et al. [3] and Vanderelst and Winfield [32]. Evans et al. [13] focus on a collision scenario involving an autonomous vehicle, proposing to prioritise the ethical claims depending on the situation, e.g. by giving more priority to the claims of the more endangered agents. Related work explores the design of planning algorithms that help robots produce socially acceptable plans by assigning weights to social rules [1]. In preference-based planning by Bienvenu et al. [7], plans are compared relative to a single (possibly lexicographic) preference formula about temporal properties. Similarly, Lindner et al. [19] evaluate the permissibility of plans according to a specific ethical principle such as the deontological principle, the utilitarian principle, the do-no-harm principle, or the double effect principle. In our approach plans are compared relative to sets of values. Comparison of alternatives (e.g., plans, states, histories) relative to a set of values is an essential aspect of ethics which is not considered in these two works. As we will show in Sect. 3.5, it opens up the possibility of formalizing the notion of moral conflict.
In this section, we present the formal model of ethical evaluation and planning, which consist, respectively, in comparing the goodness of plans and in finding the best plan relative to a given base of ethical values.

3.1 LTL Language

Let Prop be a countable set of atomic propositions and let Act be a finite non-empty set of action names. Elements of Prop are noted p, q, . . ., while elements of Act are noted a, b, . . .. We assume the existence of a special action skip. The set of states is S = 2^Prop, with elements s, s′, . . . In order to represent the agent's values, we introduce the language of LTLf (Linear Temporal Logic over Finite Traces) [10,26], noted L_LTLf(Prop) (or L_LTLf), defined by the following grammar:

ϕ ::= p | ¬ϕ | ϕ1 ∧ ϕ2 | Xϕ | ϕ1 U ϕ2,

with p ranging over Prop. X and U are the operators "next" and "until" of LTLf. The operators "henceforth" (G) and "eventually" (F) are defined in the usual way: Gϕ =def ¬(⊤ U ¬ϕ) and Fϕ =def ¬G¬ϕ. The propositional logic fragment of L_LTLf is noted L_PL and is defined in the usual way. We will use L_PL to describe the effect preconditions of the agent's actions.

3.2

The notion of history is needed for interpreting formulas in L_LTLf. We define a k-history to be a pair H = (Hst, Hact) with Hst : [0, k] → S and Hact : [1, k] → Act. A history specifies the actual configuration of the environment at a certain time point and the action executed by the agent that leads to the next state. The set of k-histories is noted Hist_k. The set of histories is Hist = ∪_{k∈N} Hist_k. The semantic interpretation of formulas in L_LTLf relative to a k-history H ∈ Hist and a time point t ∈ [0, k] goes as follows (we omit the boolean cases, which are defined as usual):

H, t |= p ⟺ p ∈ Hst(t),
H, t |= Xϕ ⟺ t < k and H, t+1 |= ϕ,
H, t |= ϕ1 U ϕ2 ⟺ ∃t′ ≥ t : t′ ≤ k and H, t′ |= ϕ2 and ∀t″ ≥ t : if t″ < t′ then H, t″ |= ϕ1.

3.3 Action Theory

We suppose actions in Act are described by an action theory γ = (γ+, γ−), where γ+ and γ− are, respectively, the positive and negative effect precondition functions γ+ : Act × Prop → L_PL and γ− : Act × Prop → L_PL. The truth of γ+(a, p) at the current state guarantees that proposition p will be true in the next state when action a is executed, while the truth of γ−(a, p) guarantees that proposition p will be false in the next state when action a is executed. We stipulate that if γ+(a, p) and γ−(a, p) are concomitantly true at a given state and action a is executed, then the truth value of p will not change in the next state. The latter captures an inertial principle for fluents.

Definition 1 (Action-compatible histories). Let γ = (γ+, γ−) be an action theory and let H = (Hst, Hact) be a k-history. We say H is compatible with γ if the following condition holds, for every t ∈ [1, k] and for every a ∈ Act: if Hact(t) = a then
Hst(t) = ( Hst(t−1) \ {p ∈ Prop : H, t−1 |= ¬γ+(a, p) ∧ γ−(a, p)} ) ∪ {p ∈ Prop : H, t−1 |= γ+(a, p) ∧ ¬γ−(a, p)}.
The set of γ-compatible histories is noted Hist(γ).

3.4

Let us now move from the notion of action to the notion of plan. Given k ∈ N, a k-plan is a function π : {1, . . . , k} → Act. The set of k-plans is noted Plan_k. The set of plans is Plan = ∪_{k∈N} Plan_k.
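As a hedged illustration of the semantics above (not part of the paper), the following minimal evaluator checks an LTLf formula over a finite history given as a list of sets of true propositions; the nested-tuple formula encoding is an assumption of the sketch.

```python
# Hedged sketch: a minimal evaluator for the LTLf fragment above over a finite trace.
# States are sets of true atomic propositions; formulas are nested tuples such as
# ("U", p, q) or ("X", ("not", p)) -- this encoding is an illustrative assumption.

def holds(states, t, phi):
    """Does phi hold at time point t of the finite trace `states` (a list of sets)?"""
    k = len(states) - 1
    if isinstance(phi, str):                                  # atomic proposition
        return phi in states[t]
    op = phi[0]
    if op == "true":
        return True
    if op == "not":
        return not holds(states, t, phi[1])
    if op == "and":
        return holds(states, t, phi[1]) and holds(states, t, phi[2])
    if op == "X":                                             # next: needs a successor state
        return t < k and holds(states, t + 1, phi[1])
    if op == "U":                                             # until, within the finite trace
        return any(holds(states, t2, phi[2]) and
                   all(holds(states, t1, phi[1]) for t1 in range(t, t2))
                   for t2 in range(t, k + 1))
    raise ValueError(f"unknown operator {op}")

TOP = ("true",)
def F(phi): return ("U", TOP, phi)                            # eventually: top U phi
def G(phi): return ("not", F(("not", phi)))                   # henceforth: not F not phi

# Toy trace: "green_ok" becomes true at the third state and stays true afterwards.
trace = [{"start"}, set(), {"green_ok"}, {"green_ok"}]
print(holds(trace, 0, F("green_ok")), holds(trace, 0, G("green_ok")))   # True False
```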
The following definition introduces the notion of the history generated by a k-plan π at an initial state s0. It is the action-compatible k-history along which the agent executes the plan π starting at state s0.

Definition 2 (History generated by a k-plan). Let γ = (γ+, γ−) be an action theory, s0 ∈ S and π ∈ Plan_k. Then, the history generated by plan π from state s0 in conformity with the action theory γ is the k-history H^{π,s0,γ} = (Hst^{π,s0,γ}, Hact^{π,s0,γ}) such that:
(i) H^{π,s0,γ} ∈ Hist(γ),
(ii) Hst^{π,s0,γ}(0) = s0,
(iii) ∀k′ s.t. 1 ≤ k′ ≤ k : Hact^{π,s0,γ}(k′) = π(k′).

Given a set of LTLf-formulas Σ, we define Sat(Σ, π, s0, γ) to be the set of formulas from Σ that are guaranteed to be true by the execution of plan π at state s0 under the action theory γ. That is, Sat(Σ, π, s0, γ) = {ϕ ∈ Σ : H^{π,s0,γ}, 0 |= ϕ}.

3.5 Moral Conflicts

An ethical planning agent is likely to have multiple values that it wishes to satisfy when making plans. Some of these values will be ethical in nature ("do not harm humans"), and some may not be ("do not leave doors open"). However, the more values the robot has, the more likely it is to experience scenarios where it cannot satisfy all of its values with any given plan, and must violate some of them. In such a scenario, the agent must first work out which subsets of its value base are jointly satisfiable, and then which of those subsets it should choose to satisfy. To this end we define a notion of moral conflict (note that, in line with Levi [18], we refer to any conflict between an agent's values as a "moral conflict" even if some or all of those values are not strictly moral/ethical in nature).

Definition 3 (Moral problem). A moral problem is a tuple M = (Ω, γ, s0) where:
– Ω ⊆ L_LTLf is a set of values (which may or may not be strictly moral in nature);
– γ = (γ+, γ−) is an action theory and s0 is an initial state, as described above.

Definition 4 (Moral conflict). A moral problem M = (Ω, γ, s0) is a moral conflict if:
– ∀k ∈ N, there is no k-plan π such that Sat(Ω, π, s0, γ) = Ω.

In other words, a moral conflict occurs when it is not possible to satisfy all of our values with any given plan. In some cases, a moral conflict may not depend on any particular feature of the start state, but may result simply from the value base and action theory, or even the value base alone. This allows us to define two further notions of moral problem.

Definition 5 (Physical moral problem). A physical moral problem is a pair (Ω, γ) where:
– Ω ⊆ L_LTLf is a set of values;
– γ is an action theory.

Definition 6 (Logical moral problem). A logical moral problem is a set of values Ω ⊆ L_LTLf.

We can also define moral conflict for these moral problems. A physical (logical) moral problem is a physical (logical) value conflict if for every possible start state s0 (and every possible action theory γ), the resultant moral problem M = (Ω, γ, s0) is a moral conflict. By our definition, conflict mirrors the concept of necessity. Necessity would imply that every possible plan satisfies all the values in Ω, whereas conflict implies that no plan satisfies all values. Thus it is interesting to note that our definitions of conflict have mirrors in the philosophical literature [16]. A physical moral conflict mirrors the notion of nomic necessity (necessary given the laws of nature), at least from the perspective of the robot, for whom the action theory comprises the laws of nature, whereas a logical moral conflict mirrors the notion of logical necessity (necessary given the nature of logic).

If an agent is experiencing a moral conflict, one response would be to "temporarily forget" values until it has a satisfiable set.

Definition 7 (Contraction). If M = (Ω, γ, s0) is a moral problem and M′ = (Ω′, γ, s0) is a moral problem, we say that M′ is a contraction of M if:
– Ω′ ⊆ Ω;
– M′ is not a moral conflict.

Note that if M = (Ω, γ, s0) is a moral problem, π is a plan, and Ω′ = Sat(Ω, π, s0, γ), then M′ = (Ω′, γ, s0) must be a contraction of M. In this case, we refer to M′ as the contraction generated by π. This also illustrates that the current notion of contraction is unhelpful for an agent attempting to select a plan in a moral conflict, as all plans generate contractions. What would be helpful is some notion of a "minimal" or "ideal" contraction that sacrifices as few values as possible.

Definition 8 (Minimal contraction). If M = (Ω, γ, s0) is a moral problem and M′ = (Ω′, γ, s0) is a contraction of M, M′ is:
– a qual-minimal contraction if there is no contraction M″ = (Ω″, γ, s0) such that Ω′ ⊂ Ω″;
– a quant-minimal contraction if there is no contraction M″ such that |Ω′| < |Ω″|.

Proposition 1. If M = (Ω, γ, s0) is a moral problem and is not a moral conflict, then the only qual-minimal and quant-minimal contraction of M is M itself.
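A hedged, brute-force illustration of Definition 4 follows, reusing the holds, F and G helpers sketched earlier; the precondition-free add/delete actions are a deliberate simplification of the γ+/γ− effect precondition functions, introduced only for the example.

```python
from itertools import product

# Hedged sketch: a brute-force check for moral conflict up to a bounded horizon,
# reusing `holds`, `F`, `G` from the LTLf sketch above. Actions are reduced here to
# simple (precondition-free) add/delete effects -- a simplification of the gamma+/gamma-
# functions, for illustration only.

def generate_history(s0, plan, effects):
    """Return the state sequence obtained by applying the plan's actions from s0."""
    states = [frozenset(s0)]
    for a in plan:
        adds, dels = effects[a]
        states.append(frozenset((states[-1] - dels) | adds))
    return states

def satisfied(values, plan, s0, effects):
    states = [set(s) for s in generate_history(s0, plan, effects)]
    return {phi for phi in values if holds(states, 0, phi)}

def is_moral_conflict(values, s0, effects, max_horizon):
    """True if no plan of length <= max_horizon satisfies all values. This is a bounded
    check: a False answer refutes the conflict, a True answer only holds up to the bound."""
    actions = list(effects)
    for k in range(max_horizon + 1):
        for plan in product(actions, repeat=k):
            if satisfied(values, plan, s0, effects) == set(values):
                return False
    return True

# Toy domain: `open_door` makes the door open, `close_door` closes it, `skip` does nothing.
effects = {"open_door": ({"door_open"}, set()),
           "close_door": (set(), {"door_open"}),
           "skip": (set(), set())}
values = {F("door_open"), G(("not", "door_open"))}                # jointly unsatisfiable
print(is_moral_conflict(values, set(), effects, max_horizon=3))   # True (up to this bound)
```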
Definition 7 (Contraction). If M = (Ω, γ, s0) is a moral problem and M′ = (Ω′, γ, s0) is a moral problem, we say that M′ is a contraction of M if:
– Ω′ ⊆ Ω;
– M′ is not a moral conflict.

Note that if M = (Ω, γ, s0) is a moral problem, π is a plan, and Ω′ = Sat(Ω, π, s0, γ), then M′ = (Ω′, γ, s0) must be a contraction of M. In this case, we refer to M′ as the contraction generated by π. This also illustrates that the current notion of contraction is unhelpful for an agent attempting to select a plan in a moral conflict, as all plans generate contractions. What would be helpful is some notion of a "minimal" or "ideal" contraction that sacrifices as few values as possible.

Definition 8 (Minimal contraction). If M = (Ω, γ, s0) is a moral problem and M′ = (Ω′, γ, s0) is a contraction of M, then M′ is:
– a qual-minimal contraction if there is no contraction M″ = (Ω″, γ, s0) such that Ω′ ⊂ Ω″;
– a quant-minimal contraction if there is no contraction M″ such that |Ω′| < |Ω″|.

Proposition 1. If M = (Ω, γ, s0) is a moral problem and is not a moral conflict, then the only qual-minimal and quant-minimal contraction of M is M itself.

For either notion of minimality, there will be cases with multiple minimal contractions of a given moral conflict. This can produce unintuitive results: if there is some moral conflict with Ω = {"do not kill humans", "do not leave the door open"} with contractions {"do not kill humans"} and {"do not leave the door open"}, then either notion of minimality will tell you that both contractions are ideal. On the other hand, it does seem that any stronger notion of minimality should at least respect qualitative minimality, since (intuitively), if plan π1 fulfills all of the values fulfilled by π2, and fulfills more values besides, then π1 should be preferred to π2.

Proposition 2. Given a moral conflict M, a contraction M′ is quant-minimal only if it is qual-minimal.

One way to resolve this is to recognise, in line with Levi [18], that some of our values are only used as tiebreakers to separate otherwise-equivalent plans, and should not be considered directly alongside our more important values. To model this, our values are arranged in lexicographically ordered sets, where each set is examined only if the sets above it cannot deliver a verdict.

3.6 Lexicographic Value Base

Together with an action theory and an initial state, an agent's value base constitutes an ethical planning domain.

Definition 9 (Ethical planning domain). An ethical planning domain is a tuple Δ = (γ, s0, Ω) where:
– γ = (γ+, γ−) is an action theory and s0 is an initial state, as specified above;
– Ω = (Ω1, ..., Ωm) is the agent's value base, with Ωk ⊆ L_LTLf for every 1 ≤ k ≤ m.

Ω1 is the agent's set of values with priority 1, Ω2 is the agent's set of values with priority 2, and so on. For notational convenience, given a value base Ω = (Ω1, ..., Ωm), we note dg(Ω) its degree (or arity). The agent's values are used to compute the relative ideality of plans, namely, whether a plan π2 is at least as ideal as another plan π1. Following [24], we call evaluation the operation of computing an ideality ordering over plans from a value base. Building on classical preference representation languages [17], we define the following qualitative criterion of evaluation, noted ⪯^qual_Δ, which compares two plans lexicographically on the basis of inclusion between sets of satisfied values.

Definition 10 (Qualitative ordering of plans). Let Δ = (γ, s0, Ω) be an ethical planning domain with Ω = (Ω1, ..., Ωm) and π1, π2 ∈ Plan.
Then, π1 ⪯^qual_Δ π2 if and only if:
(i) ∃ 1 ≤ k ≤ m s.t. Sat(Ωk, π1, s0, γ) ⊂ Sat(Ωk, π2, s0, γ), and ∀ 1 ≤ k' < k, Sat(Ωk', π1, s0, γ) = Sat(Ωk', π2, s0, γ); or
(ii) ∀ 1 ≤ k ≤ m, Sat(Ωk, π1, s0, γ) = Sat(Ωk, π2, s0, γ).

Note that a quantitative criterion can also be defined by counting the number of satisfied values in each level and, in line with the previous definition, comparing these counts lexicographically. The quantitative criterion, noted ⪯^quant_Δ, compares two plans lexicographically on the basis of comparative cardinality between sets of satisfied values.

Definition 11 (Quantitative ordering of plans). Let Δ = (γ, s0, Ω) be an ethical planning domain with Ω = (Ω1, ..., Ωm) and π1, π2 ∈ Plan. Then, π1 ⪯^quant_Δ π2 if and only if:
(i) ∃ 1 ≤ k ≤ m s.t. |Sat(Ωk, π1, s0, γ)| < |Sat(Ωk, π2, s0, γ)|, and ∀ 1 ≤ k' < k, |Sat(Ωk', π1, s0, γ)| = |Sat(Ωk', π2, s0, γ)|; or
(ii) ∀ 1 ≤ k ≤ m, |Sat(Ωk, π1, s0, γ)| = |Sat(Ωk, π2, s0, γ)|.

This allows us to define another notion of minimal contraction for a moral conflict, namely a minimal contraction with respect to a lexicographic value base.

Definition 12 (Lexicographic-minimal contraction). If M = (Ω, γ, s0) is a moral problem, and Ω = (Ω1, ..., Ωm) is a value base such that ∪Ω = Ω, then M′ = (Ω′, γ, s0) is an Ω-qual-minimal contraction of M if and only if:
(i) Ω′ ⊆ Ω;
(ii) M′ is not a moral conflict;
(iii) if M″ = (Ω″, γ, s0) is also a contraction of M, then there is no k such that:
(a) 1 ≤ k ≤ m and Ω′ ∩ Ωk ⊂ Ω″ ∩ Ωk, and
(b) ∀ 1 ≤ i < k, Ω′ ∩ Ωi = Ω″ ∩ Ωi.

Note that by combining Definitions 11 and 12 we can define a notion of Ω-quant-minimal contraction.

Proposition 3. Given a moral conflict M, a contraction M′ is Ω-qual-minimal or Ω-quant-minimal only if it is qual-minimal.

3.7 Adding Desires

The behavior of autonomous ethical agents is driven not only by ethical values aimed at promoting the good of society, but also by their endogenous motivations, also called desires or goals. Following existing theories of ethical preferences in philosophy, economics and logic [14,23,29], we assume that (i) desires and values are competing motivational attitudes, and (ii) the agent's degree of morality is a function of its disposition to promote the fulfilment of its values at the expense of the satisfaction of its desires. The following definition extends the notion of ethical planning domain with the notion of desire and introduces the novel concept of degree of morality.

Definition 13 (Mixed-motive planning domain). A mixed-motive planning domain is a tuple Γ = (γ, s0, Ω, ΩD, μ) where:
– (γ, s0, Ω) is an ethical planning domain (Definition 9);
– ΩD ⊆ L_LTLf is the agent's set of desires or goals;
– μ ∈ {1, ..., dg(Ω) + 1} is the agent's degree of morality.

A mixed-motive planning domain induces an ethical planning domain in which the agent's set of desires is treated as a set of values whose priority level depends on the agent's degree of morality. Specifically, the lower the agent's degree of morality, the higher the priority of the agent's set of desires in the induced ethical planning domain. In many practical applications it is likely to be desirable to restrict the range of values that μ can take, in order to prevent (for example) the robot's goal from overriding its safety values.
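Definitions 10 and 11 can be checked mechanically, level by level; below is a minimal sketch of both comparisons, assuming the per-level sets Sat(Ωk, π, s0, γ) have already been computed (for instance with the sat helper sketched earlier). The function names are ours.

```python
def qual_leq(sats1, sats2):
    """pi1 <=^qual_Delta pi2 (Definition 10): lexicographic comparison by set inclusion.
    sats1[k], sats2[k] are the sets Sat(Omega_k, pi, s0, gamma) for the two plans."""
    for s1, s2 in zip(sats1, sats2):
        if s1 == s2:
            continue                 # tied at this priority level, look further down
        return s1 < s2               # decided at the first differing level (strict subset)
    return True                      # equal at every level

def quant_leq(sats1, sats2):
    """pi1 <=^quant_Delta pi2 (Definition 11): lexicographic comparison by cardinality."""
    for s1, s2 in zip(sats1, sats2):
        if len(s1) == len(s2):
            continue
        return len(s1) < len(s2)
    return True
```

Note that two plans can be incomparable under the qualitative ordering (both directions of qual_leq return False) when the first level at which they differ contains incomparable sets; the quantitative variant always yields a verdict at that level.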
Definition 14 (Induced ethical planning domain). Let Γ = (γ, s0, Ω, ΩD, μ) be a mixed-motive planning domain. The ethical planning domain induced by Γ is the tuple Δ = (γ, s0, Ω′) such that dg(Ω′) = dg(Ω) + 1, with:
(i) Ω′_μ = ΩD;
(ii) Ω′_k = Ωk for 1 ≤ k < μ;
(iii) Ω′_k = Ω_{k−1} for μ < k ≤ dg(Ω) + 1.

An Example

Consider a blood delivery robot in a hospital. The robot mostly makes deliveries between different storage areas, and sometimes delivers blood to surgeries. The robot may have to deal with various kinds of obstacles in completing its deliveries, but we will consider only one: people blocking the robot. The robot has two methods to resolve this obstacle: it can ask the person to move and then wait for them to do so (ask), or it can use a loud air-horn to "force" them to move (horn). Once the person has moved, the robot can reach its destination (move). We suppose that the robot can sense some things about its environment: it knows whether it is blocked (blocked), whether it is near the operating theatre (theatre) and whether it has reached its destination (destination). We can then define the action model as follows:

γ+(move, destination) = ¬blocked
γ−(ask, blocked) = blocked
γ+(ask, delayed) = ⊤
γ−(horn, blocked) = blocked
γ+(horn, annoyed) = ⊤
γ+(horn, dangerous) = theatre
otherwise, γ±(a, p) = ⊥

The propositions delayed, annoyed and dangerous are used to keep track of the robot's actions; we suppose that using the horn near the operating theatre is dangerous. The values and desires of the robot can be represented as follows:

Ω = (Ω1, Ω2)
Ω1 = {G¬dangerous}
Ω2 = {G¬annoyed}
ΩD = {Fdestination, F(destination ∧ ¬delayed)}

In words, the robot's goal is to reach its destination without delays, its primary value is to never do anything dangerous, and its secondary value is to never be annoying. Let Ω′ be the value base induced by Ω, ΩD and μ = 3. Now we can compare the following 2-plans: π1 = (ask, move) and π2 = (horn, move). If we assume that in the initial state the robot is blocked but far from an operating theatre, we can represent the histories generated by these plans as follows (each state contains exactly the propositions that are true in it):

H^π1: {blocked} —ask→ {delayed} —move→ {delayed, destination}
H^π2: {blocked} —horn→ {annoyed} —move→ {annoyed, destination}

In this case Sat(Ω′, π1, s0, γ) = {G¬dangerous, G¬annoyed, Fdestination} = A ⊇ Ω1 ∪ Ω2, whereas Sat(Ω′, π2, s0, γ) = {G¬dangerous, Fdestination, F(destination ∧ ¬delayed)} = B ⊇ Ω1 ∪ ΩD. Therefore π1 will be preferred to π2. However, if we change the morality level to 2, perhaps to represent an urgent delivery to an ongoing surgery, then the robot will choose plan π2 rather than π1. This illustrates how we can adjust the morality level of the robot to reflect the urgency of its goals. If we move the example to the operating theatre (so that now theatre ∈ s0 rather than theatre ∉ s0), then the robot would not sound its horn even if the delivery were urgent, as Ω1 still overrides ΩD. This also means that for this robot we should restrict μ to 2 or 3 to ensure that being safe is always prioritised over goals. Furthermore, notice that for any lexicographic value structure containing exactly these values and goals, the set of non-dominated plans will always contain π1, π2 or both, since A and B are exactly the qual-minimal contractions of ∪Ω′ given an initial state where the robot is blocked.
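A sketch tying Definition 14 to the example: the desire set is spliced into the value base at priority μ, and the two plans are then compared per level with the qualitative check sketched above. The satisfied-value sets A and B are taken from the text; all function and variable names are our own.

```python
def induced_value_base(omega_levels, omega_d, mu):
    """Definition 14: insert the desire set at priority mu (1-based)."""
    assert 1 <= mu <= len(omega_levels) + 1
    levels = [set(level) for level in omega_levels]
    levels.insert(mu - 1, set(omega_d))
    return levels

omega_1 = {"G not dangerous"}
omega_2 = {"G not annoyed"}
omega_d = {"F destination", "F (destination and not delayed)"}

# Satisfied values per plan, as computed in the text (far from the theatre).
sat_pi1 = {"G not dangerous", "G not annoyed", "F destination"}                      # A
sat_pi2 = {"G not dangerous", "F destination", "F (destination and not delayed)"}    # B

for mu in (3, 2):
    base = induced_value_base([omega_1, omega_2], omega_d, mu)
    sats1 = [sat_pi1 & level for level in base]
    sats2 = [sat_pi2 & level for level in base]
    ask_strictly_better = qual_leq(sats2, sats1) and not qual_leq(sats1, sats2)
    print(mu, "ask plan strictly preferred" if ask_strictly_better else "horn plan preferred or tied")
# mu = 3: the ask plan wins (it satisfies all of Omega_2);
# mu = 2: the desires outrank Omega_2 and the horn plan wins, as in the text.
```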
Computational Complexity

In this section we initiate the study of the computational complexity of ethical planning in our setting. We borrow our terminology from the work of Lang [17] on compact preference representation, but the problems we study have obvious counterparts in the planning literature, as should be clear from the proofs. In the interest of space, all proofs can be found in the appendix.

We begin by studying the problem Conflict, which determines whether a moral problem is also a moral conflict.

Conflict
Input: Moral problem M = (Ω, γ, s0).
Question: Is there some k ∈ N such that there is a k-plan π with Sat(Ω, π, s0, γ) = Ω?

Theorem 1. Conflict is PSPACE-complete.

We then study the case of contractions, in particular, determining whether a given moral problem is a qual-minimal contraction.

Minimal-Contraction
Input: Moral problem M = (Ω, γ, s0), moral problem M′ = (Ω′, γ, s0).
Question: Is M′ a qual-minimal contraction of M?

Theorem 2. Minimal-Contraction is PSPACE-complete.

Neither of these results is particularly technically advanced; indeed, Conflict is almost exactly equivalent to PLANSAT from classical planning [8]. The purpose of these results is to indicate that, quite apart from the issue of how a robot should select the best option when faced with a moral conflict, the task of identifying that the robot is facing a moral conflict and determining all of its options is extremely computationally difficult.

On the subject of planning, we begin by studying the problem Comparison, which, given two k-plans π1 and π2, asks whether π1 ⪯^qual_Δ π2. Despite the apparent complexity of our setting, this problem can be solved efficiently:

Comparison
Input: Ethical planning domain Δ = (γ, s0, Ω), k ∈ N, k-plans π1, π2.
Question: Is it the case that π1 ⪯^qual_Δ π2?

Theorem 3. Comparison is in P.

We then move to the problem of non-dominance, i.e., the problem of determining whether, given a g-plan π, there exists a better k-plan with respect to ⪯^qual_Δ (where g ≤ k).

Non-dominance
Input: Ethical planning domain Δ = (γ, s0, Ω), k ∈ N, g-plan π with g ≤ k.
Question: Is there a k-plan π′ such that π ⪯^qual_Δ π′ and not π′ ⪯^qual_Δ π?

We show that this problem, like most instances of classical planning satisfaction, is PSPACE-complete:

Theorem 4. Non-Dominance is PSPACE-complete.

Proposition 4. Given an ethical planning domain Δ = (γ, s0, Ω), a k-plan π and S = Sat(∪Ω, π, s0, γ), π is non-dominated for Δ if and only if M = (S, γ, s0) is an Ω-qual-minimal contraction of (∪Ω, γ, s0).

Theorems 3 and 4 are to be interpreted as baseline results showing the computational feasibility of our setting for ethical planning with LTLf. One clear direction for future work would expand on the computational complexity analysis, identifying tractable fragments and exploring their expressivity in ethical applications.

An important property for an ethical planner is explainability. While explaining why a particular plan was chosen is difficult to do succinctly (even for humans), a simpler problem is to explain why the chosen plan was better than another proposed alternative. Our approach enables this in a way that is both computationally straightforward and intuitively understandable to humans, since by the lexicographic ordering of plans there always exists a single value or set of values that decides between two plans.

We put forward a novel setting for ethical planning obtained by combining a simple logical temporal language with lexicographic preference modelling. Our setting applies to planning situations with a single agent who has deterministic and instantaneous actions to be performed sequentially in a static and known environment.
Aside from the addition of values, our framework differs from classical planning in two aspects, by having multiple goals and by allowing temporal goals. In particular, the expressiveness of LTL means that we can express a wide variety of goals and values, including complex temporal values such as “if the weather is cold, close external doors immediately after opening them”, with a computational complexity equivalent to that of standard planners. As a limitation, the system is less able to express values that tend to be satisfied by degree rather than absolutely or not at all. Among the multiple directions for future work that our definitions open, we plan to study the multi-agent extension with possibly conflicting values among agents, moving from plans to strategies (functions from states or histories to actions), from complete to incomplete information, and, most importantly, test our model by implementing it in simple robotics scenarios. Furthermore, given the computational complexity of Conflict, Mininal-Contraction and Non-Dominance, it may often be the case that in practical applications we cannot guarantee finding a non-dominated plan. Therefore, it would be valuable to find more tractable algorithms that at least guarantee some degree of approximation of a non-dominated plan, or restrictions (likely to the language or action theory) that improve tractability of the problem. Acknowledgements. This work is supported by the CNRS project LEXIA (“The Logic of Explanation: From Explainable to Explaining Legal Knowledge-based Systems”). U. Grandi et al. References 1. Alili, S., Alami, R., Montreuil, V.: A task planner for an autonomous social robot. In: Asama, H., Kurokawa, H., Ota, J., Sekiyama, K. (eds.) Distributed Autonomous Robotic Systems 8. Springer, Berlin, Heidelberg (2009). https://doi.org/10.1007/ 978-3-642-00644-9_30 2. Anderson, M., Anderson, S.L.: Geneth: a general ethical dilemma analyzer. Paladyn (Warsaw) 9(1), 337–357 (2018) 3. Arkin, R.C., Ulam, P., Wagner, A.R.: Moral decision making in autonomous systems: enforcement, moral emotions, dignity, trust, and deception. Proc. IEEE 100(3), 571–589 (2012) 4. Awad, E., et al.: When is it acceptable to break the rules? Knowledge representation of moral judgement based on empirical data. CoRR abs/2201.07763 (2022) 5. Benzmüller, C., Parent, X., van der Torre, L.W.N.: Designing normative theories for ethical and legal reasoning: logiKEy framework, methodology, and tool support. Artif. Intell. 287, 103–348 (2020) 6. Berreby, F., Bourgne, G., Ganascia, J.: A declarative modular framework for representing and applying ethical principles. In: Proceedings of the 16th Conference on Autonomous Agents and MultiAgent Systems (AAMAS) (2017) 7. Bienvenu, M., Fritz, C., McIlraith, S.A.: Planning with qualitative temporal preferences. In: Doherty, P., Mylopoulos, J., Welty, C.A. (eds.) Proceedings of the 10th International Conference on Principles of Knowledge Representation and Reasoning (KR), pp. 134–144. AAAI Press (2006) 8. Bylander, T.: The computational complexity of propositional STRIPS planning. Artif. Intell. 69(1–2), 165–204 (1994) 9. Copp, D.: The Oxford Handbook of Ethical Theory. Oxford University Press, Oxford (2007) 10. De Giacomo, G., Vardi, M.Y.: Linear temporal logic and linear dynamic logic on finite traces. In: Rossi, F. (ed.) Proceedings of the 23rd International Joint Conference on Artificial Intelligence (IJCAI), pp. 854–860. IJCAI/AAAI (2013) 11. 
Dennis, L.A., Fisher, M., Slavkovik, M., Webster, M.: Formal verification of ethical choices in autonomous systems. Robot. Auton. Syst. 77, 1–14 (2016) 12. Dennis, L.A., del Olmo, C.P.: A defeasible logic implementation of ethical reasoning. In:1st International Workshop on Computational Machine Ethics (CME) (2021) 13. Evans, K., de Moura, N., Chauvier, S., Chatila, R., Dogan, E.: Ethical decision making in autonomous vehicles: The AV ethics project. Sci. Eng. Ethics 26(6), 3285–3312 (2020) 14. Harsanyi, J.: Utilitarianism and beyond. In: Sen, A.K., Williams, B. (eds.) Morality and the Theory of Rational Behaviour. Cambridge University Press, Cambridge (1982) 15. Jenkins, R., Talbot, B., Purves, D.: When robots should do the wrong thing. In: Robot Ethics 2.0. Oxford University Press, New York (2017) 16. Kment, B.: Varieties of Modality. In: Zalta, E.N. (ed.) The Stanford Encyclopedia of Philosophy. Spring (2021) 17. Lang, J.: Logical preference representation and combinatorial vote. Ann. Math. Artif. Intell. 42(1–3), 37–71 (2004) 18. Levi, I.: Hard Choices: Decision Making Under Unresolved Conflict. Cambridge University Press, Cambridge (1990) Logic-Based Ethical Planning 19. Lindner, F., Mattmüller, R., Nebel, B.: Moral permissibility of action plans. In: Proceedings of the 33rd AAAI Conference on Artificial Intelligence (AAAI), pp. 7635–7642. AAAI Press (2019) 20. Loreggia, A., Mattei, N., Rossi, F., Venable, K.B.: On the distance between cpnets. In: Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems (AAMAS) (2018) 21. Loreggia, A., Rossi, F., Venable, K.B.: Modelling ethical theories compactly. In: The Workshops of the 31st AAAI Conference on Artificial Intelligence (2017) 22. Lorini, E.: A logic for reasoning about moral agents. Logique Analyse 58(230), 177–218 (2015) 23. Lorini, E.: Logics for games, emotions and institutions. FLAP 4(9), 3075–3113 (2017) 24. Lorini, E.: A logic of evaluation. In: Proceedings of the 20th International Conference on Autonomous Agents and Multiagent Systems (AAMAS), pp. 827–835. ACM (2021) 25. Müller, V.C.: Ethics of artificial intelligence and robotics. In: Zalta, E.N. (ed.) The Stanford Encyclopedia of Philosophy. Summer 2021 (2021) 26. Pnueli, A.: The temporal logic of programs. In: Proceedings of the 18th Annual Symposium on Foundations of Computer Science (FOCS) (1977) 27. Powers, T.M.: Deontological machine ethics. In: Anderson, M., Anderson, S.L., Armen, C. (eds.) Association for the Advancement of Artificial Intelligence Fall Symposium Technical Report (2005) 28. Rossi, F., Mattei, N.: Building ethically bounded AI. In: The 33rd AAAI Conference on Artificial Intelligence (AAAI) (2019) 29. Searle, J.: Rationality in Action. Cambridge University Press, MIT Press (2001) 30. Sen, A.: On Ethics and Economics. Basil Blackwell, Oxford (1987) 31. Vallor, S.: Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting. Oxford University Press, New York (2016) 32. Vanderelst, D., Winfield, A.F.T.: An architecture for ethical robots inspired by the simulation theory of cognition. Cogn. Syst. Res. 48, 56–66 (2018) 33. Yu, H., Shen, Z., Miao, C., Leung, C., Lesser, V.R., Yang, Q.: Building ethics into artificial intelligence. 
In: Proceedings of the 27th International Joint Conference on Artificial Intelligence (IJCAI) (2018) A Hybrid Recommender System with Implicit Feedbacks in Fashion Retail Ilaria Cestari1,2 , Luigi Portinale2,3(B) , and Pier Luigi Riva1 1 ORS Group, Roddi, Italy {ilaria.cestari,pierluigi.riva}@ors.it Computer Science Institute, DiSIT, Univ. Piemonte Orientale, Alessandria, Italy [emailprotected] 3 Inferendo srl, Alessandria, Italy Abstract. In the present paper we propose a hybrid recommender system dealing with implicit feedbacks in the domain of fashion retail. The proposed architecture is based on a collaborative-filtering module taking into account the fact that users feedbacks are not explicit scores about the items, but are obtained through user interactions with the products in terms of number of purchases; moreover, a second module provides a knowledge-based contextual post-filtering, based on both customer-oriented and business-oriented objectives. We finally present a case study where “look-oriented” recommendations have been implemented for a specific fashion retail brand. Keywords: Recommender systems architecture · Fashion retail · Implicit feedbacks · Hybrid Recommender Systems (RS) are software products based on machine learning having the goal of learning user preferences for specific items or services in very different contexts, particularly e-commerce and on-line retail. They can employ various methods such as collaborative filtering, content-based, hybrid, and knowledge-based approaches [13]. The most widely adopted approaches are those based on collaborative filtering; the idea is that user preferences about specific items can be captured by looking at the interactions such users have on the set of available items. In general, one can think to the user-item interaction as a “feedback” the user provides with respect to the item. Formally, given a set of m users U , a set of n items I and a set of possible feedbacks F , we can define a feedback matrix R(m×n) = {rij = f |i ∈ U, j ∈ I, f ∈ F }. In the most general case, values in set F are ranked preferences expressed in natural numbers (e.g., from 1 stars up to 5 stars). In this situation we talk about explicit feedbacks, and a special case is that of binary feedbacks, where F = {0, 1} (i.e., like, dislike). However, very often users are not able or willing to leave explicit feedbacks, and what c The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 A. Dovier et al. (Eds.): AIxIA 2022, LNAI 13796, pp. 212–224, 2023. https://doi.org/10.1007/ A Hybrid Recommender System with Implicit Feedbacks in Fashion Retail can be done is to “count” the interactions between users and items. In this case rij ≥ 0 is just the number of times user i has interacted with item j. Interactions must be defined as specific actions such as item search, item view, item purchase or others. Of course, different kind of interactions can have different meaning, leading to different information concerning the user preferences. For instance, some actions can be considered as positive (purchase, addition to cart), while others are actually negative (removal from cart). Positive and negative actions should be treated according to their meaning (for example adding 1 to rij if the action is positive and subtracting 1 when it is negative). 
Moreover, even if all the actions are positive, they may have different relevance (e.g., a search is usually less indicative of a preference than a purchase); the different roles of the different actions can be taken into account directly in the collaborative filtering process [6, Chapter 4], or by resorting to some hybrid form of recommendation taking into account multiple kinds of knowledge [2,9].

In the present paper we consider implicit feedbacks in the form of positive interactions, counting the number of purchases of an item by a given user. We address the problem of implicit feedbacks by resorting to the confidence model proposed in [8], and by adopting a hybrid architecture where collaborative filtering is complemented with a knowledge-based subsystem taking into account specific business rules. The considered domain is that of fashion retail.

The paper is organized as follows: in Sect. 2, the main concepts about collaborative filtering and the approach proposed to deal with implicit feedbacks are discussed; Sect. 3 illustrates the proposed hybrid architecture, focusing on each module and explaining how they have been developed; in Sect. 4 a case study illustrating the different steps of the recommendation process implemented for an important fashion retailer is discussed. Section 5 finally reports the conclusions and some comparisons with related works.

2 Collaborative Filtering with Implicit Feedbacks

Collaborative Filtering (CF) produces user-specific recommendations based on patterns of user actions such as ratings or item usage, without the need of explicit user or item meta information. One of the most popular CF approaches is the latent factor model, a model-based technique based on the low-rank factorization of the feedback matrix [14]. It addresses the sparsity problem by projecting users and items into a reduced latent space containing the most salient features about their interactions. Given the feedback matrix R_(m×n), the idea is to decompose it into two matrices U_(m×k) and V_(n×k) such that

R_(m×n) ≈ U_(m×k) V^T_(k×n)

where k ≪ n and k ≪ m is the size of the latent space, and U, V are the latent feature matrices for users and items respectively. Once this factorization is obtained, if u_i represents the i-th row of matrix U and v_j the j-th row of matrix V, we can predict the feedback of each pair (i, j) of users and items as

r̂_ij = u_i · v_j^T = Σ_h u_ih v_jh,

which is computable for each user-item pair, even for those having a missing entry in the original matrix R. Let the set of all user-item pairs (i, j) which are observed in R be denoted by S: S = {(i, j) : r_ij is observed}. A typical way of solving this factorization problem involves an optimization procedure (e.g., stochastic gradient descent) on the following objective function

J(U, V) = ½ Σ_{(i,j)∈S} ( (r_ij − r̂_ij)² + λ(‖u_i‖² + ‖v_j‖²) )

where λ is a regularization hyper-parameter.
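A toy numpy illustration of the explicit-feedback model just described: predictions are inner products of latent factors, and the regularized squared error is accumulated only over the observed pairs in S. The variable names and the tiny synthetic data are our own.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k, lam = 4, 5, 2, 0.05
U = rng.normal(size=(m, k))                 # user factors u_i as rows
V = rng.normal(size=(n, k))                 # item factors v_j as rows

S = {(0, 1): 4.0, (0, 3): 2.0, (2, 2): 5.0, (3, 0): 1.0}   # observed explicit ratings

def predict(i, j):
    return U[i] @ V[j]                      # r_hat_ij = u_i . v_j^T

def objective(S, U, V, lam):
    return 0.5 * sum((r - U[i] @ V[j]) ** 2 + lam * (U[i] @ U[i] + V[j] @ V[j])
                     for (i, j), r in S.items())

print(predict(0, 4))        # defined even though the pair (0, 4) is unobserved
print(objective(S, U, V, lam))
```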
The above characterization is suitable when the entries encoded into the feedback matrix are explicit, such as precise ratings provided by the users. In the case of implicit feedbacks, which are those that interest us in the present work, some modifications to the framework must be considered. First of all, we must notice that in our case only positive interactions are considered, so that whenever the feedback is not null there is an interest of the user in the corresponding item. We then introduce an auxiliary indicator variable p_ij representing the generic interest of user i for item j, simply defined as

p_ij = 1 if r_ij > 0, and p_ij = 0 otherwise.

Following the approach suggested in [8], we also consider a confidence level for the indicator p_ij, depending on the actual number of interactions a user had on a given item and defined as follows:

c_ij = 1 + α r_ij.

Given this characterization, we have to find a user matrix U and an item matrix V minimizing the following cost function:

J(U, V) = ½ ( Σ_{i,j} c_ij (p_ij − u_i · v_j^T)² + λ( Σ_i ‖u_i‖² + Σ_j ‖v_j‖² ) )    (1)

In other words, we need to find a vector u_i for each user and a vector v_j for each item factorizing in the best possible way the user preferences. Preferences are represented by p_ij and must be computed as the inner product u_i · v_j^T. The main differences with the explicit feedback framework are that we need to take into account the confidence levels c_ij, but mostly the fact that we need to consider every possible user-item pair (i, j), and not only those pairs for which we have an explicit interaction. This makes standard optimization procedures such as stochastic gradient descent impractical. A possible solution is to adopt Alternating Least Squares (ALS) optimization. The idea is conceptually simple: fix the user matrix U and find the optimal item matrix V; then fix the item matrix V and find the optimal user matrix U; keep alternating the previous steps until convergence. However, the implicit feedback framework requires a careful strategy to deal with a dense cost function (all possible user-item pairs must be considered) and to integrate the confidence levels. In [8], Hu et al. propose the following procedure. First, we compute the user factors from the item factors contained in V_(n×k):

– Compute the (k × k) matrix V^T V in time O(k²n).
– For each user i, let C^i_(n×n) be a diagonal matrix with C^i_jj = c_ij (the diagonal contains the confidence in the preferences of the given user with respect to all n possible items); let also p(i) ∈ R^n be the vector containing the preferences p_ij of user i.
– The following expression minimizes the cost function in (1) (in [8] the authors show that the corresponding computation can be performed in time O(k²N + k³m), where N is the total number of non-zero entries in the feedback matrix):

u_i = (V^T C^i V + λI)^{-1} V^T C^i p(i)

In a similar fashion, once the user matrix U has been obtained, we can recompute the entries of the item matrix as

v_j = (U^T C^j U + λI)^{-1} U^T C^j p(j)

where C^j_(m×m) is the diagonal matrix with C^j_ii = c_ij (the diagonal contains the confidence in the preferences for the given item with respect to all m possible users), and p(j) ∈ R^m is the vector containing the preferences p_ij for item j of every possible user; similarly to the previous step, this computation takes O(k²N + k³n) time. The procedure alternates the above user factor and item factor computations until convergence. Once the final matrices U and V have been computed, the K available items with the largest score p̂_ij = u_i · v_j^T are recommended to user i.
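A compact, dense numpy sketch of one ALS sweep following the update formulas above; a real implementation would exploit sparsity and the algebraic shortcuts described in [8]. The toy data, the α value and all names are our own choices.

```python
import numpy as np

def als_step(C, P, Y, lam):
    """Recompute one factor matrix given the other (one half of an ALS sweep).
    C, P: (rows x cols) confidence and preference matrices; Y: (cols x k) fixed factors."""
    rows, k = C.shape[0], Y.shape[1]
    X = np.zeros((rows, k))
    for i in range(rows):
        Ci = np.diag(C[i])                          # diagonal confidence matrix C^i
        A = Y.T @ Ci @ Y + lam * np.eye(k)          # V^T C^i V + lambda I
        b = Y.T @ Ci @ P[i]                         # V^T C^i p(i)
        X[i] = np.linalg.solve(A, b)
    return X

R = np.array([[0, 2, 0, 1],                         # toy purchase counts (3 users x 4 items)
              [1, 0, 0, 0],
              [0, 5, 3, 0]], dtype=float)
P = (R > 0).astype(float)                           # p_ij
C = 1.0 + 40.0 * R                                  # c_ij with an assumed alpha = 40
rng = np.random.default_rng(0)
U = rng.normal(scale=0.1, size=(R.shape[0], 2))
V = rng.normal(scale=0.1, size=(R.shape[1], 2))
for _ in range(10):                                 # fixed number of sweeps for brevity
    U = als_step(C, P, V, lam=0.1)                  # user factors from item factors
    V = als_step(C.T, P.T, U, lam=0.1)              # item factors from user factors
scores = U @ V.T                                    # predicted preferences p_hat_ij
```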
3 System Architecture

The main goal of the present work is to define an architecture for the recommendation of products in the fashion domain. Recommender systems in fashion retail are usually integrated into e-commerce platforms or digital marketing campaigns as generators of personalized recommendations [15]; in this setting it is unlikely that users will provide explicit "scores" for the products, so the designed system must deal with the availability of feedbacks which are implicit by nature (i.e., user-item interactions such as purchases). Moreover, such a system also needs to fulfill specific objectives that can be divided into "customer-oriented" and "business-oriented".

Regarding customers, an important aspect of fashion recommendations is the ability to propose an overall look composed of different types of products that may fit well together and are tailored to the user's individual preferences. Indeed, fashion products belonging to different technical categories are commonly bought together, in order to obtain a given look which fits the customer's style; thus the level of accuracy may be reduced with the goal of achieving higher levels of diversity and novelty, and of preventing overspecialization over the customer's past purchases. From the business point of view, the user's satisfaction after buying the recommended products can improve customer loyalty, while the goal of recommending products belonging to various categories can also be helpful to increase cross-selling and, more generally, to let the user explore, and ideally buy, as many products as possible.

For these reasons, we propose a hybrid architecture based on two main modules (see Fig. 1):
– a collaborative filtering module with implicit feedbacks;
– a knowledge-based post-filtering module.

Fig. 1. Hybrid system architecture.

3.1 Collaborative Filtering Module

We know that the entities involved in the recommendation process are the customers (the set of users U) and the products (the set of items I); in the considered domain, the interactions between them can be collected from the purchase history in the various distribution channels (retail, outlet, web) and from any available system capable of tracking user activities, such as visiting product pages or searching for keywords, or ideally giving an explicit rating to purchased products. In the absence of such systems, purchases can be considered a good starting point to learn customers' preferences and how they are distributed. Usually, transactions (S) are stored in relational databases as receipt rows represented with tuples

s = ⟨t, p, i, j, a⟩

where t is the transaction date, p, i, and j are the point of sale, customer and item identifier respectively, and a is the activity or transaction type. Rows may be decorated with other data about the transaction, e.g. whether the customer used a coupon, the actual purchase value or the chosen payment method; all of these can be used in further analyses to better characterize the customers. In the present work, we only consider transactions corresponding to purchases, i.e., a = 1; in addition, in the following discussion we are only interested in tracking which user has purchased a given item, in a given set of stores and in a particular time interval. We then define an indicator

f_ijt = 1 if ∃ s = ⟨t, ∗, i, j, 1⟩, and f_ijt = 0 otherwise,

with the symbol ∗ meaning "don't care" (and the last element of s equal to 1 meaning purchase). Hence, by projecting S on users and items, and by considering a given time interval T, the transaction table can be reduced to tuples having the following structure:

⟨i, j, r_ij⟩ with r_ij = Σ_{t∈T} f_ijt.

Finally, the (sparse) feedback matrix is defined as R_{m×n} = [r_ij], where m = |U| and n = |I|.
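A small sketch of the aggregation just described: purchase transactions ⟨t, p, i, j, a⟩ are projected onto (customer, item) pairs within a time window and counted into the sparse feedback structure; the field and variable names are illustrative.

```python
from collections import Counter
from datetime import date

# Toy receipt rows: (transaction date, point of sale, customer id, item id, activity)
transactions = [
    (date(2022, 3, 1), "store_12", "u1", "j7", 1),
    (date(2022, 3, 5), "web",      "u1", "j7", 1),
    (date(2022, 4, 2), "store_04", "u2", "j3", 1),
    (date(2022, 4, 9), "store_04", "u2", "j3", 0),   # not a purchase, ignored
]

def feedback_counts(transactions, start, end):
    """r_ij = number of purchases (a = 1) of item j by customer i within [start, end]."""
    r = Counter()
    for t, _pos, i, j, a in transactions:
        if a == 1 and start <= t <= end:
            r[(i, j)] += 1
    return r

R = feedback_counts(transactions, date(2022, 1, 1), date(2022, 12, 31))
print(R)   # Counter({('u1', 'j7'): 2, ('u2', 'j3'): 1})
```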
As described in Sect. 2, the feedback matrix is then used as the basis for the implementation of the collaborative filtering module: we first produce the factorization R_{m×n} ≈ U_{m×k} V^T_{k×n}, then compute the predicted feedbacks r̂_ij = u_i · v_j^T, and finally, for a given user i ∈ U, we return the top k items j ∈ I with respect to r̂_ij.

3.2 Knowledge-Based Post-filtering Module

The second module executes a knowledge-based post-filtering on the result list produced by the CF model. It allows filtering the results by adding constraints based on product features or contextual data, such as location, time, or other domain-specific elements. In our case, constraints are defined by domain experts as business rules, which describe known relationships between the domain entities (users or items), and are used to adjust the model results either by filtering out or replacing some items, or by adding gains and penalties on the scores, in order to change the ranking. Such rules are defined using a GUI (Fig. 2).

For each entity, a set of variables is defined representing its characteristics (e.g., the product description, the customer age or the estimated annual income). In addition, a set of actions available on specific instances is also defined; the main action, especially for item entities j ∈ I, is to select the instance and add it to an output list to proceed with the post-filtering operations. This is also the main goal of the rules in this architecture: they can be considered as queries targeting the selection of some recommended products on which to execute a specific filtering function.

Fig. 2. An example of rule definition using the developed GUI: experts can choose the target entity and define multiple conditions with logical operators over their variables.

A rule is composed of a condition over some entity variables, and a consequence which determines the action the system must perform on the instances satisfying the constraint. Actions select instances whose values meet given conditions or, in more complex cases, they link the status of one instance to that of the instances of another entity, in order to create an explicit correlation between them. For example, let us consider some features of the entities I ("Status", "IsCurrentSeason" and "ConsumerGroup") and U ("Age" and "Gender"). A rule representing a constraint on items is reported in (2), and a more complex rule that links the two entities in (3):

I.Status = "Adoption" ∧ I.IsCurrentSeason = True    (2)

U.Gender = "F" ∧ U.Age ≥ 18 ⇒ I.ConsumerGroup = "Misses"    (3)

The first rule selects every item in the "Adoption" production status and available for the current season; the second one is used to define a constraint on the entity I on the basis of the instance of U (i.e., the target user). Such rules can be applied in a modular way over the model results, by introducing the idea of "context", which is an aspect of the domain that a group of rules refers to. Following the framework proposed in [1], the aim is to implement a type of context-aware recommender system, with contextual post-filtering. For each context, the domain expert defines the filtering operations to perform on the instances selected by the context's rules. The number of contexts may vary depending on the number of entities involved, or on the different objectives that the system must achieve.
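One simple way such rules could be realised in code is as pairs of predicates, a condition on the target user and a consequence on an item, mirroring the example rules (2) and (3); the representation and helper below are our own simplification, not the system's actual rule engine.

```python
# Rule (2): no user condition; selects adopted, current-season items.
rule_2 = (lambda user: True,
          lambda item: item["Status"] == "Adoption" and item["IsCurrentSeason"])

# Rule (3): for adult female customers, restrict to the "Misses" consumer group.
rule_3 = (lambda user: user["Gender"] == "F" and user["Age"] >= 18,
          lambda item: item["ConsumerGroup"] == "Misses")

def select(items, user, rules):
    """Items satisfying the consequence of every rule whose condition fires for this user."""
    active = [cons for cond, cons in rules if cond(user)]
    return [it for it in items if all(cons(it) for cons in active)]

# Example: the valid catalog for an adult male user under rules (2) and (3).
user = {"Gender": "M", "Age": 30}
items = [{"id": 99337, "Status": "Adoption", "IsCurrentSeason": True, "ConsumerGroup": "Men"},
         {"id": 46943, "Status": "Retired", "IsCurrentSeason": False, "ConsumerGroup": "Men"}]
print([it["id"] for it in select(items, user, [rule_2, rule_3])])   # [99337]
```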
In fashion retail, one usually consider almost two main contexts: the “catalog context”, containing constraints about products availability, which can change depending on the temporal context of the recommendation or other external causes; the “customer context”, that allows the retailers to define correlations between the features of products and customers. Next section describes how they can be applied in a specific case study. A Hybrid Recommender System with Implicit Feedbacks in Fashion Retail Case Study and Experimental Results In a fashion store or e-commerce platform, item features are usually organized hierarchically. The actual hierarchy can vary from one brand to another; in this paper we refer to a case study concerning an important fashion brand, where a given product belongs to a merchandise group characterized by the department store (outlet or retail), the technical category (such as trousers, shirts or skirts), and the “lifecycle” (fashion or basic). Inside a given merchandise group we identify different models; the model identifies a series of technical features like the garment materials, fitting and other specific characteristics that help to distinguish the product style. Finally for a given model we can specify “low-level” features such as color or size of the garment (see Table 1). Table 1. Hierarchy of products features with an example case of a men’s dress shirt. Hierarchy level Merchandise Group Indicates the product’s department, technical category and lifecycle Retail, Dress Shirt, Fashion Group Type The target customer group Id that identifies the stylistic features of the product Regular Point Collar Color (Style) Color shade Blue Navy Stock Keeping Unit (SKU), unit sold and registered in the receipts Customers buy products at the lowest hierarchy level: the so called Stock Keeping Unit (SKU). Each SKU represents a specific garment as an instance of a merchandise group (store type, technical category and lifecycle) with a specific model, size and color. Typically, all these features are categorical and may have a broad range of values. The CF model discussed in Sect. 2 does not use the explicit features of users and products to learn their latent factors, so it is crucial to decide over which items the preferences should be determined. By considering the feedbacks as purchases at the SKU level, the user-item preference is implicitly computed over all the product characteristics, thus the model will propose the most preferable SKUs for the users. Depending on the recommender’s final objectives, it may be useful to consider a more abstract level of attributes (i.e., to get rid of some details such as size and color for instance) or to group the values of some features. In this way, we can learn preferences over more abstract aspects, such as the style, instead of specific characteristics such as the size or the shade of color. Furthermore, this can help to reduce the number of user-item pairs and so the dimensionality of the feedback matrix. In the present case study, we are targeting look-oriented recommendations; in this situation, the size is too specific, related more to the user’s need to I. Cestari et al. find clothes suitable for her/his body, rather than to an actual preference, thus it should be excluded. On the other hand, color is a fundamental feature of fashion products, very representative of the user’s preferences, but with a huge number of “nuances”; this could led to recommendations which are too biased by a specific color shade. 
Hence, shades have been grouped into their main color (e.g., "blue" instead of "light blue", "dark blue" or "navy blue") to keep the recommendations as generic as possible, and to easily recommend different colors as well.

We have taken into account the retail transaction history of the last two years, containing purchases from both the stores and the web distribution channels of the considered brand, and selecting customers with at least 3 purchases; the final (sparse) feedback matrix R_{m×n} contained 5,200,649 positive interactions between m = 662,964 loyal customers and n = 26,185 model-color items. In the following, in order to provide a recommendation example, we consider a specific user case: a 30-year-old man, some of whose purchases are listed in the first column of Table 2.

Table 2. Some of the customer purchases and the top 5 recommendations of the model with k = 500 factors. The descriptions of the recommended items also report their seasonality and production status.

Purchased | Recommended | Score | Item id
Ribbed Crew Socks Beige Men | Ribbed Crew Socks Black Men (Ongoing, Design) | 0.749 | 46943
Leather Belt Brown Men | Casual Trousers Relaxed Plain Blue Men (Ongoing, Adoption) | 0.710 | 99337
Supima Cotton Crewneck Sweater Stripe Blue Men | Dress Shirt Blue Men (Fall, Adoption) | 0.688 | 126718
Dress Shirt Stretch White Summer Men | Ribbed Crew Socks Blue Men (Ongoing, Design) | 0.686 | 46945
Dress Shirt Purple Fall Men | Set Shorts Bermuda Dyed Beige Men (Fall, Adoption) | 0.657 | 71958

The model has been fitted in two versions: one with k > 1 factors, which is the main model, and one benchmark with 1 factor (equivalent to recommending the most popular items in terms of purchases); the model's parameters have been tuned with a 10-fold cross-validation and the best results have been obtained with k = 500 factors, achieving an AUC score of 0.73 on the test set against the 0.56 of the benchmark model. If we indicate as I_i the set of items already purchased by user i, the output for user i will be a list L of pairs (j, r̂_ij) with j ∈ (I \ I_i), ranked by r̂_ij (see Sect. 2). Table 2 reports (last three columns) the top 5 items recommended by the model for the customer described above, together with the corresponding r̂_ij score.

For the post-filtering module, three contexts have been defined: "catalog", "customer", and "look". As described in Sect. 3.2, the catalog context determines which items are available at the time of the recommendation; it identifies two subsets I_out and I_v of unavailable and valid items respectively. The post-filtering operation consists in removing from the output list L any pair (j, r̂_ij) for j ∈ I_out, replacing it with the pair (j′, r̂_ij′) where j′ ∈ I_v is the most similar (available) item to j (in case such a j′ exists). In the present case study, the similarity score between two items j1, j2 has been computed as s_{j1 j2} = v_{j1} v_{j2}^T, reusing the latent factors V learned by the CF model (see Sect. 2). The score of the new replacing item j′ is again computed from the latent factors as r̂_ij′ = u_i v_{j′}^T (see again Sect. 2). The catalog context has the highest priority, and thus the strongest effect on the recommended items. For instance, considering rule (2) as the only rule in the catalog context, Table 4 reports the new top 5 items (first column), in which only the product with ItemID=99337 is kept and the others have been replaced.
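A sketch of the catalog post-filtering step just described: unavailable items in the recommendation list are swapped for their most similar available item, with both the similarity and the replacement's score computed from the latent factors. The names and the handling of ties/duplicates are our own simplifications.

```python
import numpy as np

def catalog_postfilter(rec_list, user_vec, V, valid_items):
    """rec_list: [(item_index, score), ...] ranked by score; user_vec: u_i;
    V: item factor matrix; valid_items: indices of currently available items."""
    valid = sorted(valid_items)
    out = []
    for j, score in rec_list:
        if j in valid_items:
            out.append((j, score))
            continue
        sims = V[valid] @ V[j]                            # s_{j j'} = v_j . v_j' over available items
        j_new = valid[int(np.argmax(sims))]               # most similar available item
        out.append((j_new, float(user_vec @ V[j_new])))   # rescore as u_i . v_j'
    return sorted(out, key=lambda pair: -pair[1])
```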
Score s1 in Table 4 refers to the usual item relevance score r̂_ij (i being the current user and j the considered item).

Customer context rules are applied after catalog rules and link customer and item characteristics with respect to known statistical correlations, such as relationships between age (or gender) and particular garment categories, relationships between an item's price and how much the customer usually spends, and so on. However, the descriptive characteristics of loyal customers are not necessarily representative of their normal buying behaviour or style preferences: a common case is when a registered customer buys products for family members or other people. Thus, the application of this context shouldn't be too disruptive to the model's output, and it consists in increasing the score of each currently recommended item j as follows:

s2 = s1 (1 + P(j, Ci))    (4)

where s1 is the old item relevance score, Ci is the set of customer context rules whose antecedent is satisfied by the features of user i, and P(j, Ci) is a term computed as

P(j, Ci) = t(j, Ci) / |Ci|

where t(j, Ci) is the number of customer context rules in Ci whose consequent is satisfied by the features of item j. The idea is to increment the score by a quantity proportional to the number of rules that are satisfied. In Table 3 some examples of customer context rules are shown. The items listed in Table 4 are those resulting from first applying catalog rule (2), then replacing items other than ItemId = 99337 (the only one satisfying the rule) with their most similar available items. Score s2 is the result of selecting customer context rule 1 of Table 3 (the only one satisfied by the current user) and applying formula (4).

Finally, the look context has been added to introduce more diversity between the recommended product categories, in order to increase cross-selling. In this phase, items present in the current recommended list are penalized with a term c(Tj), which represents how many times the technical category Tj of item j has already appeared in higher rank positions in the list. The new score for each item is computed as:

s′_ij = s_ij / c(Tj)

Table 5 shows the results of applying this penalty to the items of Table 4. Here it is not necessary to define explicit rules, because the post-filtering operation is performed on every item without any selection; notice that, in principle, one could also integrate business rules to replace the penalized items with others "compatible" with those in the list.

Table 3. Rules available in the customer context. Given the target customer's profile, since his parental status is unknown, rule 1 is the only one that can be added to the customer's specific context Ci, and thus |Ci| = 1.

Rule | Definition
0 | Gender = "F" ∧ Age ≥ 18 ⇒ ConsumerGroup = "Misses"
1 | Gender = "M" ∧ Age ≥ 18 ⇒ ConsumerGroup = "Men"
2 | Children = "Yes" ⇒ ConsumerGroup = ("Boys" ∨ "Girls")

Table 4. Changes in scores after applying the catalog and the customer contexts. Here, the listed items are available in the current season and in adoption status (Lc1), and all have had an increase from the customer context, since all the visible items belong to the "Men" consumer group.

Item | Score s1 | Score s2 | Description
99337 | 0.710 | – | Casual Trousers Relaxed Plain Blue Men
99341 | 0.489 | – | Pants Lightweight Stretch Chino Gray Men
74393 | 0.437 | – | Shoes Leather Boat Blue Men
57871 | 0.436 | – | Boots Field Chukka Brown Men
99302 | 0.421 | – | Casual Trousers Relaxed Plain Blue

Table 5.
Penalties assigned by the look context and the final score of each recommended item Item Score Tech. category Penalty New score 99337 1.420 CASUAL TROUSERS 1 99341 0.976 CASUAL TROUSERS 2 74393 0.874 SHOES 57871 0.871 SHOES 99302 0.842 CASUAL TROUSERS 3 A Hybrid Recommender System with Implicit Feedbacks in Fashion Retail Conclusions and Related Works As reported in [5], fashion and apparel industries have grown tremendously over the last years, especially because of the availability of a great amount of products in online stores, coupled with the support provided by recommender systems. One specific challenge is the large vocabulary of distinct fashion items, leading to very sparse user-item interaction matrices, often represented with a given overspecification level (as we discussed above). Other issues in fashion recommendation, are related to the suggestion of a suitable “look” or outfit [4,10], as well as the evolution of a fashion trend across time and location [11]. The vocabulary problem is often tackled through computer vision techniques for the determination of item category and attributes [3,7], while the other issues can be addressed by learning suitable models via massive amount of social data [16], or customer reported information [12], as well as customer reviews [17]. In the present work, we have dealt with the above issues by resorting to a hybrid architecture, where collaborative filtering is complemented with specific contextual knowledge-based rules. The cons is that expert knowledge must be elicited, in order to build the contextual rules; however, as we have outlined in the case study, the fashion domain provides precise contextual situations where such rules can be obtained from experts without a huge effort. The experience gained in this application suggests that the approach is feasible and beneficial. References 1. Adomavicius, G., Mobasher, B., Ricci, F., Tuzhilin, A.: Context-aware recommender systems. AI Mag. 67–80 (2011) 2. Burke, R.: Hybrid recommender systems: survey and experiments. User Model. User-Adap. Int. 12(4), 31–370 (2002) 3. Chen, H., Gallagher, A., Girod, B.: Describing clothing by semantic attributes. In: Fitzgibbon, A., Lazebnik, S., Perona, P., Sato, Y., Schmid, C. (eds.) ECCV 2012. LNCS, vol. 7574, pp. 609–623. Springer, Heidelberg (2012). https://doi.org/ 10.1007/978-3-642-33712-3 44 4. Chen, W., et al.: POG: personalized outfit generation for fashion recommendation at Alibaba iFashion. In: Proceedings of 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 2662–2670 (2019) 5. Deldjoo, Y., et al.: A review of modern fashion recommender systems. ACM Comput. Surv. 37(4), 111:1–111:35 (2021) 6. Dunning, T., Friedman, E.: Practical Machine Learning: Innovations in Recommendation. O’Reilly, Sebastopol (2014) 7. Ferreira, B., Costeira, J., Sousa, R., Gui, L.Y., Gomes, J.: Pose guided attention for multi-label fashion image classification. In: Proceedings of IEEE/CVF International Conference on Computer Vision (ICCVW 2019), pp. 3125–3128 (2019) 8. Hu, Y., Koren, Y., Volinsky, C.: Collaborative filtering for implicit feedbacks datasets. In: Proceedings of 8th IEEE International Conference on Data Mining (ICDM), pp. 263–272 (2008) 9. Koren, Y., Bell, R.: Advances in collaborative filtering. In: Ricci, F., Rokach, L., Shapira, B. (eds.) Recommender Systems Handbook, pp. 77–118. Springer, Boston (2015). https://doi.org/10.1007/978-1-4899-7637-6 3 I. Cestari et al. 10. 
Lin, Y.L., Tran, S., Davis, L.: Fashion outfit complementary item retrieval. In: Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2020), pp. 3311–3319 (2020) 11. Matzen, K., Bala, K., Snavely, N.: Streetstyle: exploring world-wide clothing styles from millions of photos. CoRR abs/1706.01869 (2017). http://arxiv.org/abs/1706. 01869 12. Parr, J., Pookulangara, S.: The impact of true fit technology on consumer confidence in their online clothing purchase. In: Proceedings of Annual Conference on International Textile and Apparel Association. Iowa State University Press (2017) 13. Ricci, F., Rokach, L., Shapira, B.: Recommender Systems Handbook, 2nd edn. Springer, New York (2015). https://doi.org/10.1007/978-1-4899-7637-6 14. Takacs, G., Pilaszy, I., Nemeth, B., Tikk, D.: Scalable collaborative filtering approaches for large recommender systems. J. Mach. Learn. Res. 10, 623–656 (2009) 15. Walter, F., Battiston, S., Yildirim, M., Schweitzer, F.: Moving recommender systems from on-line commerce to retail stores. Inf. Syst. e-Bus. Manag. 10, 367–393 (2012) 16. Wen, Y., Liu, X., Xu, B.: Personalized clothing recommendation based on knowledge graph. In: Proceedings of International Conference on Audio, Language and Image Processing (ICALIP 2018), pp. 1–5 (2018) 17. Zhao, K., Hu, X., Bu, J., Wang, C.: Deep style match for complementary recommendation. CoRR abs/1708.07938 (2017). http://arxiv.org/abs/1708.07938 Incremental Timeline-Based Planning for Efficient Plan Execution and Adaptation Riccardo De Benedictis(B) , Gloria Beraldo , Amedeo Cesta , and Gabriella Cortellessa CNR - Italian National Research Council, ISTC, Via S. Martino della Battaglia 44, 00185 Rome, RM, Italy {riccardo.debenedictis,gloria.beraldo,amedeo.cesta, gabriella.cortellessa}@istc.cnr.it https://istc.cnr.it Abstract. The increasing deployment, in real environments, of intelligent and distributed systems like robotic platforms, wearable sensors and AI-based devices, requires robust solutions that allow planned activities to converge with the emerging dynamic reality. Once a planning problem has been solved, indeed, it needs to be executed and, in the real world, things might not go as expected. While planned activities may be carried out by some underlying reactive modules, in fact, the adaptation to the surrounding environment provided by such components may not be sufficient to achieve the planned goals. Planned activities, for example, can be delayed or last longer than expected. The execution of other activities could fail threatening the achievement of the desired goals. Finally, new objectives may emerge during execution thus requiring changes to ongoing plans. This paper presents a timeline-based framework for efficiently adapting plans in order to cope with possible complications which might emerge during execution. By exploiting the information gathered during the finding solution process, the proposed framework allows, efficiently and without overturning it, to adapt the generated plan in case of unexpected events during its execution. Empirical results show that, compared to re-planning from scratch, plan adaptations can be obtained more efficiently, reducing computational costs and consequently enhancing the ability of the whole system to react quickly to unexpected events. Keywords: Automated planning Timeline-based planning · Plan execution · Plan adaptation · Automated planning has been defined as “the reasoning side of acting” [24]. 
Planning, in particular, represents an abstract, explicit deliberation process that This work is partially supported by “SI-Robotics: SocIal ROBOTICS for active and healthy ageing” project (Italian M.I.U.R., PON – Ricerca e Innovazione 2014–2020 – G.A. ARS01 01120). c The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 A. Dovier et al. (Eds.): AIxIA 2022, LNAI 13796, pp. 225–240, 2023. https://doi.org/10.1007/978-3-031-27181-6_16 R. De Benedictis et al. chooses and organizes actions by anticipating their expected outcomes. Although automated planning constitutes a rich technical field, however, most of the literature on domain-independent planning is biased towards that “reasoning” side [36]. Whether due to a partial knowledge of the world, or to the impossibility of predicting the actions of other agents that autonomously act in the same environment, a large part of any agent’s behavior can be traced back to its ability to react to dynamic changes occurring, or predicted, in the world. Unlike other approaches that propose integration of planning systems into the executives of the autonomous ones [6,31,38], this paper has the twofold objective of: a) concentrating on a specific form of planning, called timeline-based [34]; and b) proposing a new framework which, by exploiting the knowledge acquired during the previous reasoning processes, is able to adapt plans more efficiently than by adopting from scratch re-planning. The reasons for concentrating on timeline-based planning are mainly due to the fact that the constraints, introduced among the different elements of the plan during the reasoning process, produce a partial plan [47] which, compared to total order plans usually generated by more classical approaches, often result more suitable for adaptations during plan execution. Despite its ability to adapt dynamically, however, this type of formalism is particularly expressive and, consequently, is associated with a high computational complexity which makes the reasoning process significantly onerous, with consequent long computation times. While in [13] it has been demonstrated that the computation time could be effectively reduced, thanks to the introduction of some domainindependent heuristics, herein we focus on showing how the introduction of some of the data structures necessary for the computation of the above heuristics, as will be detailed further on, allows as well to efficiently manage the dynamic adaptation of the plans during their execution. Related Works The problem of the dynamic adaptation of plans has already been tackled from various points of view. Some approaches, such as those relying on simple temporal networks with uncertainty [32,33,50] or those based on model-checking [2,3], aim to generate robust solutions that don’t require (or, in any case, that minimize) the need for adaptations at runtime. Although desirable in those contexts in which certain safety conditions in interacting with people are required (such as, for example, in industrial contexts), these approaches prove to be unattractive in situations with fairly free interactions (e.g., navigation or dialogues) between the user and the machine. Unlike in standard planning problems, solving a contingency planning problem, as described in [39,48], does not generate a plan, but a decision tree with different contingent branches that could arise at execution time, whose execution allows the agent to achieve the planned objectives. 
These approaches allow for practically immediate adaptation at runtime and therefore, once the solution is found, they are probably the best possible choice. Nonetheless, these approaches require to consider, in the problem solving phase, all the possible events that may Incremental Timeline-Based Planning occur during the execution, making the reasoning process particularly burdensome even for relatively simple problems. Furthermore, the approaches adopted in contingency planning rarely manage forms of numerical and/or temporal reasoning. Nebel et al. compare the advantages of re-planning from scratch and of reusing information from the old plan to generate a new one, showing that, from a theoretical point of view, the two approaches have, not surprisingly, the same computational complexity [37]. By relying on such theoretical results, approaches like ROSPlan [6] generate a new planning problem all over again whenever an exogenous event, incompatible with the current solution, occurs. These approaches have the great advantage of being able to use any existing planner as a black-box. However, it has the obvious disadvantage of potentially taking a lot of computational time whenever some exogenous event requiring adaptation occurs. Despite the theoretical results, indeed, further studies, such as [16,22,28,40,41], show that plan adaptation can be, in practice, more effective than re-planning. Such repair approaches, furthermore, help to maintain the plan stability, that is, how close the newly generated plan is to the one that it must replace. The approach proposed in this document is situated within the latter context. Unlike the cited approaches, however, we focus on a particular class of automated planning which, in addition to explicitly allowing forms of temporal and numerical reasoning, relies on partial order planning and, hence, produces solutions that usually require a smaller number of causal adaptations when unexpected events occur at execution time. Particularly relevant to our approach, the Flexible Acting and Planning Environment (FAPE), introduced by [15], combines plan-space planning with simple temporal networks and hierarchical decomposition rules. A dispatcher calls for each planned action a set of skills and keeps track of their evolution over time allowing plan repair, extension, and re-planning, while being able to check and keep up to date the temporal relations and the causal constrains. Compared to FAPE, our architecture assigns a more central role to the acting component, giving it the ability to determine when and how to generate plans, execute them, adapt them or, if no longer needed as a consequence of a drastic change in the environment, destroy them and generate new ones. More than on architectural aspects, however, we focus, in this document, on the possibility of dynamically and efficiently adapting plans in the event of failures. When dealing with failures, indeed, FAPE is limited to the removal of just the one failing action, without considering cascades of other potential failures. Thanks to the adaptation of classical planning heuristics, as we will see, and similarly to what is done in the previously cited works applied to classical planning, we are able to overcome this limitation. Technical Background Timeline-based planning constitutes a form of deliberative reasoning which, in an integrated way, allows to carry out different forms of semantic and causal reasoning. Although this approach to planning has mostly been relegated to forms R. 
of causal reasoning in the space domain, many solvers have been proposed over time like, for example, IXTET [23], Europa [26], Aspen [11], the Trf [8,19] on which the APSI framework [20] relies and, more recently, PLATINUm [45]. Some theoretical works on timeline-based planning like [18,26] were mostly dedicated to identifying connections with classical planning a-la PDDL [17]. The work on IXTET and Trf has tried to clarify some key underlying principles but mostly succeeded in underscoring the role of time and resource reasoning [9,29]. The planner CHIMP [44] follows a Meta-CSP approach having meta-constraints which heavily resemble timelines. The already mentioned FAPE [4,15] tightly integrates structures similar to timelines with acting. The Action Notation Modeling Language (ANML) [42] is an interesting development which combines the Hierarchical Task Network (HTN) [7,35,49] decomposition methods with the expressiveness of the timeline representation. Finally, it is worth mentioning that the timeline-based approaches have often been associated with resource managing capabilities. By leveraging on constraint-based approaches, most of the above approaches like IXTET [10,29,30,43] or [46] integrate planning and scheduling capabilities. Finally, [12] proposes a recent formalization of timeline-based planning. Given the mentioned link with the heuristics we will refer, in this paper, to the timeline-based planning formalization as defined in [13]. According to this formalization, specifically, the basic building block of timeline-based planning is the token which, intuitively, is used to represent the single unit of information. Through their introduction and their constraining during the planning process, in particular, tokens allow to represent the different components of the high-level plans. In its most general form, a token is formally described by an expression like n(x0, . . . , xi)χ. In particular, n is a predicate symbol, x0, . . . , xi are its parameters (i.e., constants, numeric variables or object variables) and χ ∈ {f, g} is a constant representing the class of the token (i.e., either a fact or a goal). The token's parameters are constituted, in general, by the variables of a constraint network N (refer to [14] for further details) and can be used, among other things, to represent temporal information such as the start or the end of some tasks. The semantics of the χ constant, on the contrary, is borrowed from Constraint Logic Programming (CLP) [1]. Specifically, while the facts are considered inherently true, the goals must be achieved as defined by a set of rules. Rules, in particular, are expressions of the form n(x0, . . . , xk) ← r where n(x0, . . . , xk) is the head of the rule and r is the body of the rule. In particular, r represents the requirement for achieving any goal having the "form" of the head of the rule. Such requirements can be either a token, a constraint among tokens (possibly including the x0, . . . , xk variables), a conjunction of requirements or a disjunction of requirements. It is worth noting the recursive definition of requirement, which allows the definition of the body of a rule as any logical combination of tokens and constraints. Similarly to CLP, through the application of the rules it is hence possible to establish and generate relationships among tokens. Compared to CLP, however, timelines introduce an added value: some tokens may be equipped with a special Fig. 1.
Different timelines extracted by their associated tokens. object variable τ that identifies the timeline affected by the token. Different tokens with the same value for the τ parameter, in particular, affect the same timeline and, depending on the nature of the timeline, might interact with each other. There can, indeed, be different types of timelines. In case of state-variable timelines (see Fig. 1a), for example, different tokens on the same state-variable cannot temporally overlap. In case of reusable-resource timelines (see Fig. 1b), on the contrary, tokens represent resource usages and can, hence, overlap as long as the concurrent uses remain below the resource’s capacity. Given the ingredients mentioned above we can now formally introduce the addressed planning problem. A timeline-based planning problem, specifically, is a triple P = (O, R, r), where O is a set of typed objects, needed for instantiating the initial domains of the constraint network variables and, consequently, the tokens’ parameters, R is a set of rules and r is a requirement. Intuitively, a solution to such a problem should be described by a set of tokens whose parameters assume values so as to guarantee the satisfaction of all the constraints imposed by the problem’s requirement, by the application of the rules, as well as by the cumulative constraints imposed by the timelines. Unfortunately, the previous definition, although intuitive, is not easily translatable into a reasoning process which guarantees its achievement starting from the definition of the planning problem. For this reason, just like common partial-order planners, timeline-based planners often rely on the concepts of flaw and resolver. The planner, in particular, internally maintains a data structure, called token network, which represents a partial plan π = (T , N ), where T is a set of tokens whose parameters are constrained by the constraint network N . During the resolution process, the reasoner incrementally refines the current token network π by identifying its flaws and by solving them through the application of resolvers, while maintaining consistent the constraints of N . There can be, in general, different types of flaws, each resolvable by applying the corresponding resolvers. The achievement of a goal, for example, can take R. De Benedictis et al. place either through the application of a rule or through a unification with either a fact or another already achieved goal with the same predicate (i.e., the parameters of the current goal and the token with which is unifying are constrained to be pairwise equal). In case of disjunctions, introduced either in the initial problem or by the application of a rule, a disjunct must be chosen. The domain of all the variables that make up the token parameters must be reduced to a single allowed value. Finally, timelines must be consistent, possibly requiring the introduction of constraints which prevent not allowed overlaps. Thanks to the introduction of the flaw and resolver concepts, it is therefore possible to provide an implementable definition of solution. Specifically, a solution to a timelinebased planning problem is a flawless token network whose constraint network is consistent. 3.1 A Lifted Heuristic for Timeline-Based Planning Finding a solution to a timeline-based planning problem is far from simple. Choosing the right flaw and the right resolver, in particular, constitutes a crucial aspect for coping with the computational complexity and hence efficiently generating solutions. 
Taking a cue from classical planning heuristics, [13] describes how, by building a causal graph and by analyzing its topology, it is possible to estimate the costs for the resolution of the flaws and for the application of the resolvers. Flaws and resolvers, in particular, are seen as if they were, respectively, classical planning propositions and actions. The effect of applying a resolver is, intuitively, the resolution of a flaw (the sole positive effect of the corresponding classical action). In the case of the application of a rule or the choice of a disjunct in a disjunction, however, further flaws (the preconditions for the corresponding classical action) can be introduced. Starting from the initial facts, with a zero estimated resolution cost, the cost of applying a resolver can be estimated as an intrinsic cost of the resolver plus the maximum cost of its precondition flaws (hmax heuristic). The cost of resolving a flaw, on the other hand, is given by the minimum cost of its resolvers. Starting from the top-level goals present in the planning problem, initially estimated with infinite cost, a graph is constructed by proceeding backwards, considering all the possible resolvers for all the possible flaws. The estimated costs are updated every time a unification is found or in those cases in which the resolver does not introduce further flaws. Finally, the graph building procedure proceeds until a finite estimated cost for the top-level goals is reached. Compared to other state-of-the-art timeline-based solvers, the above heuristics allow solving problems up to one order of magnitude faster [13]. The most interesting aspect for the current topic, however, concerns the management of the causal constraints in the causal graph. Similar to planning models based on satisfiability [27], indeed, a set of propositional variables is assigned to flaws and to resolvers. For the sake of brevity we will use subscripts to indicate flaws (e.g., ϕ0, ϕ1, etc.), resolvers (e.g., ρ0, ρ1, etc.) as well as their associated propositional variables. Additionally, given a flaw ϕ, we refer to the set of its possible resolvers by means of res(ϕ) and, by means of cause(ϕ), to the set of resolvers (possibly empty, in case of the flaws of the problem's requirement) which are responsible for introducing it. Moreover, given a resolver ρ, we refer to the set of its preconditions (e.g., the set of tokens introduced by the application of a rule) by means of precs(ρ) and to the flaw solved through its application by means of eff(ρ). The introduction of such variables allows to constrain them so as to guarantee the satisfaction of the causal relations. Specifically, for each flaw ϕi, we guarantee that the preconditions of all the applied resolvers are satisfied (ϕi = ⋁ρk∈cause(ϕi) ρk (1)) and that at least one resolver is active whenever the flaw becomes active (ϕi ⇒ ⋁ρl∈res(ϕi) ρl (2)). Additionally, we need a gimmick to link the presence of the tokens with the causality constraint. A further variable σ ∈ {inactive, active, unified}, in this regard, is associated to each token. A partial solution will hence consist solely of those tokens of the token network which are active. Moreover, in case such tokens are goals, the bodies of the associated rules must also be present within the solution. Later on, we refer to tokens by means of the σ variables (we will use subscripts to describe specific tokens, e.g., σ0, σ1, etc.) and to the flaws introduced by tokens by means of the ϕ(σ) function.
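As a concrete illustration of the causal constraints just described, the following sketch shows how the propositional variables of flaws and resolvers, together with constraints (1) and (2), could be encoded as clauses. It is only a minimal approximation in Python, with hypothetical class and function names that do not come from the paper or its solver.

```python
from itertools import count

# A minimal sketch of the propositional encoding of the causal relations (1) and (2).
# Class and function names are illustrative, not taken from the paper or its solver.

_next_var = count(1)

class Flaw:
    def __init__(self, name):
        self.name = name
        self.var = next(_next_var)   # propositional variable associated to the flaw
        self.resolvers = []          # res(phi): resolvers that can solve this flaw
        self.causes = []             # cause(phi): resolvers whose application introduced it

class Resolver:
    def __init__(self, name, intrinsic_cost=1.0):
        self.name = name
        self.var = next(_next_var)   # propositional variable associated to the resolver
        self.intrinsic_cost = intrinsic_cost
        self.preconditions = []      # precs(rho): flaws introduced by applying it

def causal_clauses(flaws):
    """Encode (1) phi_i = OR of cause(phi_i) and (2) phi_i => OR of res(phi_i).

    Clauses are lists of literals: a positive literal is +var, a negative one is -var.
    """
    clauses = []
    for phi in flaws:
        causes = [r.var for r in phi.causes]
        for c in causes:                         # each applied cause activates the flaw
            clauses.append([-c, phi.var])
        if causes:                               # requirement flaws have an empty cause set
            clauses.append([-phi.var] + causes)  # and are simply asserted true elsewhere
        clauses.append([-phi.var] + [r.var for r in phi.resolvers])   # constraint (2)
    return clauses
```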
The last aspect to consider concerns the update of such variables as a consequence of the activation of a rule application resolver and of a unification resolver. Specifically, each rule application resolver ρa binds the σa variable of the goal token, whose rule has been applied, to assume the active value (formally, ρa = [ϕ(σa) = active]). Finally, for each unification resolver ρu representing the unification of a token σu with a target token σt, the constraints ρu = [σu = unified] and ρu ⇒ [σt = active] guarantee the update of the σ variables, while adding ϕ(σt) to the preconditions of ρu guarantees the operation of the heuristic. 3.2 An Explanatory Example In order to better understand how the heuristics and the causality constraints work, we introduce in this section a very simple example of planning problem, whose objective is to plan a physical rehabilitation session for a hypothetical user. Figure 2 shows the causal graph which is generated for the problem, whose problem requirement is constituted by the sole goal σ0. Fig. 2. An example of causal graph for the planning of a physical rehabilitation session. Tokens' parameters are omitted to avoid burdening the notation. Estimated costs for flaws (boxes) and resolvers (circles) are on their upper right. The propositional variables that participate in the causal constraints are on their upper left. Solid (True) and dashed (Unassigned) contour lines are used to distinguish flaws' and resolvers' associated propositional variables' values. In the figure, in particular, the ϕ0 variable, representing a flaw which is present in the problem requirement and therefore must necessarily be solved, assumes the True value. It is worth noting that, in the example, the ϕ0 flaw, for achieving the σ0 goal, can only be solved through the ρ0 resolver, which is hence directly applied (notice the solid line) as a consequence of the propagation of the causal constraints. Since res(ϕ0) = {ρ0}, indeed, the expression (2) translates into ϕ0 ⇒ ρ0. This, in turn, forces the σ0 goal to assume the active value as a consequence of ρ0 = [ϕ(σ0) = active]. The ρ0 resolver, furthermore, represents the application of a rule having a PhysicalExercise() in the head and, in the body, a conjunction of the two σ1 and σ2 goals. The application of this resolver, in particular, introduces the ϕ1 = ϕ(σ1) and the ϕ2 = ϕ(σ2) flaws, each of which must necessarily be resolved as a consequence of the ϕ1 = ρ0 and ϕ2 = ρ0 causal constraints, from the expression (1). These flaws, in turn, can be solved through the application of the ρ1 and of the ρ2 resolvers which introduce, respectively, the disjunctions represented by the ϕ3 and ϕ4 flaws. Proceeding backwards, the propagation of the causal constraints no longer allows to infer what is present in the current partial plan (notice the dashed lines). The resolution of the ϕ3 and ϕ4 flaws, in particular, constitutes two choices that the planner must make during the resolution process. The ϕ3 flaw, for example, can be solved either by applying the Disj 0 disjunct, represented by the ρ3 resolver, or by applying the Disj 1 disjunct, represented by the ρ4 resolver. The graph construction process, however, which proceeds following a breadth-first approach, has identified, in the example, a possible solution for the ϕ3 flaw by applying first the ρ3 resolver and then the ρ7 resolver (the latter corresponding, in this simple example, to a rule with an empty body).
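The cost estimation that drives these choices can be sketched as a simple fixed-point computation over the causal graph. The snippet below, reusing the hypothetical Flaw and Resolver classes of the previous sketch, approximates the hmax-style propagation described in Sect. 3.1; it is an illustration under those assumptions, not the authors' actual procedure.

```python
import math

def estimate_costs(flaws, resolvers, max_iterations=100):
    """Approximate the h_max-style cost propagation over the causal graph.

    The cost of a resolver is its intrinsic cost plus the maximum estimated cost
    of its precondition flaws; the cost of a flaw is the minimum cost over its
    resolvers; everything starts at infinity and is refined to a fixed point.
    """
    flaw_cost = {phi: math.inf for phi in flaws}
    resolver_cost = {rho: math.inf for rho in resolvers}
    for _ in range(max_iterations):
        changed = False
        for rho in resolvers:
            pre = [flaw_cost[phi] for phi in rho.preconditions]
            cost = rho.intrinsic_cost + (max(pre) if pre else 0.0)
            if cost < resolver_cost[rho]:
                resolver_cost[rho], changed = cost, True
        for phi in flaws:
            if phi.resolvers:
                cost = min(resolver_cost[rho] for rho in phi.resolvers)
                if cost < flaw_cost[phi]:
                    flaw_cost[phi], changed = cost, True
        if not changed:
            break
    return flaw_cost, resolver_cost
```

On a graph shaped like the one of Fig. 2, a resolver whose precondition flaws never receive a finite estimate (as ρ4 in the example) keeps an infinite cost, while a resolver whose branch bottoms out in a rule with an empty body (as ρ3 via ρ7) obtains a finite one and is therefore preferred.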
The heuristics’ estimated costs propagation procedure, hence, makes the ρ3 resolver, with an estimated cost of 2, much more attractive than the ρ4 resolver, with an estimated cost of ∞. For a similar reason, the ρ5 resolver will be preferred over the ρ6 resolver, leading to a (possible) solution of the planning problem. It is worth noting that, for the sake of simplicity, the tokens’ parameters are not represented in the example figure. All tokens, however, are endowed with numerical variables that represent the start and the end of the associated activities, appropriately constrained according to common sense. Upper and lower body exercises, for example, represented respectively by the σ1 and by the σ2 tokens, will take place as part of the more general physical exercise represented by the σ0 token. The σ3 and by the σ5 tokens, additionally, are endowed with their τ variables which will avoid their temporal overlapping if they will assume the same value. An Architecture for Deliberative and Reactive Reasoning In order to integrate the deliberative and reactive capabilities we have adopted an architecture that, from a high-level perspective, is depicted in Fig. 3. Taking inspiration from classical robotics architectures [21], specifically, our system consists of a deliberative tier responsible for the generation, the execution and the dynamic adaptation of the plans; a sequencing tier which, through the application of a policy (out of the scope of this paper), executes a sequence of Incremental Timeline-Based Planning actions according to the current state of the system; and a sensing and a controlling tier, which respectively interprets data produced by sensors and translates the sequencer’s actions into lower level commands for the actuators. Particularly interesting from an execution perspective, it is worth noting that the state, according to which actions are selected from the sequencer tier policy, is described by the combination of three distinct states: – the ss state, generated by the sensing tier and characterized as a consequence of the interpretation of sensory data, is able to represent, for example, the intentions of the users, the estimation of their current pose, the users’ emotions perceived from the camera, as well as situations which might be dangerous for both the users and for the robot; – the sc state, generated by the control tier, representing the state of the controllers such as whether the robot is navigating or not, or whether the robot is talking to or listening to the user; – the sd state, generated by the deliberative tier, representing the high-level commands generated as a result of the execution of the planned plans Similarly, the actions executed by the sequencer tier can be of three distinct types: – the as actions, towards the sensors, responsible, for example, for their activation or for their shutdown; – the ac actions, towards the controllers, responsible, for example, for activating contextual navigation commands as well as conversational interactions with the users; – the ad actions, towards the deliberative tier, responsible, for example, for the creation and for the adaptation of the generated plans. Fig. 3. The three-layer architecture. It is worth noting that, through the application of the π (s) policy, the sequencing tier can act both indirectly on the environment, through the ac actions, and, through the ad actions, introspectively on other higher-level forms of reasoning adopted by the agent itself. 
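To make the dataflow of Fig. 3 more tangible, the following sketch shows one possible way of combining the three sub-states and of routing the actions selected by the policy towards the corresponding tiers. All names and structures here are hypothetical illustrations, not the actual interfaces of the system.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

@dataclass(frozen=True)
class State:
    s_s: Dict[str, object] = field(default_factory=dict)  # sensing tier: intentions, pose, emotions, dangers
    s_c: Dict[str, object] = field(default_factory=dict)  # control tier: navigating, talking, listening, ...
    s_d: Dict[str, object] = field(default_factory=dict)  # deliberative tier: dispatched high-level commands

@dataclass(frozen=True)
class Action:
    target: str            # "sensors" (a_s), "controllers" (a_c) or "deliberative" (a_d)
    name: str
    args: Tuple = ()

def sequencer_step(state: State,
                   policy: Callable[[State], List[Action]],
                   dispatch: Dict[str, Callable[[Action], None]]) -> None:
    """Apply the policy pi(s) to the combined state and route each action to its tier."""
    for action in policy(state):
        dispatch[action.target](action)
```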
The high-level actions generated by the deliberative tier while executing the plans, moreover, constituting only one component among those that determine the choice of the actions by the policy, are not mandatory for the autonomy of the robot and represent a sort of “suggestions”, for the agent, on the things to do. Plan Execution and Possible Adaptations Once the graph has been built, the heuristics introduced in the previous section guide the resolution process by providing an indication of the flaws to be solved R. De Benedictis et al. (those with the most expensive estimate) through the application of their best resolvers (those with the least expensive estimate)1 . After a flawless partial plan has been found, it is time to execute it. An internal current-time variable, in particular, is incremented at each execution step (e.g., every second) and whenever its value is greater than or equal to the beginning (end) of unexecuted (executing) tasks, these are started (ended). The generated plan, however, must deal with the evolving reality of all the (often, unpredictable) events that can happen in the real world. More than simply dispatching the planned activities, indeed, the main challenge of executing a plan consists in modifying it to make it fit the current reality. The introduction of the causal graph, in particular, can also be useful during the execution of the plan whenever should it be necessary to update it. Coherently with what described in [25], the possible adaptations that a plan can undergo during its execution, specifically, represented by the ad actions of Fig. 3, can be of four types: temporal delays, in case tasks are not ready to start/end; variable freezes, to freeze, for example, a start variable of a task and prevent the violation of a duration constraint in case of delays on its termination; task failures, in case inconsistent constraints related to the task are introduced or unexpected events decrees its failure; and requirement additions, in case the underlying reactive module requires the introduction of a new requirement (e.g., a new goal). We have no theoretical results in this regard but it is worth noting that it is possible to build a new plan by incrementally introducing new requirements from scratch. Additionally, adding delays and failing tasks can bring the solver back to root level. These considerations, coherently with the theoretical results on classical planning, suggest that the cost of the adaptations, exception made for the freezes, is equal, asymptotically, to the cost of re-planning. Most of the time, however, an adapted plan has little difference from the original plan. Furthermore, the information gathered during the initial search can be exploited to generate an adapted plan. For this reason we aim to empirically show that adaptation, especially in those contexts where reactivity is required, can be advantageous. The pursued approach consists of introducing a new propositional variable, called ξ, that will be used, before starting the execution, to force the propagation of the execution constraints (i.e., delays and freezes). Additional propositional variables, called σiξ and associated to each token σi , will be used as the “reason” for the propagation of the execution constraints. Finally, these variables must be causally linked to the planned activities, so as to allow, as a consequence of the introduction of an inconsistent constraint, the removal from the plan of the corresponding activity. 
We obtain this result by introducing, for each token σi, the clause {¬ξ, ¬[σi = active], σiξ}. Once the planning problem has been solved, the assignment of the true value to the ξ variable will cause, through propagation, the assignment of the true value to the σiξ variables corresponding to those tokens σi which are in the current plan. (1 There is, intuitively, no guarantee that the built graph contains a solution. Similarly to what happens in Graphplan [5], indeed, the addition of a "layer" to the graph might be required.) Since variables σiξ are assigned at the last search level, they can be safely used as the "reason" for the propagation of the execution constraints. The introduction of an inconsistent constraint will first lead to the analysis of the introduced conflict, allowing to carry out a more targeted backtracking (a.k.a. backjumping [14]). Whenever possible, the tokens incompatible with the execution are eliminated and, subsequently, the resolution process is invoked to guarantee the satisfaction of the causal constraints. Finally, in case some delaying constraints cannot be added or, in the absence of alternative plans, some tokens cannot be removed, the false value is assigned to the ξ variable, decreeing the unexecutability of the plan and, consequently, the need to re-plan. Experimental Results Fig. 4. Adaptation vs re-planning from scratch in case of failures and new requirements. We have conducted some experiments to demonstrate the effectiveness of the proposed approach. Given the project needs, in particular, we focused on planning problems similar to those described in Sect. 3.2, in which the user has to carry out some physical and cognitive rehabilitation exercises to keep active and prolong his/her health and well-being. In particular, series of physical exercises chosen from 14 different types (e.g., Chest press, Biceps curl, etc.) are planned in order to guarantee the training of all parts of the body. The exercises are repeated several times and with different characteristics depending on the profile of the user. The interesting aspect regarding the current experimentation is that the user may refuse to perform these exercises, or he/she may have problems doing them. For this reason, the planner must be ready to provide alternatives that still achieve the desired rehabilitative goals (i.e., training all the parts of the body). Whenever the user is particularly confident in carrying out the exercises, on the contrary, the system could add new tasks through the addition of new goals to the planning problem, hence requesting the adaptation of the plan to take into account the new requirements dynamically introduced. We have left out the temporal adaptations as they are less interesting and already managed by several existing frameworks. Figure 4a shows a comparison between adaptation and re-planning in 5 different generated plans. To demonstrate the effectiveness of the proposed approach, in particular, we artificially made the first, the second and the third activity fail during execution. This allows us to compare the adaptation times with the times required by the solver to generate a new plan from scratch without failing the task. It is worth noting how subsequent adaptations often require less and less time.
This is because the information collected during the previous searches (the topology of the causal graph and the no-goods learned during the search) are exploited to make the adaptation more effective. Figure 4b, on the contrary, compares the adaptation times with the re-planning times in the case of adding one and two new goals. Also in this case the information collected during the previous searches are exploited, reducing the calculation time necessary for the addition of new goals. In the event new plans have to be generated from scratch, on the contrary, these would have an increasing number of goals and would therefore require more and more calculation times. Although the reasoning times are relatively small, for this type of planning problems, we are talking about robots that interact with people. Reducing the computation times allows such robots to behave more fluidly in a dynamic environment, in which the activities fail easily and where new goals can emerge during the execution of the planned tasks. Whether it’s a failure or the addition of a new requirement, the sum of the reasoning times of the adaptations is significantly less than the sum of the reasoning times of the re-plannings. Furthermore, as the number of adaptation increases, there is an ever greater divergence between adaptive and re-planning behaviors, showing that the more dynamic the environment, the more advantageous the adaptive approach is. The word agent comes from the Latin word agere which means, in English, “to do”. Much of the literature on automated planning, however, focuses on those forms of reasoning that lead to the definition of a plan, rather than its actual application to the real world, hence neglecting much of that agere. Acting in the real world requires an agent to be able to adapt to the agent’s perception which might not necessarily be consistent with the expected plans, either because the agent’s knowledge is partial, or because of the impossibility to predict the behavior of other agents acting in the same environment. Much of an agent’s behavior in the real world is therefore related to reacting to its dynamical evolution, taking advantage, from time to time, of higher-level information coming from more deliberative forms of reasoning which in turn require high adaptability skills. In this paper we have presented some techniques that allow to realize these adaptation skills. An underlying reactive tier, in particular, continuously reacts to the environment’s dynamic changes. When perceiving particular situations, Incremental Timeline-Based Planning this module triggers adjustments to the planned tasks, which can range from introducing delays to the removal of some tasks till to the generation of (part of) new plans. Adapting a plan, in general, can be as complex as generating a new plan from scratch. Since an adapted plan is typically similar to the original plan, however, it is possible to exploit part of the information learned in the initial search to make the adaptation more efficient. Empirical results show that some of the data structures introduced to make the reasoning process more efficient can be exploited also to improve the dynamic adaptation of the plan to the emerging reality. References 1. Apt, K.R., Wallace, M.G.: Constraint Logic Programming Using ECLi PSe . Cambridge University Press, New York (2007) 2. Bensalem, S., Havelund, K., Orlandini, A.: Verification and validation meet planning and scheduling. Int. J. Softw. Tools Technol. Transfer 16(1), 1–12 (2014) 3. 
Bertoli, P., Cimatti, A., Roveri, M., Traverso, P.: Strong planning under partial observability. Artif. Intell. 170(4), 337–384 (2006). https://doi.org/ 10.1016/j.artint.2006.01.004. https://www.sciencedirect.com/science/article/pii/ S0004370206000075 4. Bit-Monnot, A., Ghallab, M., Ingrand, F., Smith, D.E.: FAPE: a Constraintbased Planner for Generative and Hierarchical Temporal Planning. arXiv preprint arXiv:2010.13121 (2020) 5. Blum, A.L., Furst, M.L.: Fast planning through planning graph analysis. Artif. Intell. 90(1–2), 281–300 (1997) 6. Cashmore, M., et al.: ROSPlan: planning in the robot operating system. In: Proceedings of the Twenty-Fifth International Conference on International Conference on Automated Planning and Scheduling, ICAPS 2015, pp. 333–341. AAAI Press (2015) 7. Castillo, L., Fdez-Olivares, J., Garc´ıa-P´erez, O., Palao, F.: Efficiently handling temporal knowledge in an HTN planner. In: Proceedings of the Sixteenth International Conference on International Conference on Automated Planning and Scheduling, ICAPS 2006, pp. 63–72. AAAI Press (2006) 8. Cesta, A., Cortellessa, G., Fratini, S., Oddi, A.: Developing an end-to-end planning application from a timeline representation framework. In: IAAI 2009, Proceedings of the 21st Innovative Applications of Artificial Intelligence Conference, Pasadena, CA, USA, pp. 66–71 (2009) 9. Cesta, A., Oddi, A.: Gaining efficiency and flexibility in the simple temporal problem. In: Chittaro, L., Goodwin, S., Hamilton, H., Montanari, A. (eds.) Proceedings of the Third International Workshop on Temporal Representation and Reasoning (TIME 1996), pp. 45–50. IEEE Computer Society Press, Los Alamitos (1996) 10. Cesta, A., Oddi, A., Smith, S.F.: A constraint-based method for project scheduling with time windows. J. Heuristics 8(1), 109–136 (2002). https://doi.org/10.1023/ A:1013617802515 11. Chien, S., Tran, D., Rabideau, G., Schaffer, S., Mandl, D., Frye, S.: Timeline-based space operations scheduling with external constraints. In: ICAPS 2010, Proceedings of the 20th International Conference on Automated Planning and Scheduling, pp. 34–41 (2010) R. De Benedictis et al. 12. Cialdea Mayer, M., Orlandini, A., Umbrico, A.: Planning and execution with flexible timelines: a formal account. Acta Informatica 53(6), 649–680 (2016). https:// doi.org/10.1007/s00236-015-0252-z 13. De Benedictis, R., Cesta, A.: Lifted heuristics for timeline-based planning. In: ECAI-2020, 24th European Conference on Artificial Intelligence, pp. 498–2337. Santiago de Compostela, Spain (2020) 14. Dechter, R.: Constraint Processing. Elsevier Morgan Kaufmann, Cambridge (2003) 15. Dvor´ ak, F., Bit-Monnot, A., Ingrand, F., Ghallab, M.: Plan-space hierarchical planning with the action notation modeling language. In: IEEE International Conference on Tools with Artificial Intelligence (ICTAI), Limassol, Cyprus (2014). https://hal.archives-ouvertes.fr/hal-01138105 16. Fox, M., Gerevini, A., Long, D., Serina, I.: Plan stability: replanning versus plan repair. In: Long, D., Smith, S.F., Borrajo, D., McCluskey, L. (eds.) Proceedings of the Sixteenth International Conference on Automated Planning and Scheduling, ICAPS 2006, Cumbria, UK, 6–10 June 2006, pp. 212–221. AAAI (2006). http:// www.aaai.org/Library/ICAPS/2006/icaps06-022.php 17. Fox, M., Long, D.: PDDL2.1: an extension to PDDL for expressing temporal planning domains. J. Artif. Intell. Res. 20, 61–124 (2003) 18. Frank, J., J´ onsson, A.K.: Constraint-based attribute and interval planning. Constraints 8 (4), 339–364 (2003) 19. 
Fratini, S., Pecora, F., Cesta, A.: Unifying planning and scheduling as timelines in a component-based perspective. Arch. Control Sci. 18(2), 231–271 (2008) 20. Fratini, S., Cesta, A., De Benedictis, R., Orlandini, A., Rasconi, R.: APSI-based deliberation in goal oriented autonomous controllers. In: ASTRA 2011 (2011) 21. Gat, E.: On three-layer architectures. In: Artificial Intelligence and Mobile Robots, pp. 195–210. AAAI Press (1997) 22. Gerevini, A., Serina, I.: Fast plan adaptation through planning graphs: local and systematic search techniques. In: Chien, S.A., Kambhampati, S., Knoblock, C.A. (eds.) Proceedings of the Fifth International Conference on Artificial Intelligence Planning Systems, Breckenridge, CO, USA, 14–17 April 2000, pp. 112–121. AAAI (2000). http://www.aaai.org/Library/AIPS/2000/aips00-012.php 23. Ghallab, M., Laruelle, H.: Representation and control in IxTeT, a temporal planner. In: AIPS 1994, Proceedings of the 2nd International Conference on AI Planning and Scheduling, pp. 61–67 (1994) 24. Ghallab, M., Nau, D., Traverso, P.: Automated Planning: Theory and Practice. Morgan Kaufmann Publishers Inc., Burlington (2004) 25. Ingrand, F., Ghallab, M.: Robotics and artificial intelligence: a perspective on deliberation functions. AI Commun. 27(1), 63–80 (2014). https://doi.org/10.3233/ AIC-130578. https:// hal.archives-ouvertes.fr/hal-01138117 26. Jonsson, A., Morris, P., Muscettola, N., Rajan, K., Smith, B.: Planning in interplanetary space: theory and practice. In: AIPS 2000, Proceedings of the Fifth International Conference on AI Planning and Scheduling, pp. 177–186 (2000) 27. Kautz, H., Selman, B.: Planning as satisfiability. In: ECAI, vol. 92, pp. 359–363 (1992) 28. van der Krogt, R., de Weerdt, M.: Plan repair as an extension of planning. In: Biundo, S., Myers, K.L., Rajan, K. (eds.) Proceedings of the Fifteenth International Conference on Automated Planning and Scheduling (ICAPS 2005), 5–10 June 2005, Monterey, California, USA, pp. 161–170. AAAI (2005). http://www. aaai.org/Library/ICAPS/2005/icaps05-017.php 29. Laborie, P.: Algorithms for propagating resource constraints in AI planning and scheduling: existing approaches and new results. Artif. Intell. 143, 151–188 (2003) Incremental Timeline-Based Planning 30. Laborie, P., Ghallab, M.: Planning with sharable resource constraints. In: Proceedings of the 14th International Joint Conference on Artificial Intelligence - Volume 2, IJCAI 1995, pp. 1643–1649. Morgan Kaufmann Publishers Inc. (1995) 31. McGann, C., Py, F., Rajan, K., Thomas, H., Henthorn, R., Mcewen, R.: A deliberative architecture for AUV control. In: 2008 IEEE International Conference on Robotics and Automation, pp. 1049–1054. IEEE (2008) 32. Morris, P., Muscettola, N., Vidal, T.: Dynamic control of plans with temporal uncertainty. In: Proceedings of the 17th International Joint Conference on Artificial Intelligence - Volume 1, IJCAI 2001, pp. 494–499. Morgan Kaufmann Publishers Inc., San Francisco (2001) 33. Morris, P.H., Muscettola, N.: Temporal dynamic controllability revisited. In: Veloso, M.M., Kambhampati, S. (eds.) Proceedings, The Twentieth National Conference on Artificial Intelligence and the Seventeenth Innovative Applications of Artificial Intelligence Conference, 9–13 July 2005, Pittsburgh, Pennsylvania, USA, pp. 1193–1198. AAAI Press/The MIT Press (2005). http://www.aaai.org/Library/ AAAI/2005/aaai05-189.php 34. Muscettola, N.: HSTS: integrating planning and scheduling. In: Zweben, M., Fox, M.S. (ed.) Intelligent Scheduling. 
Morgan Kauffmann (1994) 35. Nau, D.S., et al.: SHOP2: an HTN planning system. J. Artif. Intell. Res. 20, 379– 404 (2003) 36. Nau, D.S., Ghallab, M., Traverso, P.: Blended planning and acting: Preliminary approach, research challenges. In: Bonet, B., Koenig, S. (eds.) Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, 25–30 January 2015, Austin, Texas, USA, pp. 4047–4051. AAAI Press (2015) 37. Nebel, B., Koehler, J.: Plan reuse versus plan generation: a theoretical and empirical analysis. Artif. Intell. 76(1–2), 427–454 (1995) 38. Niemueller, T., Hofmann, T., Lakemeyer, G.: Goal reasoning in the CLIPS executive for integrated planning and execution. In: Proceedings of the International Conference on Automated Planning and Scheduling, vol. 29, no. 1, pp. 754–763 (2021) 39. Peot, M.A., Smith, D.E.: Conditional nonlinear planning. In: Proceedings of the First International Conference on Artificial Intelligence Planning Systems, pp. 189– 197. Morgan Kaufmann Publishers Inc., San Francisco (1992) 40. Saetti, A., Scala, E.: Optimising the stability in plan repair via compilation. In: Kumar, A., Thi´ebaux, S., Varakantham, P., Yeoh, W. (eds.) Proceedings of the Thirty-Second International Conference on Automated Planning and Scheduling, ICAPS 2022, Singapore (virtual), 13–24 June 2022, pp. 316–320. AAAI Press (2022) 41. Scala, E., Torasso, P.: Deordering and numeric macro actions for plan repair. In: Yang, Q., Wooldridge, M.J. (eds.) Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, IJCAI 2015, Buenos Aires, Argentina, 25–31 July 2015, pp. 1673–1681. AAAI Press (2015) 42. Smith, D.E., Frank, J., Cushing, W.: The ANML language. In: ICAPS Workshop on Knowledge Engineering for Planning and Scheduling (KEPS) (2008) 43. Smith, D.E., Frank, J., J´ onsson, A.K.: Bridging the gap between planning and scheduling. Knowl. Eng. Rev. 15(1), 47–83 (2000) 44. Stock, S., Mansouri, M., Pecora, F., Hertzberg, J.: Hierarchical hybrid planning in a mobile service robot. In: KI 2015 Proceedings, pp. 309–315 (2015) 45. Umbrico, A., Cesta, A., Cialdea Mayer, M., Orlandini, A.: Platinum: a new framework for planning and acting. In: AI*IA 2017 Proceedings, pp. 498–512 (2017) R. De Benedictis et al. 46. Verfaillie, G., Pralet, C., Lemaˆıtre, M.: How to model planning and scheduling problems using constraint networks on timelines. Knowl. Eng. Rev. 25(3), 319– 336 (2010) 47. Weld, D.S.: An introduction to least commitment planning. AI Mag. 15(4), 27–61 (1994) 48. Weld, D.S., Anderson, C.R., Smith, D.E.: Extending graphplan to handle uncertainty and sensing actions. In: Proceedings of the Fifteenth National/Tenth Conference on Artificial Intelligence/Innovative Applications of Artificial Intelligence, AAAI 1998/IAAI 1998, pp. 897–904. American Association for Artificial Intelligence, USA (1998) 49. Wilkins, D.E.: Practical Planning: Extending the Classical AI Planning Paradigm. Morgan Kaufmann Publishers, San Mateo (1988) 50. Zavatteri, M., Vigan` o, L.: Conditional simple temporal networks with uncertainty and decisions. Theor. Comput. Sci. 797, 77–101 (2019). https://doi. org/10.1016/j.tcs.2018.09.023. https://www.sciencedirect.com/science/article/pii/ S0304397518305942. 
Temporal Representation and Reasoning (TIME 2017) Knowledge Acquisition and Completion for Long-Term Human-Robot Interactions Using Knowledge Graph Embedding Ermanno Bartoli2 , Francesco Argenziano1(B) , Vincenzo Suriani1 , and Daniele Nardi1 1 Department of Computer, Control, and Management Engineering, Sapienza University of Rome, Rome, Italy {argenziano,suriani,nardi}@diag.uniroma1.it 2 Division of RPL (Robotics, Perception and Learning), KTH Royal Institute of Technology, Stockholm, Sweden [emailprotected] Abstract. In Human-Robot Interaction (HRI) systems, a challenging task is sharing the representation of the operational environment, fusing symbolic knowledge and perceptions, between users and robots. With the existing HRI pipelines, users can teach the robots some concepts to increase their knowledge base. Unfortunately, the data coming from the users are usually not enough dense for building a consistent representation. Furthermore, the existing approaches are not able to incrementally build up their knowledge base, which is very important when robots have to deal with dynamic contexts. To this end, we propose an architecture to gather data from users and environments in long-runs of continual learning. We adopt Knowledge Graph Embedding techniques to generalize the acquired information with the goal of incrementally extending the robot’s inner representation of the environment. We evaluate the performance of the overall continual learning architecture by measuring the capabilities of the robot of learning entities and relations coming from unknown contexts through a series of incremental learning sessions. Keywords: Human-robot interaction · Knowledge graphs · Knowledge graphs embeddings · Continual learning · Robots Knowledge base · Knowledge representation In the last years, robots started leaving laboratories to enter our daily environments where they are asked to autonomously operate, often sharing the working area with humans. To be effective in this goal, representing and storing information in a suitable way is fundamental regardless of the specific robotic applications. In particular, this problem acquires more relevance when designing Human-Robot Interaction (HRI) systems, since there is the intrinsic need to E. Bartoli and F. Argenziano—These two authors contributed equally. c The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 A. Dovier et al. (Eds.): AIxIA 2022, LNAI 13796, pp. 241–253, 2023. https://doi.org/10.1007/978-3-031-27181-6_17 E. Bartoli et al. Fig. 1. Complete architecture of the system: from the interaction with the user to the deployment of learned knowledge and capabilities after long-run training. make the human and the robot participants interact with each other. In order to make this interaction successful, the robot and the human not only must be able to communicate and understand each other, but also they should have a mutual understanding of the world they both operate in. Therefore, a shared semantic of the environment is needed in order to make the interaction successful. In many HRI applications, this knowledge (that is the building block on which the whole system is built) is often embedded in the agent’s behaviour and changes from one episode to another. A way to improve it can be through a generalization of the knowledge that is transferred and acquired by the robot. In fact, usually, it is very domain-dependent for the specific application of the system (Fig. 1). 
In this paper, we propose a novel architecture for acquiring knowledge from sparse data acquired from the environment. The acquired knowledge is represented and organized to improve the completeness of the previous knowledge base of the robot. This process leads to a more extensive knowledge base that is built up incrementally. The nature of the architecture is meant to be robust to any change in the context, so that it can be suitable in several HRI applications, even if very different from each other. A major advantage of the proposed approach is that, differently from previous HRI systems, it is not necessary to modify the software architecture when the context of the interaction changes, but it is only needed to start a new learning session that shapes the existing learning skills of the robot. The acquisition of the data is human-driven, and the human who collaborates with the robot is not required to know anything about the software of the agent, nor how the knowledge is represented: the user just needs to share his knowledge of the world with the robot. This process needs to take into account some aspects. First of all, this kind of interaction is not defined over a short period of time: long-runs are necessary to achieve good results. However, long-runs are not that common in the HRI field, since the interactions between humans and robots happen quite fast, and this problem must therefore be addressed. Moreover, because of these long-runs, the robot will face information that needs to be stored and effectively processed, without forgetting acquired knowledge as the run goes on. To solve these problems, the methodology we propose relies on Continual Learning (CL) and Knowledge Graph Embeddings (KGEs): the former is used to deal with the catastrophic forgetting phenomenon during incremental knowledge acquisition sessions, while the latter is used to efficiently use the information, stored in a Knowledge Graph (KG) database, to perform the knowledge completion. Fig. 2. Interaction with the robot before the long-run training and knowledge acquisition. The robot still has difficulties in carrying on a correct interaction. Fig. 3. Interaction with the robot after the long-run training and knowledge acquisition. The robot has improved its capabilities, can correctly carry on the interaction, and exploits it to learn new knowledge. In the end, the knowledge of the system spans from grounded facts about the environment to more general concepts on which the system can make predictions. This knowledge allows for several reasoning processes, based on the kind of query that the human operator may ask: if the query is very specific (namely, the human asks for a particular object in a particular location), the robot can answer by exploiting its experience, that is, what it has detected in the past explorations; for more general queries (namely, general objects or concepts), the robot can answer by making predictions depending on what it has learned, so by using an ontological scheme of the environment that it has slowly built in the past days. Related Work In order to have robots working and acting in human-shaped environments, semantic mapping approaches have been studied, aiming at constructing a common representation of the world between robots and humans [11].
To this end, there was a growing need of representing the knowledge of the robot with appropriate techniques in order to allow for faster and more precise reasoning about E. Bartoli et al. the environment the agents lived in. One particular way of knowledge representation that is demonstrated to be very effective is through triples [12], in which objects of the worlds are linked together by some sort of relation. This way of memorizing facts enabled the usage of a particular kind of data structure, the Knowledge Graphs (KGs) [4], in which is it possible to represent collections of triples as directed graphs. In those graphs, objects and entities are represented as nodes, and relations between entities are represented as directed edges. This representation allows for better data integration, unification, and reuse of information since it is also easier to represent ontological structures by the use of them. However, one of the biggest problems of KGs is that they do not scale well with size: the bigger the graph, the harder is to navigate through it and the harder is to make any sort of inference from it. For this reason, instead of working directly with KGs, through the years techniques of Knowledge Graph Embeddings (KGEs) [15] have been developed, in which KGs are transformed into lower-dimensional representation in order to reduce the number of parameters of the system while also preserving the information of the graph. Another problem in representing information with KGs is that when knowledge comes from multiple sources, there is often the possibility of incorporating contradictory pieces of information that will eventually compromise the quality of the system (in particular during the training of the embedding). For this reason, it is important to introduce in the process of knowledge acquisition some sort of validation procedure, and this validation can be done by interacting with humans. In recent years, the human participant in the interaction has acquired a bigger and bigger role in the robot’s acquisition of knowledge from the world [3,13], and this is because through the filtering process of a human we are able to transfer to the robot only useful information, that can significantly improve further reasoning processes down the interaction pipeline. Although the human can get rid of useless information, a human-drive acquisition of knowledge needs much time to be robust and efficient, because the data that the robot acquires through the human can be sparse and not cohesive. For that purpose, the development of systems capable to handle long-runs of one single experiment has become more popular [8]. This kind of experiment allows the robot to build up robust and dense knowledge. An interesting way to build up the robot’s knowledge is doing it incrementally through human-robot interaction. Such a class of problems has been addressed in applications focused on learning without forgetting [7]. These approaches typically operate in a task-based sequential learning setup. This formulation, which is rarely encountered in practical applications under this assumption, has been also studied in a task-free scenario [1]. The proposed approach aims at making the robot able to address the multirelational embedding problem while incrementally building up the robot’s knowledge base in a unique long-run. 
The goal mentioned can be subdivided into three subtasks which are addressed at the same time: acquiring data in collaboration Knowledge Acquisition and Completion for Long-Term HRI with the human, incorporating the acquired data in the infrastructure designed for semantic mapping, improving the accuracy of the robot’s predictions by training the model on the new data (Fig. 4). Fig. 4. The final task is the composition of three sub-tasks. Acquiring and Extending the Knowledge Base To properly build a knowledge base (KB) for the purpose of this work, we chose to have a basic predicate represented by a triple, (h, r, t), where h is the head entity, t is the tail entity, and r is the relation that connects the head to the tail. A set of those triples can be suitable for Continual Learning on Knowledge Graph Embedding. In fact, a dataset of triples can be easily split into learning sessions, each of them comprising a portion of the data. This can be used to simulate the fact that data are not all available at once, so in the training session n, only the n − th portion of the dataset is given to the model, and it trains itself only on those data. This procedure is valid, but it is assumed that even if the dataset is not given to the model entirely, it must be known in advance in order to be able to divide it. This is a huge constraint when dealing with real robots and real environments for two main reasons. The first is that, when the robot is put into an environment, the number and the type of the object in the environment are unknown. This means that the number of predicates that the robot collects when evolving in the environment, so the number of entities and relations of the robot’s knowledge base, can vary. The second reason is that also the number of tasks can vary. In fact, when the robot detects an unknown object, the system has to take care of a new entity but also a new task. The architecture will assign an embedding to the new entity and the next training will include also such an entity. From a conceptual point of view, the interaction between the robot and the human that cooperate in order to enlarge the knowledge base is shown in Fig. 5, on the left. In the context of Interactive Task Learning (ITL) [6], the E. Bartoli et al. Fig. 5. On the left, the process of acquiring meaningful information, composed by 3 phases: retrieving information (A), asking for correctness (B), and updating based on feedback (C). On the right, the workflow for a long-run execution. setup of our experiments aims at developing agents and systems that are focused on generality across different tasks. In fact, the main focus is the ability of the system to abstract concepts from different domains on a more general level. Our work, which exploits embedding algorithms on the triples of a KG, adopts these principles. The knowledge acquisition procedure consists of three different phases, that are chronologically consecutive. First, the objects detected using the YOLO Neural Network [14] come into the robot as simple labels, and the phase A starts. The robot queries its KB in order to retrieve the semantic meaning of the object detected. The semantic meaning could be also inaccurate: in fact, the more that entity appears in the KB, the more the embedding of that entity will be precise and, the predictions on that entity, more accurate. If there are not enough data that grant an accurate embedding of the entity, the predictions will be incorrect. 
The predictions are represented by the predicates (h, r, t) where the head entity is the detected object, the relation is chosen randomly among all the known relations, and the tail entity is the result of the prediction. After the generation of the predicates, the phase B starts. Here the robot asks the human for the correctness of the predicates by asking questions for each predicate. Communication is very important and it needs to be well-defined because misunderstanding could provoke incorrect answers that lead to the addition of wrong data to the KB. Since the data of the KB are not always human interpretable (“objInLoc” stands for “object is in Location”), according to the relation of the predicates, the question is generated so that it is human-understandable. As soon as the robot asks the question to the user, it waits for the user’s answer, and phase C starts. In this phase, the user can answer positively or negatively. If the user answers positively, it means that the robot’s prediction was correct, and the predicate (h, r, t) is a true fact, so it can be added to the KB. If the prediction is judged as false, the robot asks the user for the correct tail of the predicate (h, r, ?), where h and r are the same head entity and relation as before. Once the Knowledge Acquisition and Completion for Long-Term HRI user answers the robot with the correct tail entity, a new predicate is created, and it is added to the KB. In the end, both a correct prediction and an incorrect prediction lead to an addition of a true predicate in the KB. Moreover, when the robot adds the predicate to its KB, it provides an implicit consensus to the user. In this way, the user is able to know which predicate is being added to the knowledge base, and if there is an error, the user can recover from it. 3.2 Knowledge Graph Embedding In order to predict new predicates, we adopted the Knowledge Graph Embedding (KGE) technique, which uses supervised learning models capable to learn vector representations of nodes and edges. By definition, the objective of Knowledge Graph Embedding problem is to learn a continuous vector representation of a Knowledge Graph G which encodes vertices that represent entities E as a set of vectors vE ∈ R|E|×dE , where dE is the dimension of the vector of entities E, and as a set of edges which represent relations R as mappings between vectors WR ∈ R|R|×dR , where dR is the dimension of the vector of relations. The knowledge graph G is composed by triples (h, r, t), where h, t ∈ E are the head and tail of the relations, while r ∈ R is the relation itself. One example of such a triple is (bottle, hasMaterial, plastic). In literature, there are numerous ways of embedding the knowledge in a knowledge graph: transitional models, rotational models, gaussian models, and many others. However, independently on what is the class of methods that are used, the embedding is learned by minimizing the loss L computed on a scoring function f (h, r, t) over the set of triples in the knowledge graph, and over the set of negative triples that are generated by negative sampling over the same graph. For this research, the embedding model we used is ANALOGY that represents a relation as a matrix. This model can cope with asymmetrical relations and imposes the structure of the matrix to be a diagonal-block matrix, to minimize the number of parameters that need to be stored by the system. ANALOGY. In the field of KGEs, there are many numerous ways of representing the relations into lower dimensional spaces. 
Usually, these techniques are grouped into families of models that share the general principle by which the information is embedded. For instance, translational models (like TransE [2]) represent relationships as translations in the embedding space, while Gaussian embedding models also take into account the uncertainty of the information contained in a KG. Since these simpler models fail to correctly represent more complex kinds of relations (such as symmetrical relations), more advanced models are needed. For this reason, we chose ANALOGY as our KGE model. ANALOGY is an improvement of the RESCAL [9] model, a tensor factorization approach able to perform collective training on multi-relational data. In this approach, a triple (h, r, t) is represented as an entry in a three-way tensor X. A tensor entry X_ijk = 1 means that the triple composed of the i-th and the k-th entity as, respectively, head and tail, and the j-th relation is a true fact. Otherwise, unknown or non-existing facts have their entry set to 0. Each slice X_k of the tensor is then factorized as X_k ≈ A R_k A^T, where A is a matrix that contains the latent-component representation of the entities, while R_k is a matrix that models the interactions of the latent components. Both are computed by solving the minimization problem min_{A, R_k} f(A, R_k) + g(A, R_k), where

f(A, R_k) = (1/2) ||X_k − A R_k A^T||_F^2

and g is a regularization term

g(A, R_k) = (1/2) λ (||A||_F^2 + ||R_k||_F^2).

Starting from this approach, ANALOGY makes some important improvements: it constrains the relation matrix to be diagonal (like DistMult), it introduces complex-valued embeddings to cope with asymmetric relations, X = E W Ē^T (as ComplEx does), but most importantly it imposes analogical structures among the representations by means of diagonal-block relation matrices (reducing the number of parameters needed by the model), modifying the objective function as follows:

min_{v,W} E_{(s,r,o,y)∼D} ℓ(φ_{v,W}(s, r, o), y)
s.t. W_r W_r^T = W_r^T W_r   ∀ r ∈ R   (normality)      (4)
     W_r W_{r'} = W_{r'} W_r   ∀ r, r' ∈ R   (commutativity)

3.3 The described process is robust, because it allows a robot placed in a completely unknown environment to incrementally build a solid knowledge of it. A completely unknown environment means that no entity or relation is present in the KB of the robot at the beginning. Moreover, one of the advantages of this approach is that some knowledge can be transferred to the robot: for example, it is possible to exploit existing knowledge graph databases to give some a-priori knowledge to the robot, so that it builds up its KB much faster. During this process, the KB of the robot evolves with the environment, acquiring information and communicating with the human. This approach is meant for a single long run, instead of multiple short runs. Figure 5, on the right, shows the block scheme of such an approach. The circular block, depicting the robot and the user, wraps all the infrastructure responsible for enlarging the KB and communicating with the human, which is shown in Fig. 5, on the left. The two blocks, i.e. exploration and training, are mutually exclusive, and which of the two is called depends on whether a condition is verified. Three different conditions have been implemented.
Table 1. HITS@10 of ANALOGY with standard settings

                                  sess 0  sess 1  sess 2  sess 3  sess 4  sess 5
classical context on ai2thor 5      -       -       -       -       -
classical context on ai2thor 4      -       -       -       -      0.238   0.764
classical context on ai2thor 3      -       -       -      0.346   0.336   0.676
classical context on ai2thor 2      -       -      0.382   0.371   0.389   0.647
classical context on ai2thor 1      -      0.402   0.385   0.361   0.380   0.558
classical context on ai2thor 0    0.339    0.355   0.343   0.343   0.336   0.500

Table 2. MRR of ANALOGY with standard settings

                                  sess 0  sess 1  sess 2  sess 3  sess 4  sess 5
classical context on ai2thor 5      -       -       -       -       -
classical context on ai2thor 4      -       -       -       -      0.104   0.385
classical context on ai2thor 3      -       -       -      0.129   0.128   0.338
classical context on ai2thor 2      -       -      0.136   0.127   0.130   0.322
classical context on ai2thor 1      -      0.153   0.146   0.141   0.146   0.270
classical context on ai2thor 0    0.151    0.134   0.134   0.130   0.130   0.198

The first condition (shown in Fig. 5, on the right) deals with the amount of data collected by the robot during the exploration phase. This condition ensures that at each learning session the robot collects the same amount of data, so the dataset is always balanced. The second condition deals with the battery level of the robot: the robot is free to explore the environment until the battery goes below a certain threshold, then it returns to its docking station and, while recharging, performs a training session. The final condition is based only on time. Two periods, namely day and night, are defined: in the first one the robot explores, while in the latter it trains. In the evaluation of the presented work, we would like to capture the capability of the robot to exploit its knowledge while learning whatever the human teaches it. The learning procedure is built so as to recognize the entities in a certain environment, to learn the relations between these entities, and to predict them even when they are not explicitly mentioned by the human. The first thing that we want to prove is that models based on the standard learning process tend to forget what they have learned when new information arrives. In order to prove this, we have simulated with the TIAGo robot a situation in which it learns from the human some information belonging to a certain context, and then it is asked to learn other information from a different context. Fig. 6. The Loss during the learning sessions: 0, 2, 4, 5. (The last one, in blue, represents the training considering the last subset of the data acquired through the proposed methodology). This shows that the trend is constantly decreasing. (Color figure online) From a technical point of view, this experiment consists of training the robot over 6 learning sessions, using a dataset structure that is inspired by AI2THOR [5]. In the first one, the dataset sess 5 ai2thor has been taken as input, while for the subsequent 5 learning sessions the datasets sess i ai2thor with i ∈ {0, 1, 2, 3, 4} have been used. In particular, the dataset sess 5 ai2thor has been created by the robot through the methodology described. Moreover, the model used for this experiment is ANALOGY in the "classical context", which means that it has not been made suitable for continual learning but is used as the standard model for KGEs. The results of this experiment are shown in Tables 1 and 2. The two tables show the performance of the model in terms of HITS@10 (Hits at 10) and MRR (Mean Reciprocal Rank).
The two metrics, HITS@10 and MRR, are defined over the set Q of test triples as follows:

MRR = (1/|Q|) · Σ_{i=1..|Q|} 1 / rank_{(s,p,o)_i}

Hits@N = (1/|Q|) · |{ i : rank_{(s,p,o)_i} ≤ N }|,   with N = 10 in our evaluation.

Each table must be read from top to bottom, because the order is chronological. Each row reports the performance of the model (trained on the subset i of the dataset) with respect to the other subsets. The first row of Table 1, for instance, shows the HITS@10 of ANALOGY trained on sess 5 ai2thor. Since it has only been trained on that subset of the dataset, it has been evaluated only on sess 5 ai2thor. The row "classical context on ai2thor 2" shows the HITS@10 of ANALOGY which has been trained on sess 5 ai2thor, sess 4 ai2thor and sess 3 ai2thor (previously), and on sess 2 ai2thor (currently). This means that it can be evaluated on the subsets sess i ai2thor with i ∈ {2, 3, 4, 5}. The model runs into the catastrophic forgetting phenomenon: the more it trains on the subsets sess i ai2thor with i ∈ {4, 3, 2, 1, 0}, which contain the same entities and relations, the less precise its HITS@10 becomes on sess 5 ai2thor, whose data remain unseen in all the subsequent learning sessions. Fig. 7. The MRR during the learning sessions: 0, 3, 5 (the last one, in blue, represents the training considering the last subset of the data acquired through the proposed methodology). The graph on the left compares the MRR of the last learning session with the average MRR among all the previous learning sessions. (Color figure online) Fig. 8. The HITS@10 function during the learning sessions: 0, 3, 5 (the last one, depicted in blue, represents the training considering the last subset of the data acquired through the proposed methodology). The graph on the left compares the HITS@10 of the last learning session with the average HITS@10 among all the previous learning sessions. (Color figure online) For the next experiment, the model ANALOGY is considered only in the "continuous context", because it proves effective against the problem of catastrophic forgetting. The same dataset considered previously has been used, i.e. sess i ai2thor with i ∈ {0, 1, 2, 3, 4, 5}, where the partitions i ∈ {0, 1, 2, 3, 4} are composed of the same types of entities, while the partition i = 5 consists mostly of new entities. Figure 6 shows five different graphs, representing the trend of the loss function for each learning session. Looking at these graphs, some elements stand out. First, the trend of the loss is always decreasing, with the steepest decrease in the first forty epochs of each learning session. Since each learning session contains a limited amount of data, after some epochs the trend becomes quite stable and the model is no longer improving. At this point "early stopping", set with a patience of 50, stops the training for that learning session and starts the next one. Although the entities are almost the same in each learning session, the predicates are different; for this reason, at the beginning of each learning session the loss is rather high, but then it decreases. The overall trend of the loss decreases learning session by learning session. The loss function is an important metric for checking whether the model is learning, but it is not significant if considered alone; indeed, Fig. 7 and Fig. 8 show the graphs of the two metrics considered for the evaluation of the models, namely MRR and HITS@10. The increasing learning skills are confirmed by the
graphs of MRR and HITS. The model, in fact, is not only evaluated on the nth portion of the dataset given in input for training but all portions of the data are considered in the evaluation. Hence, if good performances were expected when evaluating the current portion of the dataset (see MRR/DevSess 5 in Fig. 7 and HITS/DevSess 5 in Fig. 8), it was not sure that it was also for the previous ones. The results showed a remarkable ability to not forget what is learned, and it is visible in MRR/DevSess i with i ∈ {0, 1, 2, 3, 4} and HITS/DevSess i with i ∈ {0, 1, 2, 3, 4}. In these graphs, the performance of the last learning session is marked with the color blue. Both for MRR and for HITS the performances of the last learning session (represented in blue color) are not worse than the performance of the model at the previous learning session (depicted in red). Finally, when evaluating performances, it might be worth considering also if they are affected by performative effects [10]. These phenomena have always been present in several fields like statistical decision theory and causal reasoning, but in the last years, they have been brought to attention also in the deep learning field. They can occur when predictions may influence the outcomes they are trying to predict, causing a distribution shift of the obtained results. It has been observed that these effects are reduced if multiple re-training procedures are performed. In the present work, we proposed a re-training procedure at the end of each learning session. This operation would reduce such distribution shifts. A video representing a key result of this work can be found in the following link: https://www.youtube.com/watch?v=vQbyn7hs8 4. It shows, through some snapshots of the video, the process of enlarging the knowledge base of the robot, thanks to the interaction with the human. With this procedure, entities that were first unknown, become part of the knowledge of the robot. Conclusions and Future Directions In this work, we show (as in Fig. 2 and 3) the ability of the robot to learn from unknown environments, relying on the answers of the human. Thanks to the proposed architecture, the robot uses Knowledge Graph Embedding techniques to generalize the acquired information with the goal of incrementally extending its inner representation of the environment. We evaluate the performance of the overall architecture by measuring the capabilities of the robot of learning entities and relations coming from unknown contexts through a series of incremental learning sessions, demonstrating the ability of the presented architecture to cope with the catastrophic forgetting phenomenon. For example, at the beginning of the experiments, the robot is unable to find any meaningful information of an unknown detected object, if it has been never encountered before. After some learning sessions, it has become able to retrieve accurate information about it. The learning process of the robot is human-driven, and the human is no more required to be an expert. This allows the application of the system in many dynamic scenarios when a robot needs to learn information about the operating environment. Despite the data that drive the learning being sparse and unbalanced, the designed architecture allows the learning curve to converge quickly. 
Knowledge Acquisition and Completion for Long-Term HRI The whole architecture, in addition to these improvements, would make the interactions between humans and robots more natural, making a further step toward the creation of systems that can handle long interactions with humans in an environment whose knowledge of it is incrementally built during the interaction, and it is not needed to give it in advance to the robot. References 1. Aljundi, R., Kelchtermans, K., Tuytelaars, T.: Task-free continual learning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11254–11263 (2019) 2. Bordes, A., Usunier, N., Garcia-Duran, A., Weston, J., Yakhnenko, O.: Translating embeddings for modeling multi-relational data. In: Advances in Neural Information Processing Systems, vol. 26 (2013) 3. Gemignani, G., Capobianco, R., Bastianelli, E., Bloisi, D.D., Iocchi, L., Nardi, D.: Living with robots: interactive environmental knowledge acquisition. Robot. Auton. Syst. 78, 1–16 (2016) 4. Ji, S., Pan, S., Cambria, E., Marttinen, P., Philip, S.Y.: A survey on knowledge graphs: representation, acquisition, and applications. IEEE Trans. Neural Netw. Learn. Syst. 33(2), 494–514 (2021) 5. Kolve, E., et al.: AI2-THOR: an interactive 3D environment for visual AI. arXiv preprint arXiv:1712.05474 (2017) 6. Laird, J.E., et al.: Interactive task learning. IEEE Intell. Syst. 32(4), 6–21 (2017) 7. Li, Z., Hoiem, D.: Learning without forgetting. IEEE Trans. Pattern Anal. Mach. Intell. 40(12), 2935–2947 (2018). https://doi.org/10.1109/TPAMI.2017.2773081 8. Lindblom, J., Andreasson, R.: Current challenges for UX evaluation of humanrobot interaction. In: Schlick, C., Trzcieli´ nski, S. (eds.) Advances in Ergonomics of Manufacturing: Managing the Enterprise of the Future, pp. 267–277. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-41697-7 24 9. Nickel, M., Tresp, V., Kriegel, H.P.: A three-way model for collective learning on multi-relational data. In: ICML (2011) 10. Perdomo, J., Zrnic, T., Mendler-D¨ unner, C., Hardt, M.: Performative prediction. In: International Conference on Machine Learning, pp. 7599–7609. PMLR (2020) 11. Pronobis, A.: Semantic mapping with mobile robots. Ph.D. thesis, KTH Royal Institute of Technology (2011) 12. Pronobis, A., Jensfelt, P.: Large-scale semantic mapping and reasoning with heterogeneous modalities. In: 2012 IEEE International Conference on Robotics and Automation, pp. 3515–3522. IEEE (2012) 13. Randelli, G., Bonanni, T.M., Iocchi, L., Nardi, D.: Knowledge acquisition through human-robot multimodal interaction. Intel. Serv. Robot. 6(1), 19–31 (2013) 14. Redmon, J., Farhadi, A.: Yolov3: an incremental improvement. arXiv preprint arXiv:1804.02767 (2018) 15. Wang, Q., Mao, Z., Wang, B., Guo, L.: Knowledge graph embedding: a survey of approaches and applications. IEEE Trans. Knowl. Data Eng. 29(12), 2724–2743 (2017) Construct, Merge, Solve and Adapt Applied to a Bus Driver Scheduling Problem with Complex Break Constraints Roberto Maria Rosati1(B) , Lucas Kletzander2 , Christian Blum3 , Nysret Musliu2 , and Andrea Schaerf1 1 DPIA, University of Udine, via delle Scienze 206, 33100 Udine, Italy {robertomaria.rosati,andrea.schaerf}@uniud.it Christian Doppler Laboratory for Artificial Intelligence and Optimization for Planning and Scheduling, DBAI, TU Wien, Vienna, Austria {lucas.kletzander,nysret.musliu}@tuwien.ac.at 3 Artificial Intelligence Research Institute (IIIA-CSIC), Campus of the UAB, Bellaterra, Spain Abstract. 
Bus Driver Scheduling (BDS) is a combinatorial optimization problem that consists in assigning atomic driving duties (legs) belonging to predetermined routes to bus drivers. We consider the highlyconstrained, real-world version of the problem proposed by Kletzander and Musliu (2020), with complex break rules specified by a collective agreement and public regulation. We propose a Construct, Merge, Solve and Adapt (CMSA) algorithm, which is a recent metaheuristic proposed by Blum et al. (2016) based on the idea of problem instance reduction. At each iteration of the algorithm, sub-instances of the original instance are solved by an exact solver. These sub-instances are obtained by merging the components of the solutions generated by a probabilistic greedy algorithm. We compare our method with the state-of-the-art approaches on the benchmark instances. The results show that CMSA compares favourably with other metaheuristics on most instances and with exact techniques on large ones. Keywords: Bus driver scheduling CMSA · Metaheuristics · Optimization · Driver scheduling problems are complex combinatorial problems that integrate the scheduling part with routing issues, due to the fact that drivers and vehicles get moved to different locations by their duties. Different driver scheduling problems have been proposed in the literature, differing among themselves mainly depending on the type of vehicles that are involved and constraints. c The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 A. Dovier et al. (Eds.): AIxIA 2022, LNAI 13796, pp. 254–267, 2023. https://doi.org/10.1007/978-3-031-27181-6_18 Construct, Merge, Solve and Adapt for Bus Driver Scheduling We consider here a Bus Driver Scheduling (BDS) problem, which is characterized by the fact that the atomic driving duties (called legs) are short compared to other vehicles (e.g., planes or trains). Therefore, the daily shift of a driver is composed of a relatively large number of independent legs, which must be assembled in a working shift respecting various regulations mainly connected to safety issues. We focus on the specific BDS formulation proposed by [13], which arises from a public transportation setting in Austria and is subject to many constraints related to rest time (breaks) regulated by legal requirements and collective agreements. This formulation comes with a challenging dataset composed of many realistic instances, which has already been used in the experimental analysis of a few exact and metaheuristic techniques [12,13,15]. We propose for this problem a Construct, Merge, Solve and Adapt (CMSA) approach, which is a metaheuristic technique recently proposed by [3], and applied to a variety of combinatorial problems [9,16,22]. Additionally, we have been able to reuse a greedy algorithm developed in a previous work [13], that we suitably randomized in order to employ it for the generation of solutions within the CMSA algorithm. For our CMSA solver, we performed a principled tuning procedure in order to obtain the best configuration of the parameters and we compared our tuned solver with the best results from the literature. The outcome is that our solver is able to improve the state-of-the-art results for a range of problem instances, in particular for the large ones. 
Problem Description The investigated Bus Driver Scheduling problem deals with the assignment of bus drivers to vehicles that already have a predetermined route for one day of operation, according to the rules specified by an Austrian collective agreement. We use the same specification as presented in previous work [13], where the reader can find a more detailed description of the problem. 2.1 Problem Input The bus routes are given as a set L of individual bus legs, each leg ∈ L is associated with a tour tour (corresponding to a particular vehicle), a start time start , an end time end , a starting position startPos , and an end position endPos . The actual driving time for the leg is denoted by drive . The benchmark instances use drive = length = end − start . Table 1 shows a short example of one particular bus tour. The vehicle starts at time 360 (6:00 am) at position 0, does multiple legs with stops including waiting time at positions 1 and 2 and finally returns to position 0. A valid tour never has overlapping bus legs and consecutive bus legs satisfy endPos i = startPos i+1 . A tour change occurs when a driver has an assignment of two consecutive bus legs i and j with tour i = tour j . R. M. Rosati et al. Table 1. A bus tour example tour start end startPos endPos 1 1 A distance matrix specifies, for each pair of positions p and q, the time dp,q a driver takes to get from p to q when not actively driving a bus. If no transfer is possible, then dp,q = ∞. dp,q with p = q is called the passive ride time. dp,p represents the time it takes to switch tour at the same position, but is not considered passive ride time. Finally, each position p is associated with an amount of working time for starting a shift (startWork p ) and ending a shift (endWork p ) at that position. The instances in this paper use startWork p = 15 and endWork p = 10 at the depot (p = 0), to take into account the time needed to enter and exit the depot. These values are 0 for other positions, given that the bus is already on the street. 2.2 A solution to the problem is an assignment of exactly one driver to each bus leg. Criteria for feasibility are: – No overlapping bus legs are assigned to any driver. – Changing tour or position between consecutive assignments i and j requires start j ≥ end i + dendPos i ,startPos j . – Each shift respects all hard constraints regarding work regulations as specified in the next section. 2.3 Work and Break Regulations Valid shifts for drivers are constrained by work regulations and require frequent breaks. First, different measures of time related to a shift s containing the set of bus legs Ls need to be distinguished, as visualized in Fig. 1: – The total amount of driving time: Ds = i∈Ls drive i – The span from the start of work until the end of work Ts with a maximum of Tmax = 14 h. – The working time Ws = Ts − unpaid s , not including certain unpaid breaks. Driving Time Regulations. The maximum driving time is restricted to Dmax = 9 h. The whole distance startj − endi between consecutive bus legs i and j qualifies as a driving break, including passive ride time. Breaks from driving need to be taken repeatedly after at most 4 h of driving time. In case a Construct, Merge, Solve and Adapt for Bus Driver Scheduling Fig. 1. Example shift Fig. 2. Rest break positioning driving break is split in several parts, all parts must occur before a driving block exceeds the 4-h limit. Once the required amount of break time is reached, a new driving block starts. 
The following options are possible: – One break of at least 30 min – Two breaks of at least 20 min each – Three breaks of at least 15 min each Working Time Regulations. The working time Ws has a hard maximum of Wmax = 10 h and a soft minimum of Wmin = 6.5 h. If the employee is working for a shorter period of time, the difference has to be paid anyway. The actual paid working time is Ws = max{Ws ; 390}. A minimum rest break is required according to the following options: – Ws < 6 h: no rest break – 6 h ≤ Ws ≤ 9 h: at least 30 min – Ws > 9 h: at least 45 min The rest break may be split into one part of at least 30 min and one or more parts of at least 15 min. The first part has to occur after at most 6 h of work. Note that a break can be a rest break and driving break simultaneously or just qualify as one of the two types. Whether rest breaks are paid or unpaid depends on break positions according to Fig. 2. Every period of at least 15 min of consecutive rest break is unpaid as long as it does not intersect the first 2 or the last 2 h of the shift (a longer rest break might be partially paid and partially unpaid). The maximum amount of unpaid rest is limited: R. M. Rosati et al. – If 30 consecutive minutes of rest break are located such that they do not intersect the first 3 h of the shift or the last 3 h of the shift, at most 1.5 h of unpaid rest are allowed. – Otherwise, at most one hour of unpaid rest is allowed. Rest breaks beyond this limit are paid. Shift Splits. If a rest break exceeds 3 hours, it is instead considered a shift split, which is unpaid and does not count towards Ws . However, such splits are typically regarded badly by the drivers. A shift split counts as a driving break, but does not contribute to rest breaks. 2.4 As argued in previous work [13], practical schedules must not consider only operating costs. The objective cost s = 2 · Ws + Ts + ride s + 30 · ch s + 180 · split s represents a linear combination of several criteria for shift s. The paid working time Ws is the main objective and it is combined with the total time Ts to reduce long unpaid periods for employees. The next sub-objectives reduce the passive ride time ride s and the number of tour changes ch s , which is beneficial for both employees and efficient schedules. The last objective aims to reduce the number of shift splits split s as they are very unpopular. Related Work Different variants of BDS have been studied from the early 60’s [27]. The BDS is often modelled as a Set Partitioning Problem and exact methods have been used in many publications to solve various variants of this problem [8,15,18, 23,25]. To solve very large real-world problems in a reasonable time, several metaheuristic methods have been studied for BDS. Such methods include Greedy approaches [20], Tabu Search [12,24], Simulated Annealing [13], GRASP [7], and Genetic Algorithms [17,19]. The problem definition of BDS is highly dependent on the country’s labour regulations, therefore, algorithms for other BDS variants cannot be used directly for the Austrian rules, which are more complex than most found in the literature. Previous work mostly focuses on cost only, sometimes including minimizing idle times and vehicle changes [6,11], but without considering the additional objectives for shift ergonomics that are considered for the BDS problem in this paper. 
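For concreteness, the shift objective cost_s defined in Sect. 2.4 above can be sketched as follows. This is a hedged illustration only: the Shift fields and the function name are ours, not taken from the reference implementation, and all times are assumed to be expressed in minutes, so that the 6.5-hour soft minimum becomes 390.

from dataclasses import dataclass

@dataclass
class Shift:
    working_time: int    # W_s, working time in minutes
    total_time: int      # T_s, span from start to end of work, in minutes
    ride_time: int       # total passive ride time, in minutes
    tour_changes: int    # number of tour changes ch_s
    shift_splits: int    # number of shift splits split_s

def shift_cost(s: Shift) -> int:
    # Paid working time W'_s: at least 390 minutes (6.5 h) are paid in any case.
    paid_working_time = max(s.working_time, 390)
    return (2 * paid_working_time + s.total_time
            + s.ride_time + 30 * s.tour_changes + 180 * s.shift_splits)

# Example: an 8-hour shift spanning 9 hours, 20 min passive ride, one tour change, no split.
print(shift_cost(Shift(480, 540, 20, 1, 0)))   # 2*480 + 540 + 20 + 30 + 0 = 1550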
Our problem variant has been introduced recently in the literature, and, to the best of our knowledge, the recently introduced exact approach based on branch and price [15], the metaheuristic approaches simulated annealing [13] and tabu search [12], as well as the application of problem-independent hyper-heuristics in combination with a set of problem-dependent low-level heuristics [14], represent the current state of the art for this problem. Although these approaches give very good results, the optimal solutions are still not known for most instances. Therefore, the investigation of new approaches is important for this problem. The CMSA Approach to the BDS Problem Construct, Merge, Solve and Adapt (CMSA) is a metaheuristic that was proposed recently in [3], based on the idea of problem instance reduction [4]. At each iteration, the algorithm generates a number of solutions in a probabilistic way (Construct). The solution components found in these solutions are added to an initially empty sub-instance of the tackled problem instance (Merge). Then, an independent algorithm, typically an exact solver, is applied to the current sub-instance, in order to find the best possible solution to the original problem instance that only contains solution components currently present in the sub-instance (Solve). Finally, the sub-instance is adapted according to the result of the independent algorithm, in such a way that the solution components that are frequently chosen by the independent algorithm are kept, while those that are never used for a certain number of iterations are discarded (Adapt). The four phases are repeated in a loop until a certain stopping criterion is met, CPU running time being the most commonly employed one. When the independent algorithm is a MIP solver like CPLEX or Gurobi, which is the typical case for CMSA, the procedure can be said to be a matheuristic, because it envelops an exact solver inside a metaheuristic procedure. 4.1 The CMSA Algorithm Our CMSA algorithm for the BDS Problem is based on the following main idea. Given the set of legs L = {ℓ_1, ..., ℓ_n}, let S be the collection of all possible feasible bus shifts, where each shift s ∈ S is a sequence of legs that does not violate any of the constraints of the problem. A feasible solution is any collection of shifts φ ⊂ S such that every leg ℓ ∈ L belongs to one and only one shift s ∈ φ. Solution φ is a valid solution for the set partitioning problem on S. Let then t_{ℓs} ∈ {0, 1} be 1 if leg ℓ forms part of shift s, and 0 otherwise. Moreover, let c_s be the cost of shift s, calculated according to the objectives explained in Sect. 2. If we were able to enumerate all shifts in S, the optimal solution of the BDS problem could be found by solving the following ILP model of the set partitioning problem to optimality:

min  Σ_{s ∈ S} c_s · x_s                         (2)
s.t. Σ_{s ∈ S} t_{ℓs} · x_s = 1   ∀ ℓ ∈ L        (3)
     x_s ∈ {0, 1}                 ∀ s ∈ S
Algorithm 1. CMSA for the BDS Problem
 1: input: a set of legs L, values for nsols, drate, agelimit
 2: Φbsf ← ∅; S' ← ∅
 3: while CPU time limit not reached do
 4:   for i ← 1, ..., nsols do
 5:     Φcur ← ProbabilisticGenerateSolution(L, drate)
 6:     if Φcur is better than Φbsf then Φbsf ← Φcur end if
 7:     for all s ∈ Φcur such that s ∉ S' do
 8:       S' ← S' ∪ {s}
 9:       age[s] ← 0
10:     end for
11:   end for
12:   Φopt ← ApplyExactSolver(S')
13:   if Φopt is better than Φbsf then Φbsf ← Φopt end if
14:   for all s ∈ S' do
15:     if s ∈ Φopt then age[s] ← 0 else age[s] ← age[s] + 1 end if
16:     if age[s] > agelimit then S' ← S' \ {s} end if
17:   end for
18: end while
19: output: Φbsf

This ILP model is based on a binary variable x_s for each bus shift s ∈ S, whereby a value of x_s = 1 means that shift s is chosen to be part of the solution. Moreover, constraints (3) ensure that each leg in L is present exactly once among the chosen bus shifts. In this way, all bus legs will be assigned to exactly one bus driver and no legs will be left uncovered. The objective (2) is to minimize the total cost, which is the sum of the costs c_s of the shifts that belong to the solution. Nonetheless, in real-world instances, and in most instances proposed for this formulation, the cardinality of set S is too large for the enumeration of the shifts to be practical, and even the application of some efficient generation procedures, such as backtracking, would lead to ILP models that are too large to be solved in reasonable time with the current availability of memory and computational resources. However, we can use the above ILP model for solving reduced sub-instances S' ⊂ S, as required by the solve phase of CMSA. Algorithm 1 provides the pseudo-code of our CMSA algorithm for the BDS problem. The CMSA takes as input the values for the following three parameters: – nsols, which fixes the number of solutions to be probabilistically generated by the construction procedure at each CMSA iteration. – drate, which guides the determinism rate in the solution construction procedure. – agelimit, which limits the number of iterations a solution component (shift) s can remain in the sub-instance S' without being chosen by the exact solver. Note that the age of a solution component s is maintained in a variable age[s]. CMSA starts with the initialization of the best solution found so far, Φbsf, to an empty solution. Moreover, the sub-instance S' is initialized to an empty set. The main loop of CMSA starts in line 3 of Algorithm 1. The four phases of CMSA take place, respectively: construct at line 5, merge at lines 7–10, solve at line 12 and adapt at lines 14–17. At each CMSA iteration, the construct and merge steps are repeated until nsols solutions are generated and merged into S'. The construction procedure, specifically, is called at line 5, and it consists in a probabilistic greedy heuristic for generating a solution Φcur over the original set of legs L. The construction procedure uses a parameter drate to decide whether certain internal choices are performed in a deterministic or probabilistic way. Details on the heuristic procedure are given in Sect. 4.2. After the construction of every new solution, the corresponding merge step is performed in lines 7–10, that is, all those shifts s ∈ Φcur that are not yet present in sub-instance S' are added to S', and their age values age[s] are initialized to zero. After generating and merging nsols solutions, the CMSA algorithm enters into the solve phase, which takes place at line 12.
In our case, the ILP solver CPLEX 20.1 is applied in function ApplyExactSolver(S'). This is done by solving the ILP model stated at the beginning of this section after replacing all occurrences of S with S'. We do not make use of a time limit for CPLEX, and the time limit for the solve phase is equal to the remaining CPU time budget. This implies that, apart from the last iteration of CMSA, when CPLEX may be capped by the time limit, the solution Φopt found in the solve phase is always the optimal one for the sub-instance S'. Finally, in lines 14–17, the sub-instance is adapted. This adaptation comprises the following steps. First, the ages of the shifts in Φopt are re-set to zero. Secondly, the age values of all remaining shifts from S' are incremented by one. Finally, all shifts s ∈ S' with age[s] > agelimit are removed from S'. 4.2 Greedy Heuristic The greedy heuristic employed in the construction step of our CMSA, which is called at line 5 of Algorithm 1, is described in Algorithm 2. It is a revisited and randomized version of the greedy algorithm proposed in [13]. The procedure takes as input a value for parameter drate. The algorithm starts by sorting the legs, which is done at line 3 of Algorithm 2 in function ApplySorting. This sub-procedure adds the legs one by one into a sorted sequence Lsorted, initially empty, choosing among those legs that have not been added to Lsorted yet. Every new entry is chosen according to the following criterion: with probability drate, the leg with the earliest start time is added to Lsorted; otherwise, that is, with probability 1 − drate, a random leg is chosen. If drate is set to 1.0, legs in Lsorted are sorted according to their start time, as done in the original algorithm [13]. Then, beginning at line 4, the main loop of the algorithm takes place. The legs are explored in the order defined by Lsorted, and each leg ℓ is inserted either in the shift that produces the least cost increase, or in a newly created shift if the cost of a new shift containing solely ℓ is less than the least cost increase plus a certain threshold τ. Function SetThreshold chooses the value of τ as follows: with probability drate, τ is set to a fixed value of 500, while with probability 1 − drate a random number between 500 and 1000 is chosen uniformly. These bounds (500, respectively 1000) were selected according to problem-specific knowledge. After inserting a leg in an existing or in a new shift, the algorithm tries to perform all feasible additions to that shift of other legs that belong to the same tour as ℓ. This sub-procedure explores the legs by increasing start time, and it terminates at the first infeasible insertion or when no other legs of the same tour are left. The procedure ends when all legs from Lsorted have been added to the shifts in the solution Φcur.

Algorithm 2. Probabilistic greedy procedure
 1: input: a set of legs L, value for drate
 2: Φcur ← ∅
 3: Lsorted ← ApplySorting(L, drate)
 4: for all ℓ in Lsorted do
 5:   sbest ← argmin_{s ∈ Φcur} ( c(s ∪ {ℓ}) − c(s) )
 6:   τ ← SetThreshold(drate)
 7:   if c({ℓ}) < c(sbest ∪ {ℓ}) − c(sbest) + τ then
 8:     Add new shift {ℓ} to Φcur
 9:   else
10:     Add leg ℓ to shift sbest in Φcur
11:   end if
12:   for all ℓ' ≠ ℓ in Lsorted such that tour(ℓ') = tour(ℓ) do
13:     Add leg ℓ' to shift sbest in Φcur if sbest ∪ {ℓ'} is feasible
14:   end for
15:   Remove from Lsorted all legs added to shifts in Φcur at the current iteration
16: end for
17: output: Φcur
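To make the interplay between the construction procedure and the solve and adapt phases concrete, the following Python fragment gives a schematic rendering of Algorithm 1. It is only an illustration under stated assumptions: the actual solver is implemented in C++ with CPLEX, and the helper functions construct_solution, solve_subinstance and cost stand for Algorithm 2, the ILP solve over S', and the shift-cost objective, respectively.

import time

def cmsa(legs, n_sols, d_rate, age_limit, time_limit,
         construct_solution, solve_subinstance, cost):
    """Schematic CMSA loop. construct_solution(legs, d_rate) plays the role of
    Algorithm 2, solve_subinstance(shifts) the role of the ILP solve over S',
    and cost(solution) the role of the shift-cost objective. Shifts are assumed
    to be hashable objects (e.g. frozensets of legs)."""
    best = None                          # Phi_bsf
    sub_instance = {}                    # S': shift -> age
    start = time.time()
    while time.time() - start < time_limit:
        for _ in range(n_sols):                            # construct
            current = construct_solution(legs, d_rate)
            if best is None or cost(current) < cost(best):
                best = current
            for shift in current:                          # merge: new shifts get age 0
                if shift not in sub_instance:
                    sub_instance[shift] = 0
        optimal = solve_subinstance(set(sub_instance))     # solve (exact solver on S')
        if cost(optimal) < cost(best):
            best = optimal
        for shift in list(sub_instance):                   # adapt
            sub_instance[shift] = 0 if shift in optimal else sub_instance[shift] + 1
            if sub_instance[shift] > age_limit:
                del sub_instance[shift]
    return best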
Experimental Results We tested the CMSA algorithm on the wide set of realistic instances available in the literature. Instances sizes range from 10 tours (about 70 legs) to 250 tours (at most 2500 legs). The instances with sizes 10–100 were released in [13], while the larger instances with sizes 150–250 were introduced later in [12]. We compare our CMSA with the state-of-the-art algorithms previously presented in the literature: Simulated Annealing (SA) and Hill Climbing (HC) [13], Tabu Search (TS) [12], and three hyper-heuristics using low-level heuristics proposed in [14]: Chuang-Pruning (CH-PR) [5], a combination of adaptive mechanisms to manage a set of active low-level heuristics (GIHH) [21], and a streamlined (lean) version of GIHH (L-GIHH) [1]. We compare the results also with the Branch and Price (B&P) developed by [15]. Construct, Merge, Solve and Adapt for Bus Driver Scheduling Table 2. CMSA parameters, the considered domains for parameter tuning, and the finally determined parameter values. Parameter 10–100 tours 150 − 250 tours Domain Value Domain Value {2, 3, ..., 500} 300 {2, 3, ..., 200} 66 [0.50, 1.00] [0.80, 1.00] {2, 3, ..., 50} {2, 3, ..., 30} We implemented the CMSA in C++ and compiled with GNU g++, version 9.4.0, on Ubuntu 20.04.4 LTS. The experiments were run on a machine equipped with an AMD Ryzen Threadripper PRO 3975WX processor with 32 cores, with a base clock frequency of 3.5 GHz, 64 GB of RAM. We allowed one core per experiment. The experiments for other algorithms were run on a different and slower machine, with a base clock frequency of 2.20 GHz and max frequency of 2.90 GHz. Although a completely fair comparison is not possible, for the abovementioned reasons and because the algorithms were not all implemented in the same programming language, experimental data presented in Sect. 5.2 clearly shows that CMSA is able to outperform other metaheuristics on most instance classes, even if the time limit for CMSA is kept much shorter than for other methods. 5.1 Parameters Tuning We tuned the values for parameters nsols , drate and clist through the automatic algorithm configuration tool json2run [26], which implements the F-RACE procedure [2]. The parameter space was sampled using a Hammersley point set [10]. We independently tuned the parameters for the instances with sizes from 10 to 100 tours and for the new larger instances, with sizes spread from 150 to 250 tours. Indeed, we had to allow smaller domains for the larger instances because combinations of high values of agelimit and nsols together with small drate are very likely to give birth to ILP models that are too large and that may saturate the memory during the solve phase. Parameters nsols and agelimit have domains of natural numbers, while drate takes real numbers with a precision of two decimal places. Table 2 shows the domains that we applied to the parameters and the different outcomes of the tuning procedures. 5.2 Analysis of the Results Table 3 shows the average results grouped by instance sizes for different methods. Each instance size class contains five distinct instances and we executed 10 independent runs on each instance, so that each value is calculated over 50 runs. The values for SA and HC are also taken over 10 runs per instance, while for R. M. Rosati et al. Table 3. Average results (costs) for classes of instances (sizes expressed by number of tours) and methods. 
Size CMSA L-GIHH B&P 102822.9 104333.0 104296.4 104926.2 103921.5 103491.6 103467.3 121141.9 123225.6 123304.0 123632.2 122502.9 122198.6 122321.8 118524.2 138760.3 140914.0 140508.0 140482.4 139931.8 139648.2 139551.9 134513.8 155078.3 157426.0 156862.4 156296.4 155520.8 155560.8 155649.6 150370.8 100 171786.7 174501.8 172909.0 172916.0 171901.0 171879.8 172763.7 172582.2 150 263387.7 266705.5 265492.3 265654.8 – 200 349017.0 354408.4 353494.9 350747.2 – 250 439234.5 446525.0 446000.9 443845.8 – the hyper-heuristics 5 runs per instance were executed. TS and B&P are deterministic, so runs are not repeated. All algorithms worked with time limits of 1 h, except for B&P, which was allowed up to 2 h. Values in bold report best results within metaheuristics, while underlined values are the best values including also the exact approach. We can observe that CMSA outperforms other metaheuristics on all instance groups but the smallest one, sized 10. In general, the best results for instances up to size 90 remain the one set by the B&P, whilst, for larger instances, CMSA gets the new best results. For larger instances sized 150, 200 and 250 tours, only data for SA, HC and TS are available for comparison. Table 4 shows mean values of the objective function collected from the same CMSA experiments as those presented in Table 3 after 15 min (900 s) and 30 min (1800 s). We compare them with the state-of-the-art metaheuristic, which is specified in the column benchmark. We report also the results of the B&P, which has a time limit of 2 h, but it may stop before, if an optimal solution is found, so actual B&P execution time is specified as well. Results that improve or equal the current state-of-the-art within metaheuristics are marked in bold. Data show that CMSA converges very quickly toward good solutions. After 15 min it already shows better results than other metaheuristics for 10 out of 13 instance classes, and for 11 out of 13 after 30 min. For instances sized 100, CMSA after 15 or 30 min is already capable to perform better also than the exact method in 2 h, but not better than the hyper-heuristic L-GIHH. Data suggest also that CMSA is not likely to get stuck on early local minima, as we can see always a consistent decrease of the cost function value over time. Finally, the fact that CMSA is able to provide good solutions quickly may be interesting for real-world applications, where human decision makers are likely to prefer to wait short times to have in hand the results of the automated scheduling. Construct, Merge, Solve and Adapt for Bus Driver Scheduling Table 4. CMSA results (costs) measured after 15, 30, and 60 min (900, 1800 and 3600 s), and comparison with state-of-the-art metaheuristics and B&P. Best values among metaheuristic methods are in bold. Instances sizes CMSA - average 900 s 1800 s 3600 s Benchmark Method 3600 s SA B&P Time 30745.9 L-GIHH 50817.2 L-GIHH 68499.9 GIHH 86389.2 L-GIHH 103206.0 102998.3 102822.9 L-GIHH 103467.3 121734.6 121410.7 121141.9 GIHH 6460.4 118524.2 139397.4 139073.7 138760.3 L-GIHH 139551.9 5912.4 134513.8 155674.5 155387.4 155078.3 CH-PR 172447.3 172086.9 7390.4 150370.8 171786.7 L-GIHH 171833.5 7395.8 172582.2 264261.6 263803.9 263387.7 HC 350638.9 349707.2 349017.0 TS 441917.3 440364.5 439234.5 TS We applied the CMSA metaheuristic to BDS, a complex and challenging realworld problem that integrates scheduling and routing issues. CMSA turned out to compare favourably with the state-of-the-art metaheuristics for this problem. 
In particular, it showed good performances on the large instances, which are in general the most critical ones. In the future, we plan to investigate the use of feature-based tuning mechanisms, in which the parameters are not fixed to specific values, but are computed by functions of the features of instance. Indeed, our analysis highlighted that the best parameter configuration depends on some of the features, in particular those related to the size of the instance. We would like to study also the option of performing an online tuning of the CMSA parameters, so that the parameters are adjusted during the single execution of the algorithm, using some learning mechanism. Finally, we will investigate the use of different techniques for the building blocks of the CMSA technique, in particular for the construct phase. To this aim, we plan to test both other greedy techniques and some form of backtracking procedure to generate shifts with suitable characteristics. Acknowledgements. We thank Tommaso Mannelli Mazzoli for helpful discussions about the BDS problem and for sharing the code of the problem validator with us. R. M. Rosati et al. Roberto Maria Rosati acknowledges support by TAILOR, a project funded by EU Horizon 2020 research and innovation programme under GA No 952215, which facilitated his research stay at the IIIA-CSIC. The financial support by the Austrian Federal Ministry for Digital and Economic Affairs, the National Foundation for Research, Technology and Development and the Christian Doppler Research Association is gratefully acknowledged by Lucas Kletzander and Nysret Musliu. Finally, Christian Blum acknowledges support by grant PID2019-104156GB-I00 funded by MCIN/AEI/10.13039/501100011033. References 1. Adriaensen, S., Now´e, A.: Case study: an analysis of accidental complexity in a state-of-the-art hyper-heuristic for HyFlex. In: 2016 IEEE Congress on Evolutionary Computation (CEC), pp. 1485–1492. IEEE (2016) 2. Birattari, M., Yuan, Z., Balaprakash, P., St¨ utzle, T.: F-race and iterated F-race: an overview. In: Experimental Methods for the Analysis of Optimization Algorithms, pp. 311–336 (2010) 3. Blum, C., Pinacho, P., L´ opez-Ib´ an ˜ez, M., Lozano, J.A.: Construct, merge, solve & adapt a new general algorithm for combinatorial optimization. Comput. Oper. Res. 68, 75–88 (2016) 4. Blum, C., Raidl, G.R.: Hybridization based on problem instance reduction. In: Blum, C., Raidl, G.R. (eds.) Hybrid Metaheuristics. AIFTA, pp. 45–62. Springer, Cham (2016). https://doi.org/ 10.1007/978-3-319-30883-8 3 5. Chuang, C.Y.: Combining multiple heuristics: studies on neighborhood-base heuristics and sampling-based heuristics. Ph.D. thesis, Carnegie Mellon University (2020) 6. Constantino, A.A., de Mendon¸ca Neto, C.F.X., de Araujo, S.A., Landa-Silva, D., Calvi, R., dos Santos, A.F.: Solving a large real-world bus driver scheduling problem with a multi-assignment based heuristic algorithm. J. Univ. Comput. Sci. 23(5), 479–504 (2017) 7. De Leone, R., Festa, P., Marchitto, E.: Solving a bus driver scheduling problem with randomized multistart heuristics. Int. Trans. Oper. Res. 18(6), 707–727 (2011) 8. Desrochers, M., Soumis, F.: A column generation approach to the urban transit crew scheduling problem. Transp. Sci. 23(1), 1–13 (1989) 9. Ferrer, J., Chicano, F., Ortega-Toro, J.A.: CMSA algorithm for solving the prioritized pairwise test data generation problem in software product lines. J. Heuristics 27(1), 229–249 (2021) 10. Hammersley, J.M., Handscomb, D.C.: Monte Carlo Methods. 
Chapman and Hall, London (1964) 11. Ibarra-Rojas, O., Delgado, F., Giesen, R., Mu˜ noz, J.: Planning, operation, and control of bus transport systems: a literature review. Transp. Res. Part B Methodol. 77, 38–75 (2015) 12. Kletzander, L., Mazzoli, T.M., Musliu, N.: Metaheuristic algorithms for the bus driver scheduling problem with complex break constraints. In: Proceedings of the Genetic and Evolutionary Computation Conference, pp. 232–240 (2022) 13. Kletzander, L., Musliu, N.: Solving large real-life bus driver scheduling problems with complex break constraints. In: Proceedings of the International Conference on Automated Planning and Scheduling, vol. 30, pp. 421–429 (2020) Construct, Merge, Solve and Adapt for Bus Driver Scheduling 14. Kletzander, L., Musliu, N.: Hyper-heuristics for personnel scheduling domains. In: Proceedings of the International Conference on Automated Planning and Scheduling, vol. 32, pp. 462–470 (2022) 15. Kletzander, L., Musliu, N., Van Hentenryck, P.: Branch and price for bus driver scheduling with complex break constraints. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 11853–11861 (2021) 16. Lewis, R., Thiruvady, D., Morgan, K.: Finding happiness: an analysis of the maximum happy vertices problem. Comput. Oper. Res. 103, 265–276 (2019) 17. Li, J., Kwan, R.S.: A fuzzy genetic algorithm for driver scheduling. Eur. J. Oper. Res. 147(2), 334–344 (2003) 18. Lin, D.Y., Hsu, C.L.: A column generation algorithm for the bus driver scheduling problem. J. Adv. Transp. 50(8), 1598–1615 (2016) 19. Louren¸co, H.R., Paix˜ ao, J.P., Portugal, R.: Multiobjective metaheuristics for the bus driver scheduling problem. Transp. Sci. 35(3), 331–343 (2001) 20. Martello, S., Toth, P.: A heuristic approach to the bus driver scheduling problem. Eur. J. Oper. Res. 24(1), 106–117 (1986) 21. Misir, M., De Causmaecker, P., Vanden Berghe, G., Verbeeck, K.: An adaptive hyper-heuristic for CHeSC 2011. In: OR53 Annual Conference, Date: 2011/09/06– 2011/09/08, Location: Nottingham, UK (2011) 22. Pinacho-Davidson, P., Bouamama, S., Blum, C.: Application of CMSA to the minimum capacitated dominating set problem. In: Proceedings of the Genetic and Evolutionary Computation Conference, pp. 321–328 (2019) 23. Portugal, R., Louren¸co, H.R., Paix˜ ao, J.P.: Driver scheduling problem modelling. Public Transp. 1(2), 103–120 (2008) 24. Shen, Y., Kwan, R.S.K.: Tabu search for driver scheduling. In: Fandel, G., Trockel, W., Aliprantis, C.D., Kovenock, D., Voß, S., Daduna, J.R. (eds.) Computer-Aided Scheduling of Public Transport, vol. 505, pp. 121–135. Springer, Heidelberg (2001). https://doi.org/10.1007/978-3-642-56423-9 7 25. Smith, B.M., Wren, A.: A bus crew scheduling system using a set covering formulation. Transp. Res. Part A General 22(2), 97–108 (1988) 26. Urli, T.: json2run: a tool for experiment design & analysis. CoRR abs/1305.1112 (2013) 27. Wren, A.: Scheduling vehicles and their drivers-forty years’ experience. University of Leed, Technical report (2004) Topic Modelling and Frame Identification for Political Arguments Shohreh Haddadan1 , Elena Cabrio2 , Axel J. Soto3,4 , and Serena Villata2(B) 1 University of Luxembourg, Esch-sur-Alzette, Luxembourg [emailprotected] 2 Université Côte d’Azur, CNRS, Inria, I3S, Nice, France {elena.cabrio,serena.villata}@univ-cotedazur.fr 3 Universidad Nacional del Sur, Bahía Blanca, Argentina Institute for Computer Science and Engineering (CONICET–UNS), Bahía Blanca, Argentina [emailprotected] Abstract. 
Presidential debates are one of the most salient moments of a presidential campaign, where candidates are challenged to discuss the main contemporary and historical issues in a country. These debates represent a natural ground for argumentative analysis, which has been always employed to investigate political discourse structure in philosophy and linguistics. In this paper, we take the challenge to analyse these debates from the topic modeling and framing perspective, to enrich the investigation of these data. Our contribution is threefold: first, we apply transformer-based language models (i.e., BERT and RoBERTa) to the classification of generic frames showing that these models improve the results presented in the literature for frame identification; second, we investigate the task of topic modelling in political arguments from the U.S. presidential campaign debates, applying an unsupervised machine learning approach; and finally, we discuss various visualisations of the identified topics and frames from these U.S. presidential election debates to allow a further interpretation of such data. Keywords: Argument mining · Framing · Political debates Argumentation is a rhetoric means used by politicians to put forward their own arguments in front of their audience. As highlighted by Boydstun et al. [3], candidates strive to focus the debate on a topic that advantages them and/or their party. A candidate whose party’s or administration’s economy was thriving would either prefer to discuss topics related to economy or try as much as she can to portray her arguments on other topics from the perspective of economics. The later strategy is referred to as framing in rhetoric. Entman [10] defines framing as follows: “To frame is to select some aspects of a perceived reality and make them more salient in a communicating text, in such a way as c The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 A. Dovier et al. (Eds.): AIxIA 2022, LNAI 13796, pp. 268–281, 2023. https://doi.org/10.1007/978-3-031-27181-6_19 Topic Modelling and Frame Identification for Political Arguments to promote a particular problem definition, casual interpretation, moral evaluation and/or treatment recommendation.” In the U.S. presidential debates, topics are customarily demanded by the audience either explicitly (i.e., through questions by moderators), e.g., the Iraq war which dominates debates in 2004, or implicitly, i.e., as an important issue which the audience might crave hearing about like the Watergate scandal in 1976. Topics and frames cover two different viewpoints on the arguments put forward in the debate. On the one hand, topics are identified by the keywords that make them distinct from the other topics. The language or the set of keywords describing the topic of an argument are the same regardless of the stance the debater is taking towards this topic, e.g., Iraq, war, military, Saddam Hossein. On the other hand, framing is how an argument by a debater is put forward through selected words to react to the discussion about the topics in debate. Lakoff [11] highlights the importance of framing in political speeches and debates by giving an example from the United States politics. He points out that the term “tax relief” introduced by George W. Bush’s administration puts the topic of “taxation” in a frame which implies that the party who is advocating taxation is a villain, while the (Republican) party against it is relieving people from this affliction. 
In the example below, about the topic of “death penalty”, from the 1988 U.S. presidential elections, the candidate from the Democratic party Micheal Dukakis chooses words such as education and prevention in his premises against death penalty, whilst the Republican candidate George H.W Bush uses words like inhibiting, rape and brutalization. Dukakis’s choice of words portrays his argument on death penalty in a different framing dimension than Bush’s one, and this is how their stance for and against death penalty is formed. Both arguments are on the topic of “death penalty”. Thus, framing can be a determining factor in the recognition of the stance for or against a topic in a debate, as framing defines the aspects about which a topic can be discussed. 1. Bush-Dukakis, September 25, 1988: DUKAKIS: “I’m opposed to the death penalty. I think everybody knows that. I’m also very tough on violent crime. And that’s one of the reasons why my state has cut crime by more than any other industrial state in America. It’s one of the reasons why we have the lowest murder rate of any industrial state in the country. It’s one of the reasons why we have a drug education and prevention program that is reaching out and helping youngsters all over our state, the kind of thing I want to do as president of the United States... ” LEHRER: “Response, Mr. Vice President.” BUSH: “... And I favor the death penalty. I know it’s tough and honest people can disagree. But when a narcotics wrapped up guy goes in and murders a police officer, I think they ought to pay with their life. And I do believe it would be inhibiting. And so I am not going to furlough men like Willie Horton, and I would meet with their, the victims of his last escapade, the rape and the brutalization of the family down there in Maryland. Maryland would not extradite Willie Horton, the man who was furloughed, the murderer, because they didn’t want him to be furloughed again. And so we have a fundamental difference on this one.” S. Haddadan et al. The automatic identification of topics and frames in political argumentation is therefore of main importance to enrich argument-based information like the claims put forward in the debate, the supporting evidence, and the relations between the identified arguments. In this paper, we address this issue on the ElecDeb60To16 dataset1 of U.S. political presidential debates [12,14]. More precisely, the three main contributions of this paper are the following: first, we apply transformer-based language models (i.e., BERT and RoBERTA) to classify generic frames on the “Media Frame Corpus” dataset with five topics, showing that these models improve the results achieved in the literature for the task of frame identification; second, we apply an unsupervised machine learning approach for topic modeling which takes advantage of sentence embeddings to represent the debates. This approach integrates the argument components in the debates as a source to extract issue-specific frames from the ElecDeb60To16 dataset; finally, we provide some visualisations of the identified topics and frames from different debates which allow for insightful interpretations. Such visualisations are also meant to be consumed by lay people, hence enabling the use of NLP methods by non-technical persons. Related Work In this section, we discuss the related literature focusing on Argument Mining (AM) with topic modeling and frame identification in the political domain. 
In computational studies of rhetoric in the political domain, two main definitions of frames have been discussed. In the first one, frames are defined in a certain predefined dimension space as in Boydstun et al. [4]: this definition is referred to as generic frames [20]. The other approach considers frames as an extra topic dimension to a speech which is defined by the choice of words in a statement [1,25]. This is referred to as issue-specific frames. The “Policy Frames Codebook” [4] considers the following 15 frames to be comprehensive enough to cover most issue-specific frames in most topics: economic frames, capacity and resources frames, morality frames, fairness and equality frames, constitutionality and jurisprudence frames, policy prescription and evaluation frames, law, order, crime and justice frames, security and defense frames, health and safety frames, quality of life frames, cultural identity frames, public opinion frames, political frames, external regulations and reputation frames and other frames. Boydstun et al. [4] discuss that issue-specific frames such as “right of life for a fetus” in the argument against the topic of “abortion” can be interpreted to fall into one of such generic framing dimensions, i.e., “Morality”. From the computational point of view, some approaches address the issue of automatically identifying topics and classifying frames in text. Nguyen et al. [21] introduce the concept of Hierarchical Ideal Point Topic Model (HIPTM) to identify Ideal Points from the bill speeches and voting patterns of Republican legislators in the U.S. Congress. Using the hierarchy of topics, they identify the issue-specific frames politicians used in their arguments. Tsur et al. [25] 1 https://github.com/pierpaologoffredo/disputool2.0/tree/main/Dataset/ElecDeb60 To16. Topic Modelling and Frame Identification for Political Arguments analyse framing strategies in an unsupervised setting using topic models fitted on time series through regression methods. The authors use this framework to identify temporal topic relations, and expressed agendas and analysis of framing dynamics known as “political spin”. They use lagged dependency between two time series to uncover framing patterns or attention shifts in the campaigns. The data they use consists of 134000 statements made by 641 representatives (i.e., members of Congress) between two Congressional Elections in 2010 and 2012. In alignment with the growth of attention towards applying computational methods in identifying frames in the social/political domains, Card et al. [5] build the Media Frame Corpus of news articles annotated with the above mentioned generic frames and the tone of the article (i.e., pro, anti, neutral). We describe this dataset in detail in Sect. 3. Hartmann et al. [15] also introduce a dataset of online fora discussions extracted from the Argument Extraction Corpus [24] annotated with a subset of generic frames from the Policy Frame Cookbook. Finally, Naderi and Hirst [20] compare several baselines with neural network based methods on multi-class classification and one-against-others classification to identify the generic frames on the Media Frame Corpus. They achieve highest accuracy using GRUs with pre-trained GloVe embeddings as features. Ajjour et al. [1] also leverage the concept of framing in arguments. They define frames to be non-overlapping aspects of arguments on the same subject while concealing other aspects. In this context, frames are aspects taken for/against a controversial issue. 
They build a dataset of premise-conclusion pairs from Debatepedia2 , and annotate each pair with a few key-phrases, that are then lemmatised and unified. As an example of unification, the terms unhealthy, non-smoker, and US business are transformed to health, smoker and business, respectively. Counting the number of labels for each pair, frames are considered as generic when the label is used for more than one argument, and as topic-specific when the label is in one argument pair only. The final dataset includes 7052 generic frame arguments (i.e., economics, public opinion, environment, feasibility, rights, democracy, crime, politics, security and safety), and 5274 specific frame arguments. They first cluster the documents into topics using TF-IDF features of the debate and argument components with k-means, then they remove the topic from these clusters by using the prominent terms for each topic extracted by C-TF-IDF3 and again cluster the results into frames. Analogously, Dumani et al. [9] consider the classification of stances and frames as a preliminary stage of argument clustering for the argument retrieval task. Also reframing, i.e., controllable text generation, has recently attracted attention in similar studies. The aim of reframing is to change the perspective of the argument with respect to its target audience and the aspects that might be more appealing to it. Chen et al. [7] train neural models to reframe sentences on the Media Frame Corpus. They apply a sentence-level blank filling method. Chakrabarty et al. [6] create a parallel dataset of arguments with mutual purpose but different framings. Then, they apply a text generation method along with textual entailment to reframe the arguments. 2 3 http://www.debates.org. C stands for cluster. S. Haddadan et al. In this section, we present the datasets we used in this paper for our experiments. – Media Frame Corpus: It consists of English news articles on 5 controversial topics (gun control, death penalty, same sex marriage, immigration and smoking) annotated with general frames using the 15 framing dimensions introduced in [4], on three different levels: 1) headline frame, 2) primary frame, and 3) span level [5]. The following example from Card et al. [5] depicts a piece of a news article from the 2006 editorial in the Denver Post on the topic of immigration annotated with headline and span frames. • [WHERE THE JOBS ARE]Economic [Critics of illegal immigration can make many cogent arguments to support the position that the U.S. Congress and the Colorado legislature must develop effective and well-enforced immigration policies that will restrict the number of people who migrate here legally and illegally.]Public opinion [It’s true that all forms of immigration exert influence over our economic and [cultural make-up.]Cultural identity In some ways, immigration improves our economy by adding laborers, taxpayers and consumers, and in other ways [immigration detracts from our economy by increasing the number of students, health care recipients and other beneficiaries of public services.]Capacity ]Economic [Some economists say that immigrants, legal and illegal, produce a net economic gain, while others say that they create a net loss.]Economic There are rational arguments to support both sides of this debate, and it’s useful and educational to hear the varying positions. 
The Inter-Annotator Agreement (IAA) of primary frames in three stages of annotation is reported between 0.4 and 0.6 based on Krippendorff's α, which is considered as moderate agreement [2]. However, due to the complexity of overlapping span-level annotation, the IAA on this task is at most 0.23 on one of the topics, which is in any case higher than chance agreement (for more details, see https://github.com/dallascard/media_frames_corpus).

– ElecDeb60To16: This dataset contains the transcripts of the speeches from the candidates during the final stages of the US presidential debates between the two major parties (in 1980 and 1992 independent candidates were also included in the final debates). The dataset contains 6666 speech turns from 41 different debates through these years (1960–2016). No annotation on frames/topics is available for this dataset. However, each debate has been segmented into sections where the moderator asks a new question. There are 467 sections in all debates; on average each debate contains approximately 12 sections. This dataset is annotated with argument components and relations, which we profit from in the methodology adopted in this paper [14].

Topic Modeling and Frame Classification for Arguments

In this section, we describe the two tasks we focus on to enrich the analysis of political argumentation. The first task consists in uncovering the topics discussed in the political debates data. Following the work of Ajjour et al. [1] for unsupervised issue-specific frame identification, we also apply a hierarchical topic modeling approach using sentence-based transformer language models to discover the framing of the arguments by the presidential candidates in the ElecDeb60To16 dataset. This experimental setting is discussed in Sect. 4.2. Secondly, we focus on identifying frames in arguments occurring in the same dataset of presidential debates. We adopt a frame identification approach by training a supervised model using transformer-based language models on the "Media Frame Corpus" to classify generic frame spans and primary frames. We later use this model to classify frames in the ElecDeb60To16 dataset. This experimental setting is discussed in Sect. 4.1.

4.1 Generic Frame Classification

Naderi and Hirst [20] applied different approaches on the Media Frame Corpus data to classify frames at sentence level, and achieved their best results with LSTM-based neural networks. In this paper, we employ transformer-based models like BERT [8] and RoBERTa [16] to address the task of generic frame classification and compare our results with those obtained by Naderi and Hirst [20]. It is noteworthy that their experiments were done on version v1.0 of the Media Frame Corpus, whilst we run ours on v2.0. To ensure a fair comparison, we implemented the experiments of Naderi and Hirst [20] and ran them on v2.0 of the dataset, applying the same data pre-processing. We use the pre-trained embeddings of BERT (uncased) and RoBERTa, with a softmax function over the labels for sequence classification. The fine-tuning process is done in 4 epochs using an Adam optimiser with a learning rate of 2e−5 and an epsilon parameter of 1e−8. The Media Frame Corpus contains frame annotations at article level (primary frame) and span level. We perform our experiments on both levels.
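A minimal sketch of the fine-tuning setup just described is given below, using the Hugging Face transformers library. The hyperparameters (4 epochs, Adam, learning rate 2e−5, epsilon 1e−8) and the number of classes (15 generic frames plus an irrelevant class) come from the text; the specific library, checkpoint names and batching code are our illustrative assumptions, not necessarily the authors' implementation.

```python
# Sketch (not the authors' code): fine-tuning BERT/RoBERTa for generic frame classification.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-uncased"   # or "roberta-base"; checkpoint names are illustrative
NUM_LABELS = 16                    # 15 generic frames plus an "irrelevant" class (cf. Table 2)

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=NUM_LABELS)
optimizer = torch.optim.Adam(model.parameters(), lr=2e-5, eps=1e-8)  # values reported in the text

def training_step(sentences, label_ids):
    """One gradient step on a batch of (sentence, frame-label-id) pairs."""
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    out = model(**batch, labels=torch.tensor(label_ids))  # cross-entropy over the frame labels
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return out.loss.item()

# Fine-tuning then loops over the training split for 4 epochs, as reported above.
```

The same loop applies to both span-level and primary-frame experiments; only the unit of classification (annotated span vs. whole article) changes.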
Furthermore, we perform a cross-topic experiment to evaluate to what extent the fine-tuned model is able to predict the frame on a topic which it has not seen before in the training data (albeit from the same dataset). This experiment has been conducted on both the primary frames of the news articles and the span-level frames.

4.2 Topic Modeling and Issue-Specific Frame Identification

In this second experimental setting, we address the topic modeling task on the ElecDeb60To16 dataset, taking advantage of debate features like the questions of the debates, the speeches and the sections identified in this dataset. Furthermore, we employ this topic modeling approach and the annotated argument components in this dataset to identify issue-specific frames used in candidates' arguments on various topics. More precisely, this experimental setting includes:

– Topics from questions: We assume that the questions asked by the moderators, panelists and audiences set the theme and determine the topic for the arguments made by candidates. The theme set for the debate is then discussed by the candidates using various frames to structure their arguments concerning the issue/topic/theme. For instance, in Example 2 below, the moderator explicitly sets the topic of the debate to "gun control laws".

2. Clinton-Trump, October 19, 2016:
WALLACE: [...] I want to focus on two issues that, in fact, by the justices that you name could end up changing the existing law of the land. First is one that you mentioned, Mr. Trump, and that is guns.

– Topics from Speeches: Occasionally candidates digress from topics set by moderators and set another topic for the rest of the debate. This argumentative technique is called agenda setting [3]. In order to retrieve topics initiated through this rhetorical technique, we also consider extracting topics from the speeches made by candidates during the debates.

– Frames from argument components: Frames are provided as a contextual setting for taking a stance for or against an argument. For instance, the topic of "tax laws" can be argued in different frames such as "middle class families" or "small business owners". The two argument components annotated on the debate of September 26th, 2008 in Example 3 below indicate two different frames provided by the two candidates (belonging to different parties) on the topic of "taxation law". Based on this evidence, we assume that extracting more detailed topics from the argument components may help retrieve the frames about the discussed topics.

3. McCain-Obama, September 26, 2008:
Obama: And I think that the fundamentals of the economy have to be measured by whether or not the middle class is getting a fair shake.
McCain: Senator Obama's secret that you don't know is that his tax increases will increase taxes on 50 percent of small business revenue.

Topic modeling [26] has been used for a long time along with bag-of-words features and Latent Dirichlet Allocation (LDA) or other matrix factorisation models. Recently, with the advancement of language models, topic modeling has also been adapted to the use of transformer-based models such as BERT [8], and later on sentence embeddings [22]. In order to obtain these sentence embeddings, Reimers and Gurevych [22] add a pooling layer (MEAN pooling as a default) on top of the pre-trained BERT output layer to get a fixed-size vector. Then they rely on Siamese and triplet neural networks, previously used in machine vision [23], to fine-tune the model.
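The embedding-and-clustering pipeline detailed in the rest of this section (sentence embeddings, UMAP dimensionality reduction, HDBSCAN clustering, c-TF-IDF term extraction) can be sketched as follows. Package names, the embedding checkpoint and all parameter values are illustrative assumptions for the sake of the example, not the authors' exact configuration.

```python
# Sketch (not the authors' code) of a sentence-embedding topic-modeling pipeline.
from sentence_transformers import SentenceTransformer
from sklearn.feature_extraction.text import CountVectorizer
import umap
import hdbscan
import numpy as np

def cluster_topics(documents, min_cluster_size=10):
    # 1. Encode questions, speeches or argument components into fixed-size vectors.
    encoder = SentenceTransformer("all-MiniLM-L6-v2")     # illustrative checkpoint
    embeddings = encoder.encode(documents)

    # 2. Reduce dimensionality with UMAP before density-based clustering.
    reduced = umap.UMAP(n_neighbors=15, n_components=5, metric="cosine").fit_transform(embeddings)

    # 3. Cluster with HDBSCAN; label -1 marks outliers.
    labels = hdbscan.HDBSCAN(min_cluster_size=min_cluster_size).fit_predict(reduced)

    # 4. c-TF-IDF-style term ranking: score each cluster's terms against all clusters.
    clusters = sorted(set(labels) - {-1})
    docs_per_cluster = [" ".join(d for d, l in zip(documents, labels) if l == c) for c in clusters]
    vec = CountVectorizer(stop_words="english")
    tf = vec.fit_transform(docs_per_cluster).toarray()    # term counts per cluster
    terms = np.array(vec.get_feature_names_out())
    idf = np.log(1 + (tf.sum() / len(clusters)) / tf.sum(axis=0))
    scores = (tf / tf.sum(axis=1, keepdims=True)) * idf
    top_terms = {c: terms[scores[i].argsort()[::-1][:10]].tolist() for i, c in enumerate(clusters)}
    return labels, top_terms
```

The top-ranked terms per cluster play the role of the topic/frame keywords discussed in the results (e.g., "oil, drilling, gas, ..." for energy-related frames).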
Reimers and Gurevych [22] use different objective functions (Classification, Regression, Triplet) depending on the task. The sentence embeddings resulting from this model are shown to improve the results of many semantic textual similarity tasks and clustering algorithms.

We apply some pre-processing steps on the text inputs (i.e., questions, speeches and argument components) before encoding them using the sentence embedding model proposed by Reimers and Gurevych [22]. This pre-processing includes replacing the names of the candidates in the debates by "candidate" or "other candidate" depending on the speaker. Speeches shorter than 16 tokens (word tokeniser function from the nltk library [17]) have been removed, as well as interruptions and cut-off speeches, such as "Can I respond?", "Thank you". Applying this pre-processing (based on the assumption that these speeches do not contribute to the topic distribution in the debates), approximately 25% of the speeches are set aside from the clustering data input. We then apply a topic modeling approach on the input, which is implemented with a density-based clustering method called HDBSCAN [18]. In this way, we cluster documents based on their encoded representations using sentence embeddings, following the implementation of Grootendorst and Reimers [13]. They reduce the dimensions of the input encoded by the sentence embeddings with UMAP [19], and they implement c-TF-IDF to automatically extract the prominent terms characterising each cluster. We adopt this architecture on the different levels to extract the topics and frames in the debates, employing the annotated argument structure. Our architecture is visualized in Fig. 1.

Fig. 1. Overall architecture of the clustering system implemented for topic modeling and issue-specific frame identification.

In this section, we report the obtained results for the two tasks presented in Sect. 4, and we discuss some visualisations that help to get a better understanding of the identified topics and frames.

Table 1. Multi-class classification results (accuracy) of different methods on sentences on the 5 most common frames: Economic, Legality Constitutionality Jurisdiction, Policy Prescription and Evaluation, Crime and Punishment, Political. [Columns: Death penalty, Immigration, Same sex marriage, Tobacco, Gun control, All; rows: BiLSTM (no pre-trained embeddings), BiLSTM (GloVe emb.), BiLSTM (GloVe emb., updated weights), GRU (GloVe emb.), GRU (GloVe emb., updated weights), LSTM (GloVe emb.) and the fine-tuned BERT and RoBERTa models; only one value (0.7555) survives this extraction.]

Naderi and Hirst [20] provide the results for generic frame classification both for all classes of generic frames and also the results of multi-class classification for the 5 most frequent frames, which cover more than 60% of the data, namely: Economic, Legality constitutionality and jurisprudence, Policy prescription and evaluation, Crime and punishment, and Politics. We also run our experiments on all these classes. Table 1 compares the results of the multi-class classification on the 5 most common frames using the methods applied by Naderi and Hirst [20] with the fine-tuning of pre-trained transformer-based models BERT and RoBERTa. It is worth noticing that results improve by at least 0.7% when applying frame classification on all topics. The results of the multi-class classification of news articles from the Media Frame Corpus on all frames are reported in Table 2. Table 3 shows the results of the primary frame classification task using the same methods.
Results in all experiments show an improvement using the fine-tuning of pre-trained BERT and RoBERTa. Table 4 shows the results of span-level frame identification, when the articles of a particular topic are left out from the training data and used only in the test set. Results indicate that span-level identification does not change drastically, whilst the primary frame identification seems to be very correlated with the topic used in the training data, leading to a substantial impairment in the results, which are therefore not reported. Due to the highly imbalanced number of frame classes, we report the weighted measures for F-score in all results.

Table 2. Multi-class classification results (in terms of accuracy) of different methods on sentences on all 15 frames, plus the irrelevant class. [Columns: Death penalty, Immigration, Same sex marriage, Tobacco, Gun control, All; rows include BiLSTM (GloVe emb., updated weights), GRU (GloVe emb., updated weights) and LSTM (GloVe emb., updated weights); only scattered values (0.6027, 0.6109) survive this extraction.]

Table 3. Multi-class classification results of primary frames of articles. [Precision, recall and F1 per topic (Death penalty, Immigration, Same sex marriage, Tobacco, Gun control, All) for the compared methods, including the fine-tuned BERT-base-uncased model; the cell values are present in the source but their row/column alignment is not recoverable here.]

We illustrate the results of topic modeling and issue-specific frame identification using some visualisation techniques to get a better understanding of the obtained results. In Fig. 2, the size of each bubble represents the topic frequency, while the colour is given by the party that uses that topic the most. The figure on the left shows how prominent the topic of "Iraq and Afghanistan war" is in 2004, while the figure on the right shows that the topic of "schools and education" is discussed twice as much by the Democratic candidate as by his opponent in 1960. This visualisation also reveals the participation of each candidate in each topic (e.g., in the second figure, of the 16% of the speeches on "school and education", 10.71% were from Kennedy and only 4.11% from Nixon). Figure 3 shows the distribution of frames on the topic of abortion in 1984. Two of the most frequent frames on the topic of abortion, represented by the topic words "abortion, women, life, child", are "church, faith, religion, religious, catholic, prayer, separation, practice, state" and "abortion, abortions, life, prolife, unborn, rape, birth, child, reduce, incest", extracted from the argument components. Figure 4 also illustrates the highest-ranking frames on the topic of energy in 1980 to be "oil, drilling, gas, offshore, gasoline, dependence, pipeline, production, natural", "environment, clean, environmental, water, air, pollution, toxic, waste, standards" and "energy, solar, independence, wind, coal, policy, alternative, gas, independent". The keywords dependence and independence refer to the energy production in the U.S. being dependent on other countries.

Table 4.
Multi-class classification results of sentences of articles taking one set of articles as test set after fine-tuning. Test set Precision Recall F-score Size of test set Death penalty 0.6084 0.5958 0.5413 0.5251 0.5901 0.5919 Gun control 0.6027 0.6070 Same sex marriage 0.6546 0.6521 0.6486 Fig. 2. Visualisation of the distribution of topics in 2004 and 1960. Fig. 3. Distribution of frames over the topic of Abortion in 1984. Topic Modelling and Frame Identification for Political Arguments Fig. 4. Distribution of frames over the topic of Energy in 1980. Concluding Remarks In this paper, we presented a new architecture to automatically identify and classify the topics and frames in political debates, namely the debates of the US presidential campaigns from 1960 to 2016. Our extensive empirical evaluation shows good results, outperforming standard baselines and similar approaches [20]. Finally, we proposed some intuitive visualisations of the extracted topics and frames which allow to get a better understanding about the nuances of the argumentation. Future work perspectives include, in addition to an improvement in the obtained results, a time-guided analysis of the evolution of the topics and frames in the U.S. presidential debates, with the goal to highlight how the way politicians discuss these topics has changed over time. Acknowledgments. This work was partly supported by the French government, through the 3IA Côte d’Azur Investments in the Future project managed by the National Research Agency (ANR) with the reference number ANR-19-P3IA-0002. This work was partly supported also by EU Horizon 2020 project AI4Media, under contract no. 951911 (https://ai4media.eu/), and MIREL (http://www.mirelproject.eu/), under contract no. 690974. Shohreh Haddadan hereby acknowledges that this research is supported by the Luxembourg National Research Fund (FNR) (10929115). References 1. Ajjour, Y., Alshomary, M., Wachsmuth, H., Stein, B.: Modeling frames in argumentation. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2922–2932. Association for Computational Linguistics (2019). https://doi.org/10.18653/v1/D19-1290, https://www. aclweb.org/anthology/D19-1290 2. Artstein, R., Poesio, M.: Inter-coder agreement for computational linguistics. Comput. Linguist. 34(4), 555–596 (2008). https://doi.org/10.1162/coli.07-034-R2, https: S. Haddadan et al. 3. Boydstun, A.E., Glazier, R.A., Pietryka, M.T.: Playing to the crowd: agenda control in presidential debates. Polit. Commun. 30(2), 254–277 (2013). https:// doi.org/10.1162/coli.07-034-R2, https:// www.tandfonline.com/doi/abs/10.1080/ 10584609.2012.737423 4. Boydstun, A.E., Gross, J.H., Resnik, P., Smith, N.A.: Identifying media frames and frame dynamics within and across policy issues. In: New Directions in Analyzing Text as Data Workshop, London (2013) 5. Card, D., Boydstun, A.E., Gross, J.H., Resnik, P., Smith, N.A.: The media frames corpus: annotations of frames across issues. In: Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pp. 438–444. Association for Computational Linguistics (2015). https://doi.org/10. 3115/v1/P15-2072, https://www.aclweb.org/anthology/P15-2072 6. Chakrabarty, T., Hidey, C., Muresan, S.: ENTRUST: argument reframing with language models and entailment. 
In: Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 4958–4971. Association for Computational Linguistics (2021). https://doi.org/10.18653/v1/2021.naacl-main.394, https://aclanthology. org/2021.naacl-main.394 7. Chen, W.F., Al Khatib, K., Stein, B., Wachsmuth, H.: Controlled neural sentencelevel reframing of news articles. In: Findings of the Association for Computational Linguistics: EMNLP 2021, pp. 2683–2693. Association for Computational Linguistics (2021). https://aclanthology.org/2021.findings-emnlp.228 8. Devlin, J., Chang, M., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. In: Proceedings of NAACL-HLT 2019, pp. 4171–4186 (2019). https://doi.org/10.18653/v1/n19-1423 9. Dumani, L., Wiesenfeldt, T., Schenkel, R.: Fine and coarse granular argument classification before clustering. In: Proceedings of the 30th ACM International Conference on Information & Knowledge Management, CIKM 2021, pp. 422–432. Association for Computing Machinery (2021). https://doi.org/10.1145/3459637. 3482431 10. Entman, R.M.: Framing: toward clarification of a fractured paradigm. J. Commun. 43(4), 51–58 (1993). https://doi.org/10.1111/j.1460-2466.1993.tb01304.x, https:// academic.oup.com/joc/article/43/4/51-58/4160153 11. George Lakoff: The ALL NEW Don’t Think of an Elephant!: Know Your Values and Frame the Debate. Chelsea Green Publishing (2014). google-Books-ID: FSqPBAAAQBAJ 12. Goffredo, P., Haddadan, S., Vorakitphan, V., Cabrio, E., Villata, S.: Fallacious argument classification in political debates. In: Raedt, L.D. (ed.) Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI 2022, Vienna, Austria, 23–29 July 2022, pp. 4143–4149. ijcai.org (2022). https:// doi.org/10.24963/ijcai.2022/575 13. Grootendorst, M., Reimers, N.: MaartenGr/BERTopic: v0.9.3 - quickfix. Zenodo (2021). https://doi.org/10.5281/zenodo.5574296 14. Haddadan, S., Cabrio, E., Villata, S.: Yes, we can! mining arguments in 50 years of US presidential campaign debates. In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 4684–4690. Association for Computational Linguistics (2019). https://doi.org/10.18653/v1/P19-1463, https://www.aclweb.org/anthology/P19-1463 15. Hartmann, M., Jansen, T., Augenstein, I., Søgaard, A.: Issue framing in online discussion fora. In: Proceedings of the 2019 Conference of the North American Chapter Topic Modelling and Frame Identification for Political Arguments 16. 17. 18. of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 1401–1407. Association for Computational Linguistics (2019). https://doi.org/ 10.18653/v1/N19-1142, https://aclanthology. org/N19-1142 Liu, Y., et al.: Roberta: a robustly optimized bert pretraining approach. CoRR abs/1907.11692 (2019). https://arxiv.org/abs/1907.11692 Loper, E., Bird, S.: NLTK: the natural language toolkit (2002). arXiv:cs/0205028, https://arxiv.org/abs/cs/0205028 McInnes, L., Healy, J., Astels, S.: HDBSCAN: hierarchical density based clustering. J. Open Source Softw. 2(11), 205 (2017). https://doi.org/10.21105/joss.00205, https://joss.theoj.org/papers/10.21105/joss.00205 McInnes, L., Healy, J., Melville, J.: UMAP: uniform manifold approximation and projection for dimension reduction (2020). 
arXiv:1802.03426 [cs, stat], https:// arxiv.org/abs/1802.03426 Naderi, N., Hirst, G.: Classifying frames at the sentence level in news articles. In: Proceedings of the International Conference Recent Advances in Natural Language Processing, RANLP 2017, pp. 536–542. INCOMA Ltd. (2017–09). https://doi.org/ 10.26615/978-954-452-049-6_070 Nguyen, V.A., Boyd-Graber, J., Resnik, P., Miler, K.: Tea party in the house: a hierarchical ideal point topic model and its application to republican legislators in the 112th congress. In: Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 1438–1448. Association for Computational Linguistics (2015). https://doi.org/10.3115/v1/P15-1139, https:// aclweb.org/anthology/P15-1139 Reimers, N., Gurevych, I.: Sentence-BERT: sentence embeddings using Siamese BERT-networks. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 3982–3992. Association for Computational Linguistics (2019). https://doi.org/10.18653/v1/D191410, https://www.aclweb.org/anthology/D19-1410 Schroff, F., Kalenichenko, D., Philbin, J.: FaceNet: a unified embedding for face recognition and clustering. In: 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 815–823. IEEE (2015). https://doi.org/10.1109/ CVPR.2015.7298682, https://ieeexplore.ieee.org/document/7298682/ Swanson, R., Ecker, B., Walker, M.: Argument mining: extracting arguments from online dialogue. In: Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pp. 217–226. Association for Computational Linguistics (2015). https://doi.org/10.18653/v1/W15-4631, https://aclanthology. org/W15-4631 Tsur, O., Calacci, D., Lazer, D.: A frame of mind: using statistical models for detection of framing and agenda setting campaigns. In: Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 1629–1638. Association for Computational Linguistics (2015). https:// doi.org/ 10.3115/v1/P15-1157, https://aclweb.org/anthology/P15-1157 Xia, L., Luo, D., Zhang, C., Wu, Z.: A survey of topic models in text classification. In: 2019 2nd International Conference on Artificial Intelligence and Big Data (ICAIBD), pp. 244–250 (2019). https://doi.org/10.1109/ICAIBD.2019.8836970 Substitute Plastic Film with Kraft Paper in Automatic Pallet Wrapping: An AI Pipeline Eleonora Iotti1(B) , Alessandro Dal Palù1 , Gianluca Contesso2 , and Francesco Bertinelli2 1 Department of Mathematical, Physical and Computer Sciences, University of Parma, Parco Area delle Scienze 53/A, 43124 Parma, Italy {eleonora.iotti,alessandro.dalpalu}@unipr.it ACMI S.p.A., Via G. Di Vittorio, 60, 43045 Fornovo di Taro, Parma, Italy {gianluca.contesso,francesco.bertinelli}@acmispa.com Abstract. This paper presents and discuss an overview of an AI pipeline to analyze the effects of substituting plastic film with Kraft paper in the tertiary packaging, i.e., in the external envelope of a pallet. Since there is no prior knowledge about paper wrapping yet, the goal is to understand the physics of the load unit—wrapped in paper—when subject to horizontal accelerations. 
This permits to study and analyze its rigidity and robustness to permanent deformations and/or excessive shifting during road or rail freight, to avoid damages and ripping of the envelope. The idea behind our AI pipeline is to virtually simulate such a situation, to precisely identify critical use cases, and eventually suggest a correction in the wrapping format. The first gain in using such an approach is to drastically reduce the number of physical tests needed to build a solid base knowledge about the behavior of Kraft paper enveloping the pallet during motion. The proposed pipeline consists of three phases: (i) data collection from real tests, (ii) modeling of the simulation, fitting relevant parameters between the actual test and the simulated one, and (iii) performing of virtual experiments on different settings, to suggest the best format. Computer vision and machine learning techniques are employed to accomplish these tasks, and preliminary results show encouraging performances of the proposed idea. Keywords: Multi-physics simulation · Machine learning objects tracking · Automatic pallet wrapping · Multiple For some years now, we have witnessed the rise of a global movement pointing towards a more sustainable future. Such campaign caused a renewed interest Project entitled “Machine learning to substitute LLDPE plastic film with Kraft paper in automatic pallet wrapping,” supported by ACMI S.p.A. and funded with D.M. 10.08.2021 n.1062 on FSE REACT-EU, by Ministero dell’Università e della Ricerca (MUR), under the Programma Operativo Nazionale (PON) “Ricerca e Innovazione” 2014–2020–Azione Green. c The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 A. Dovier et al. (Eds.): AIxIA 2022, LNAI 13796, pp. 282–296, 2023. https://doi.org/10.1007/978-3-031-27181-6_20 An AI Pipeline to Substitute Plastic w. Paper in Automatic Pallet Wrapping from companies and institutions, that resulted in the search for novel technologies to reduce pollution and plastic usage, and in the advance of strategic actions to change their course. Regarding the public institutions, this translates into long-term objectives and into the development of resolutions and plans like, for example, the European Strategy for Plastics in a Circular Economy [9]. In particular, such a strategy was put in place in January 2018, and still applies in the broader context of the European Green Deal [10] which consists of a set of proposals, actions, and funding during the five-year term 2019–2024. The EU strategy for plastics imposed a crucial challenge to companies and industries working in the field of packaging, especially for food and beverages. As a matter of fact, the majority of primary and secondary packaging (respectively the single product actual container and their grouping into a larger set for handling purposes) for food and beverages consists of multi-layered plastic, which is not recyclable, but still plays an important role in food safety compared to other available materials. A recent survey on the effects of the EU plastic resolution pointed out these issues, and definitely stated that there are currently “no viable alternatives” that could ensure the same level of safety and avoidance of food waste [31], and at the same time those conditions, lack of safety and risk to waste food, are sufficient to produce bad environmental impacts. However, there are also evidences of virtuous examples of plastic elimination or at least reduction and recycling in the primary packaging of food [19]. 
Despite these discussions, attempts, and efforts on primary and secondary packaging, nowadays the LLDPE (Linear Low Density Polyethylene) stretch film and the heat-shrink wrap are still the best available choices for tertiary packaging of food and beverages (the enclosure or wrapping of stacked secondary packaging onto a pallet for safe transportation and logistics). During years, the amount of plastic material and its thickness were constantly decreased, and the use of recyclable plastic for wrapping made possible in some cases to comply with EU requirements. Managing plastic materials and adapting them to shrink around the loaded pallet is a well-known automation task that is currently efficiently performed by end-of-line packaging machines. Nevertheless, due to this knowhow and due to some of LLDPE main properties, like resistance to water and UV rays, to the best of our knowledge, there were no attempts worldwide to automatic pallet wrapping with sustainable materials. ACMI S.p.A.1 is an Italian manufacturer of high-tech bottling and packaging lines, specialized for beverages and food. ACMI has international relevance, serving both national companies and large multinational groups, such as CocaCola CompanyTM . The recent work of ACMI is significant in the open discussion about plastic, since their novel “green” approach to the end-of-line proposed to replace the external wrapping material from LLDPE to Kraft paper (a recyclable and biodegradable paper with specific elastic strength). This represents the first attempt in substituting plastic tertiary packaging for food and beverages industry. A completely plastic-free end-of-line opens up to a series of engineering and automation challenges which have yet to be explored. 1 E. Iotti et al. One of these challenges is to ensure that the wrapped envelope could withstand to road or rail freight, thus guaranteeing safety for truck workers and avoiding loss of product. This aspect is of key importance in the plastic-paper transition of tertiary packaging and it requires a thorough understanding of the paper behavior in relation to safety aspects, such as, e.g., how many layers of paper are needed for wrapping and how they should be stratified, how much pulling tension has to be applied to paper while wrapping, what is the optimal pallet loading packaging schema for better stability, and so on. Such knowledge should, in turn, give back to engineers some hints for the actual development of the automatic wrapping machine and its controlling software. 1.1 A Note on Methodology and Purposes of This Work The growth and increasing impact of Artificial Intelligence (AI) technologies [24], in almost every aspect of human development opens up also to the challenges posed by the field of automation [37,42]. The research question posed by ACMI’s innovative idea (to wrap pallets of food or beverages products within Kraft paper instead of plastic) requires rethinking the design of so-called wrapping formats, i.e., those series of parameters and indications given to the pallet stretch wrapper to perform the actual wrapping of products. Therefore, the long-term goal of the research project is to develop an intelligent automatic recommendations system that is able to suggest the safer, robust, and most reliable wrapping format to the paper wrapping machine. To pursue such an objective, preliminary studies have to be done with the help of other machinery. 
In our case, we make use of an in-house special testing bench, which is able to reproduce the actual horizontal acceleration of the transport with the load unit carried on, to control the dynamics parameters and to record a video of the test. This paper focuses on the short-term goal of using such raw data to incrementally build enough knowledge to virtually simulate the behavior of any physical setup, while minimizing the number of actual experiments needed to gain useful information. In summary, the proposed AI pipeline consists of a low-level computer vision system to extract raw data, a realistic multi-body simulation enhanced with a machine learning method to fit the concrete behavior of the pallet during the test, and the use of such a simulation to perform virtual tests and give feedback on a possible improvement of the wrapping format. The approach allows to estimate and simulate any wrapping, from plastic to paper and even to simulate an arbitrary number of wrapping overlapping. Input of the first phase are raw video recordings of load units subject to horizontal accelerations. Those videos are analyzed with standard computer vision techniques. The goal of such techniques is to extract the centers of gravity of the wooden pallet and of the bundles (secondary packages) over the pallet. Moreover, we also extract rotations of each package, in order to detect instability and deformations. The second phase is about the development of a simulation of the test case, by using a multi-physics engine that models the testing machine, its acceleration, and the load unit. At the beginning of this phase, the simulation cannot be realistic, due to the lack of physical parameters related to secondary/tertiary packaging (i.e., static and An AI Pipeline to Substitute Plastic w. Paper in Automatic Pallet Wrapping dynamic friction forces at work). We aim at learning those parameters, and, in turn, the global behavior of the pallet during acceleration, by matching the ideal conditions of the simulation to the actual measurements of the centers of gravity obtained from the vision system. Such a task employs machine learning algorithms to match the actual behavior. Once the physical parameters are accurate enough, the third phase proceeds to identifying critical issues of a specific wrapping, e.g. points with a high risk of ripping of the paper, and suggest corrections. This paper overviews the whole pipeline, covering in particular the implementation details of the first and second phases of the project. Pallets wrapped with paper, as those with plastic envelope, must comply with the European Road Worthiness Directive in order to guarantee the security during rail or road freight. In such a field, safety and reliability are expressed in terms of the European Standard EUMOS 40509 [3], that aims at quantifying the rigidity of the pallet when it is subject to a force (due to an acceleration) along a direction. The situation that aims to be simulated and investigated by such a standard is the motion dynamic of a truck loaded with one or more EUR pallets. The rigidity of the load, in fact, impacts the transport effectiveness, and measuring such quantity is needed to prevent permanent deformations or excessive shifting of the load during transportation, in other words, the stability of the load unit [41]. Moreover, such rigidity and robustness directly impact on the holding strength of the external wrapping, which in turn could be deformed or ripped during motion. 
Excessive deformations and/or shifting of the loaded units result in a lack of stability of the overall truck and in an unsafe transportation. In detail, the Acceleration Bench Test, in line with EUMOS 40509, defines a test load unit which is typically a pallet with a number of layers of products on it and wrapped with plastic film (or Kraft paper, in our case), and that can be oriented in the LP-direction, i.e., the long side of the pallet is parallel to acceleration direction, or in the BP-direction, i.e., the short side of the pallet is parallel to acceleration direction. Such a test unit with its orientation is then subject, using a special testing machinery detailed later, to an acceleration impulse that immediately stops and gives rise to a constant deceleration, until the load unit stops. In a real setting, the acceleration impulse is modeled as a constant acceleration that lasts for half a second. Typical tests are performed with constant accelerations from 0.2 g up to 0.5 g, which is the acceleration to be supported, as stated by EUMOS 40509. The acceleration may cause permanent deformations and elastic deformations, which are respectively the residual deformations of the load unit after the test and the deformation of the load unit during the test. In the latter case, the tilting of the entire load unit during test is not considered as an elastic deformation, since the wooden pallet is taken as reference for the coordinate system to measure such deformations. Acceleration Bench Test of EUMOS 40509 defines the test setup and some test acceptance criteria, as follows: (i) the permanent displacement of all parts of E. Iotti et al. Fig. 1. The ESTL Machine in the R&D Department of ACMI S.p.A. The acceleration bench holds on a sleight where a wooden pallet is loaded with two layers. Fig. 2. Centers of gravity of pallet and packages, returned by MOSSE tracker the test load unit (after the test) must not exceed 5% of the total height of load unit; (ii) permanent displacement in the lowest 20 cm of the test load unit is to be less than 4 cm on the wooden pallet; (iii) the elastic displacement of all parts of the test load unit (during the test) must not exceed 10% of the total height of load unit; (iv) there must be no visible structural damage and/ or leakage of products at the end of the test. The development new wrapping technologies must cope with the compliance of such criteria. 2.1 The ESTL Machine Fig. 3. Example of static (on the left) and dynamic (on the right) deformations, recorded by ESTL vision system after the test and during the test. Tests are performed with a special testing machine, called here ESTL Machine from the name of the manufacturer company [2]. The pallets are loaded on a movable platform, called sleight, on an acceleration bench. An example is illustrated in Fig. 1. Once the pallet is loaded, the acceleration bench can generate a constant horizontal acceleration impulse, An AI Pipeline to Substitute Plastic w. Paper in Automatic Pallet Wrapping which moves the sleight with the load unit on. The acceleration can be set between 0 m/s2 and 10 m/s2 in steps of 0.5 m/s2 . The duration of the acceleration is at least 500 ms. These parameters permit to simulate road transport events such as diverting maneuvers and/or emergency stops. Usually the pallets are tested at different acceleration levels: tests start at a low acceleration level of 0.2 g or 0.3 g (about 1.962 m/ss and 2.943 m/s2 , respectively). Then, if the result is successful (w.r.t. 
EUMOS 40509), the constant acceleration impulse is increased by a value of 0.1 g, heading for the legal requirement of 0.5 g for load safety. While testing the acceleration impulse, high speed recordings are made. Three markers are attached to the load unit and two markers on the sleight, as in Fig. 1, so that the ESTL Machine vision system can detect fluctuations of the pallet. In fact, to detect the plastic (or static) deformation of the load unit after the test, measurements are taken at three different points. Those measurements are made before and after the test. The difference between them gives an indication of the plastic deformation. The elastic (or dynamic) deformation, instead, is measured at a height of approximately 1 m, with an ultrasonic sensor. Then, video recordings are annotated by ESTL vision system with the detected boundaries of the load over the wooden pallet, the value (in mm) of the current and maximum deformation and its angle. Figure 3 shows an example of the annotations of the ESTL system on the video recording, denoting the static and dynamic deformations happened during and after the test. The actual acceleration and displacement of the sleight is known, since the ESTL machine employs also an X-Y accelerometer, and the acceleration profile data are recorded and plotted as well, as shown in Fig. 4. The plot shows the detected acceleration and deceleration along the x and y axes in a 0.3 g test, respectively in orange and black colors (it is worth noting that acceleration in the y direction is almost zero for the whole duration of the tests). Theoretical values of speeds (magenta) and displacements (blue) are also plotted. Multiple Tracking of Bundles The first goal of this work is to extract relevant information regarding the behavior of the pallet and bundles during motion. This phase is necessary to understand the dynamic of the sleight-pallet-load system of the ESTL machine in terms of visible displacements of each part of such a system. We want to identify the actual displacement of the load unit, i.e. the wooden pallet and the layers of product, w.r.t. the sleight, and the relative displacements between each couple of elements of the load unit. Each unit of product is called a bundle, and the traceable bundles are only those in front of the camera. In general, the displacement of the pallet and the traceable bundles could vary according to the type of pallet, its orientation, the type of product, and also the primary and secondary packaging make their contribution on the dynamic of the system. Moreover, the possible presence of paper/plastic interlayer between the layers of bundles also impacts the amount of displacement. It can be noticed that the motion of the E. Iotti et al. Fig. 4. Examples of acceleration profiles measured by ESTL machine, with a 0.3 g acceleration impulse setting. (Color figure online) entire load unit is delayed w.r.t. the motion of the sleight, because of friction between the two objects. With the sleight displacement that serves as reference, we can compute the difference between such a displacement and the one of the load unit. The same reasoning could be made to obtain the relative differences of pallet and bundles displacements, and of each layer of bundles. Such differences are strongly related to friction coefficients (static and dynamic) of the pallet over the sleight, of each bundle over the pallet, and of bundles with each other. 
To the best of our knowledge, we are not aware of similar approaches in the context of estimating friction constants and parameters for simulation of pallet dynamics. In literature there are plenty of AI approaches to process video raw data, in order to detect objects and their positions. Such approaches could be roughly divided into standard computer vision ones and deep learning approaches. Deep neural networks and learning algorithm outperform standard techniques in almost any mainstream recognition/segmentation/detection task, like recognition of common objects [20,21,27,34,35], segmentation of a typical external scene [36], tracking of pedestrians from surveillance camera [17], human pose estimation [38], recognition of handwritten text [28]. Unfortunately, except for some notable examples [33], yet not mature enough for video processing, deep learning methods usually require a huge amount of homogeneous data, that have to be carefully annotated in case of supervised learning. This is one of the reasons why, by deviating from mainstream applications, deep neural networks are difficult to train and not always successful at generalizing information. Moreover, recent criticism pointed out that such networks are often treated as black boxes full of parameters which do not have a intelligible semantics, thus a deep network could not explain its decisions [30]. There are cutting-edge works that try to achieve a natural language explanation from such systems, but those results are still subject of discussion in the field of eXplainable Artificial Intelligence (XAI) [14]. Explainability is of course an appreciable feature of an An AI Pipeline to Substitute Plastic w. Paper in Automatic Pallet Wrapping AI system, but in particular it is crucial for those high risk safety systems. The topic of XAI in critical systems is being addressed by the European Commission, which developed an AI strategy to rule the trustworthiness of AI systems and enhance the excellence in such field [11,12]. In our case, the first phase of the AI pipeline does not require to produce transparent processes or explain its outputs, but those features will be useful in the last phase of the project, where suggestions and recommendations to correct the wrapping format have to be delivered to the final user. Despite this, following a deep learning plan for the development of the computer vision system remains unfeasible due to the requirement of a vast amount of data. Our raw data, in fact, are produced by physical experiment with the acceleration testing machine, and each experiment has a high cost in terms of time and power consumption. Moreover, even if we would like to use pre-trained networks, mainstream datasets on which those networks are trained are too general, and making efforts to switch from their to our domain could result in a global lowering of the network performances (multi-domain methods are still subject of investigations). Therefore, our system was crafted for the specific task, with the aid of standard computer vision algorithms and techniques. Our computer vision system consists of a program which processes the video frame by frame. 
We employ (i) automatic methods to identify a region of interest (ROI), based on the prediction of the position of the load unit given by the acceleration profile data of the test; (ii) a template matching technique on the ROI to detect bundles; (iii) a set of multi-tracking algorithms tailored on the specific task, with the goal to follow the bundles during motion; (iv) optical flow detection methods to measure the actual displacements and rotations of packages. In order to help the detection of bundles in raw videos we input the system some general template images to be matched to the pallet and to the visible bundles. These templates are subject to standard augmentation by stretching, rotating, cutting the reference image. Given the dimensions of the pallet template and of a bundle template, together with the number of layers loaded on the pallet, the ROI is obtained. In fact, in the few first frames of the video, the load unit is approximately in the center of the visual. Displacement data from ESTL machine, that were obtained from the acceleration profile depicted in Fig. 4, are used to slightly move the ROI frame by frame. This process is correct only at the very beginning of video processing, since the perspective is not much noticeable. The ROI is maintained until the template matching algorithm (a normalized cross-correlation between templates and pixels of the ROI) recognizes all bundles, and the tracking is ready to start. To prevent the explosion of computational times, a Non-Maximum Suppression (NMS) algorithm [32] follows the template matching. We used several state-of-the-art methods for multi-object tracking: from basic Discriminative Correlation Filter with Channel and Spatial Reliability (CSRT) [29] and Kernelized Correlation Filter (KCF) [23], to the AdaBoost-based Boosting tracker [22], and Multiple Instance Learning (MIL) algorithm [15], but also TLD (Tracking-learning-detection) [26], Median-Flow [25], and Minimum Output Sum of Squared Error (MOSSE) [16] trackers have been tested. When tracking starts, E. Iotti et al. the ROI detaches from the acceleration profile models (which in the meantime became more and more incorrect), and takes the center of the tracked bundles as a reference. We consider 1 frame every 3, to reduce the computation burden. If the tracking loses a bundle for some frames, the template matching phase is repeated (inside the new ROI). Finally, a dense optical flow is computed using an algorithm based on [18], for all points in the frame. Then the vector field of the optical flow is converted in polar coordinates to trace rays and rotations of groups of pixels, for each bundle and the wooden pallet. Optical flow thus retrieves information about bundles displacements and their bounces/tilting/turning. For each bundle and the wooden pallet, an approximation of the center of gravity is computed, by taking a weighted mean of all displacements centered in the center of the bounding box of the tracked object. Figure 2 shows an example of results, where colored dots are the computed centers of gravity of bundles and the wooden pallet. Developing the Simulation The second phase of the AI pipeline consists of the development of a simulation of acceleration tests and tuning of such a simulation on ‘real’ data from video recordings. 
Those simulations are called multi-(rigid)-body dynamics simulations, and there are many commercial and free software capable of more or less accurate reproduction of rigid bodies motion, such as AutoDesk AutoCAD [1], MathWorks Simscape Multibody [8], NVIDIA PhysX [6], and so on. Each of them differs from the others by the way it manages frictions, velocity, particles motion, using specific formulations. For our purposes we need an engine that allows the user to model wrapping envelopes, and that it is flexible enough to shape the parameters of such an envelope. Kraft paper, and in general, paper dynamics are still an open challenge for those types of engines. On the other hand, our preliminary work, aims at reproducing the dynamics of un-wrapped load units (wooden pallet and bundles) first. A closed envelope is a complex system, thus the understanding of the global system dynamics is subject to what is happening to single packages under the cover. The choice fell on an open-source multi-physics simulation engine called Project Chrono [13,40], developed by University of Wisconsin (Madison) and University of Parma, allows the positioning of some rigid bodies on a scene, along with various types of links between them. Each body can be a simple shape (e.g. a box, a sphere) and/or a user defined 3D model. Each body has a center of gravity, a mass, a moment of inertia and a collision model. Masses of pallets and bundles are easily obtainable from real measurements. Initial centers of gravity of objects depend on their shape and their initial position in the simulation. We choose to approximate bundles with boxes, so the center of gravity could be easily calculated. Then, a linear motor engine is initialized to model the sleight. Chrono has a facility to create functions for vertical and/or horizontal motion, and in our case a constant x acceleration could be modeled by imposing the ramp length and height (of the speed function), the ending time of the acceleration An AI Pipeline to Substitute Plastic w. Paper in Automatic Pallet Wrapping and the starting time of the deceleration. All such parameters could be easily obtained from the acceleration profile provided by the ESTL machine. In Chrono engine, each body has its own material properties. The ESTL machine, the wooden pallet, and each of the bundles are composed of different materials. For each object/material a value of static friction and a value of kinetic friction must be set. Since these values are unknown, we employ a machine learning method to approximate them. Input data are the extracted positions (t) (t) (centers of gravity) pi (t) = (xpi , ypi ) of each relevant object i visible in the video recordings of ESTL machine at time t. The predicted outputs are the (t) (t) (t) computed positions cj (t) = (xcj , ycj , zcj ) of all the objects in the simulation at time t. Let us note that a computed position also depends on static and kinetic friction coefficient of its material, cj (t) = cj (t, μs , μk ). Of the latter, only the visible ones should be compared to extracted data, i.e. the line of bundles in front of the camera. Being a constant position on the z axis, we consider (t) (t) only ci (t) = (xci , yci ). The objective is to minimize the distance between real position and simulated positions, for each time instant t, with a L2 loss: L2 (pi , ci ) = pi (t) − ci (t, μs , μk ) where T is the final time instant, i.e. the last frame of the video. 
The idea is to apply the gradient descent algorithm with the cost function (1) to find $\mu_s$ and $\mu_k$. However, the input data points are few (∼50 positions for each bundle) and also very noisy due to the previous processing steps (matching, tracking, optical flow detection). A statistical method to denoise the data is the Exponential Moving Average (EMA), which defines a new sequence from the raw data depending on the value of a parameter $\beta \in (0, 1)$; values of $\beta$ closer to 1 produce smoother sequences. The machine learning method is a variation of the gradient descent algorithm which applies an EMA to the gradient sequence:

$$v_0 = \nabla_0 L_2(p_i, c_i), \qquad v_k = \beta\, v_{k-1} + (1 - \beta)\, \nabla_k L_2(p_i, c_i) \qquad (2)$$

where $\nabla_k L_2(p_i, c_i)$ is the loss gradient at step $k$ of gradient descent. Each step moves the values of $\mu_s$ and $\mu_k$ toward the minimum of the loss function, as follows:

$$\mu_*^{(0)} \text{ randomly initialized}, \qquad \mu_*^{(k)} = \mu_*^{(k-1)} - \eta\, v_k \qquad (3)$$

where $\eta$ is the learning rate of the gradient descent method. In the deep learning field, such a method is known as gradient descent with momentum [39]. The computer vision system and the virtual simulation are developed in Python 3.8.12. We used the Python versions of the OpenCV [5] open-source library and of Project Chrono, PyChrono [7]. The Irrlicht [4] engine renders the simulation. Both programs run in an Anaconda environment on a laptop with a 6-core 10th-gen. i7 CPU, base speed 1.61 GHz up to 3.60 GHz, and 16 GB of RAM. We chose bundles of six Coca-Cola Zero™ and kept the same product for all the experiments. Different products would have different shapes and dynamics, making it impossible to compare experiments with each other. Nevertheless, we plan to extend our tests to different types of products in the future. We performed 3 single-layer tests, with accelerations 0.2 g, 0.3 g, and 0.4 g. The orientation of the load unit was LP, and the layout of products over the pallet is the first layer of a columnar one. Later, double-layer experiments were performed. The first three tests use a columnar layout: the first test has no interlayer between layers, the second includes a paper one, and the third a plastic one. We then tested two types of symmetric cross layouts, the first one including a deliberate fracture line that negatively impacts load stability. Plastic and paper interlayers were also considered in combination with cross layouts, resulting in six more experiments. All double-layer experiments were performed with an acceleration of 0.3 g. Recordings of the experiments last 8–10 s, at a rate of 20 frames per second, with a frame size of 1920 × 1200. Table 1. CPU execution times of tracking algorithms on the whole video, for tests in absence of interlayers, columnar layouts and 0.3 g constant acceleration. CPU execution time [seconds] Tracker name Single-layer Double-layer Boosting MedianFlow MIL MOSSE TLD 46.719 263.781 56.484 453.312 Table 1 shows the CPU execution times of the computer vision system with the different tracking algorithms, and Fig. 5 shows visual results of some of these elaborations (the faster ones). Using the Boosting tracker, the retrieved centers of gravity were then passed to the machine learning algorithm to tune the simulation. Figure 6 shows two different time instants of a simulation that reproduces the behavior of a single-layer unit with LP orientation. From initial values $\mu_s = 0.5$ and $\mu_k = 0.5$ for the bundles, and $\mu_s = 0.5$ and $\mu_k = 0.06$ for the pallet, the final values decrease to $\mu_s = 0.1$, $\mu_k = 0.001$ for the bundles and $\mu_s = 0.15$, $\mu_k = 0.04$ for the pallet.
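A minimal sketch of the update rules in Eqs. (2)–(3) is shown below; it can be fed the `objective` function sketched earlier via a lambda. Because the simulation is treated as a black box here, the gradient with respect to $(\mu_s, \mu_k)$ is approximated by central finite differences, which is an assumption of this sketch rather than a detail stated in the paper, as are the step count, learning rate, and initialization range.

```python
# Sketch of gradient descent with momentum (EMA of gradients) for fitting (mu_s, mu_k).
import numpy as np

def fd_gradient(loss, mu, eps=1e-3):
    """Central finite-difference approximation of the loss gradient (assumed, not from the paper)."""
    grad = np.zeros_like(mu)
    for i in range(mu.size):
        d = np.zeros_like(mu)
        d[i] = eps
        grad[i] = (loss(mu + d) - loss(mu - d)) / (2 * eps)
    return grad

def fit_friction(loss, steps=100, lr=1e-2, beta=0.9, seed=0):
    rng = np.random.default_rng(seed)
    mu = rng.uniform(0.0, 1.0, size=2)        # mu^(0): random initialization (Eq. 3)
    v = fd_gradient(loss, mu)                 # v_0 = grad_0 L2          (Eq. 2)
    for _ in range(steps):
        g = fd_gradient(loss, mu)             # grad_k L2 at mu^(k-1)
        v = beta * v + (1.0 - beta) * g       # EMA of the gradient sequence
        mu = mu - lr * v                      # mu^(k) = mu^(k-1) - eta * v_k
    return mu                                 # fitted (mu_s, mu_k)

# Usage (with the hypothetical `simulate` wrapper and extracted trajectories):
# mu_s, mu_k = fit_friction(lambda m: objective(m, extracted, simulate))
```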
Figure 6 shows how the lack of z axis in parameters fitting results in a uncontrolled expansion of the columnar layer also in the z direction. An AI Pipeline to Substitute Plastic w. Paper in Automatic Pallet Wrapping Fig. 5. Example results of elaborations on video recordings, where the first column refers to Boosting, the second to KFC, and the third to MOSSE trackers. Fig. 6. Example frames of a simulation with only one layer of products, at 0.4 g acceleration. On the left, the rotation of the rightmost bundles, and on the right the expansion of the columnar layout in z direction. Conclusions and Future Works This paper proposes an AI pipeline to tackle the challenge of substituting LLDPE stretch film with Kraft paper in automatic pallet wrapping. The design of the pipeline strongly relies on the EUMOS 40509 requirements of safety for rail and road transport of packages. The key idea is to simulate acceleration tests bench according to such a regulation, to produce an automatic recommendation system for the development of wrapping formats. The first important step of the pipeline is to fit the simulation on real tests. A computer vision approach, with mixed methods, serves as an input to retain measurements of products displacements during acceleration tests with ESTL machine. Attempts with single- and double-layers settings showed good results. Handling of noisy data was addressed by using a momentum version of gradient descent, which aims at tuning the parameters of the virtual simulation. Results are promising, even if further investigations are needed, and future work will be devoted to the identification of critical points of tension which could impact the paper wrapping, the E. Iotti et al. developing of a realistic simulation of the whole envelope (with wrapping), and the use of such insight to tell final users how many wrapping layers are needed, at which heights, with what tension, and so on. Moreover, XAI techniques would be of primarily relevance in this last phase of the pipeline. References 1. Autocad: https://www.autodesk.it/solutions/simulation/overview 2. Engineering & solutions for transport & logistic nv (estl nv, https://www.estl.be). Wafelstraat 46, 8540 Deerlijk, Belgium 3. EUMOS, the European safe logistics association. quality standards. https://eumos. eu/quality-standards/. Accessed 5 Aug 2022 4. Irrlicht: https://irrlicht.sourceforge.io/ 5. Opencv: https: //opencv.org/ 6. Physx: https://github.com/NVIDIAGameWorks/PhysX 7. Pychrono: https://www.projectchrono.org/pychrono/ 8. Simscape: https://www.mathworks.com/products/simscape-multibody.html 9. European Commission: A European Strategy for Plastics in a Circular Economy 2018a (2018). https://ec.europa.eu/environment/circular-economy/pdf/plasticsstrategy-annex.pdf. Accessed 5 Aug 2022 10. European Green Deal (2019–2024). https://ec.europa.eu/info/strategy/priorities2019-2024/european-green-deal_en. Accessed 5 Aug 2022 11. European Commission. Proposal for a regulation of the European Parliament and of the council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence act) and amending certain union legislative acts (2021). https://eur-lex.europa.eu/ legal-content/EN/TXT/HTML/?uri=CELEX: 52021PC0206&from=EN. Accessed 5 Aug 2022 12. European Commission. A European approach to artificial intelligence (2022). https://digital-strategy.ec.europa.eu/en/ policies/european-approach-artificialintelligence. Accessed 5 Aug 2022 13. 
Anitescu, M., Tasora, A.: An iterative approach for cone complementarity problems for nonsmooth dynamics. Comput. Optim. Appl. 47(2), 207–235 (2010). https:// doi.org/10.1007/s10589-008-9223-4 14. Arrieta, A.B., et al.: Explainable artificial intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI. Inf. Fusion 58, 82–115 (2020) 15. Babenko, B., Yang, M.H., Belongie, S.: Visual tracking with online multiple instance learning. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 983–990. IEEE (2009) 16. Bolme, D.S., Beveridge, J.R., Draper, B.A., Lui, Y.M.: Visual object tracking using adaptive correlation filters. In: 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 2544–2550. IEEE (2010) 17. Brunetti, A., Buongiorno, D., Trotta, G.F., Bevilacqua, V.: Computer vision and deep learning techniques for pedestrian detection and tracking: a survey. Neurocomputing 300, 17–33 (2018) 18. Farnebäck, G.: Two-frame motion estimation based on polynomial expansion. In: Bigun, J., Gustavsson, T. (eds.) SCIA 2003. LNCS, vol. 2749, pp. 363–370. Springer, Heidelberg (2003). https://doi.org/10.1007/3-540-45103-X_50 19. Foschi, E., Bonoli, A.: The commitment of packaging industry in the framework of the European strategy for plastics in a circular economy. Adm. Sci. 9(1), 18 (2019) An AI Pipeline to Substitute Plastic w. Paper in Automatic Pallet Wrapping 20. Girshick, R.: Fast R-CNN. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1440–1448 (2015) 21. Girshick, R., Donahue, J., Darrell, T., Malik, J.: Rich feature hierarchies for accurate object detection and semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 580–587 (2014) 22. Grabner, H., Grabner, M., Bischof, H.: Real-time tracking via on-line boosting. In: Bmvc. vol. 1, p. 6. Citeseer (2006) 23. Henriques, J.F., Caseiro, R., Martins, P., Batista, J.: High-speed tracking with kernelized correlation filters. IEEE Trans. Pattern Anal. Mach. Intell. 37(3), 583– 596 (2014) 24. Iman, M., Arabnia, H.R., Branchinst, R.M.: Pathways to artificial general intelligence: a brief overview of developments and ethical issues via artificial intelligence, machine learning, deep learning, and data science. In: Arabnia, H.R., Ferens, K., de la Fuente, D., Kozerenko, E.B., Olivas Varela, J.A., Tinetti, F.G. (eds.) Advances in Artificial Intelligence and Applied Cognitive Computing. TCSCI, pp. 73–87. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-70296-0_6 25. Kalal, Z., Mikolajczyk, K., Matas, J.: Forward-backward error: automatic detection of tracking failures. In: 2010 20th International Conference on Pattern Recognition, pp. 2756–2759. IEEE (2010) 26. Kalal, Z., Mikolajczyk, K., Matas, J.: Tracking-learning-detection. IEEE Trans. Pattern Anal. Mach. Intell. 34(7), 1409–1422 (2011) 27. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: Advances in neural Information Processing Systems, vol. 25 (2012) 28. LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proc. IEEE 86(11), 2278–2324 (1998) 29. Lukezic, A., Vojir, T., Zajc, L.C., Matas, J., Kristan, M.: Discriminative correlation filter with channel and spatial reliability. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6309–6318 (2017) 30. 
Marcus, G.: Deep learning: A critical appraisal. arXiv preprint arXiv:1801.00631 (2018) 31. Matthews, C., Moran, F., Jaiswal, A.K.: A review on European union’s strategy for plastics in a circular economy and its impact on food safety. J. Clean. Prod. 283, 125263 (2021) 32. Neubeck, A., Van Gool, L.: Efficient non-maximum suppression. In: 18th International Conference on Pattern Recognition (ICPR’06), vol. 3, pp. 850–855. IEEE (2006) 33. Nichol, A., Achiam, J., Schulman, J.: On first-order meta-learning algorithms. arXiv preprint arXiv:1803.02999 (2018) 34. Redmon, J., Divvala, S., Girshick, R., Farhadi, A.: You only look once: unified, real-time object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 779–788 (2016) 35. Ren, S., He, K., Girshick, R., Sun, J.: Faster R-CNN: towards real-time object detection with region proposal networks. In: Advances in Neural Information Processing Systems, vol. 28 (2015) 36. Ronneberger, O., Fischer, P., Brox, T.: U-Net: convolutional networks for biomedical image segmentation. In: Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F. (eds.) MICCAI 2015. LNCS, vol. 9351, pp. 234–241. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-24574-4_28 37. Shekhar, S.S.: Artificial intelligence in automation. Artif. Intell. 3085(06), 14–17 (2019) E. Iotti et al. 38. Sun, K., Xiao, B., Liu, D., Wang, J.: Deep high-resolution representation learning for human pose estimation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5693–5703 (2019) 39. Sutskever, I., Martens, J., Dahl, G., Hinton, G.: On the importance of initialization and momentum in deep learning. In: International Conference on Machine Learning, pp. 1139–1147. PMLR (2013) 40. Tasora, A., et al.: Chrono: an open source multi-physics dynamics engine. In: Kozubek, T., Blaheta, R., Šístek, J., Rozložník, M., Čermák, M. (eds.) HPCSE 2015. LNCS, vol. 9611, pp. 19–49. Springer, Cham (2016). https://doi.org/10.1007/ 978-3-319-40361-8_2 41. Tkaczyk, S., Drozd, M., Kędzierski, Ł, Santarek, K.: Study of the stability of palletized cargo by dynamic test method performed on laboratory test bench. Sensors 21(15), 5129 (2021) 42. Wan, J., Li, X., Dai, H.N., Kusiak, A., Martínez-García, M., Li, D.: Artificialintelligence-driven customized manufacturing factory: key technologies, applications, and challenges. Proc. IEEE 109(4), 377–398 (2021). https://doi.org/10. 1109/JPROC.2020.3034808 AI Applications Transformer Based Motion In-Betweening Pavithra Sridhar(B) , V. Aananth, Madhav Aggarwal, and R. Leela Velusamy National Institute of Technology - Tiruchirappalli, Tiruchirappalli 620015, TN, India [emailprotected], [emailprotected] Abstract. In-betweening is the process of drawing transition frames between temporally-sparse keyframes to create a smooth animation sequence. This work presents a novel transformer-based in-betweening technique that serves as a tool for 3D animators. We first show that this problem can be represented as a sequence-to-sequence problem and introduce Tween Transformers - a model that synthesizes high-quality animations using temporally-sparse keyframes as input constraints. We evaluate the model’s performance via two complementary methods - quantitative and qualitative evaluation. The model is compared quantitatively with the state-of-the-art models using LaFAN1, a highquality animation dataset. Mean-squared metrics like L2P, L2Q, and NPSS are used for evaluation. 
Qualitatively, we provide two straightforward methods to assess the model’s output. First, we implement a custom ThreeJs-based motion visualizer to render the ground truth, input, and output sequences side by side for comparison. The visualizer renders custom sequences by specifying skeletal positions at temporally-sparse keyframes in JSON format. Second, we build a motion generator to generate custom motion sequences using the model. Keywords: Motion in-betweening LAFAN1 · Kinematics · Transformer · Realistic and accurate animation generation is an important but challenging problem with many applications, including animating 3D characters in films, real-time character motion synthesis in Video Games, and Educational applications. One widely used method to generate animations is motion inbetweening, commonly known as tweening. It generates intermediate frames called in-betweens between two temporally sparse keyframes to deliver an illusion of movement by smoothly transitioning from one position to another. In traditional animation pipelines, animators manually draw motion frames between a set of still keyframes indicative of the most critical positions the body must be at during its motion sequence. Recent improvements include Motion Capture (MOCAP) technologies [9] and query-based methods [15,19] to generate animations. However, MOCAP technology is expensive, and human-drawn animations are preferred. With the rise of computer-aided animation, deep learningbased algorithms have enabled the smooth generation of keyframes from sparse c The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 A. Dovier et al. (Eds.): AIxIA 2022, LNAI 13796, pp. 299–312, 2023. https://doi.org/10.1007/978-3-031-27181-6_21 P. Sridhar et al. frames by learning from large-scale motion capture data. Existing models currently use Recurrent Neural Networks (RNNs) [7,10], Long Short Term Memory Networks (LSTMs) [8], and BERT-based models [3,4]. The complexity of generating character animations includes 1. Replicating complex human behavior to create realistic characters 2. Predominantly used transition generation methods are either expensive or inefficient 3. RNNs/LSTMs, though they can capture long-term dependencies, cannot be parallelized due to the sequential processing of input, resulting in longer training times 4. RNNs/ LSTMs do not support transfer learning making it hard to use pretrained models Inspired by the concept of self-attention to capture long-term dependencies, this paper proposes a transformer-based model to generate realistic animation sequences. Model generalization constitutes the main effort this framework puts into improving the performance of machine learning predictions. This would be analogous to large text transformer models like GPT-3 [2]. This work not only eases the effort put in by the animators but also helps researchers by unblocking transfer learning for the task of in-betweening, thus introducing a level of generalization into the model. Overall, the contributions in this paper can be summarized as follows:1 1. Represent motion in-betweening as a sequence to sequence problem where the input sequence consists of keyframes and the output sequence represents the complete and smoothed motion sequence. 2. Set a baseline for the input sequence by filling the frames between the keyframes with interpolated values. 3. 
Experiment with the efficiency and viability of using transformers to achieve sequence to sequence translation for human motion and compare them with the existing results. 4. Evaluate the model against other state-of-the-art models [4,8,16] for the same task using L2P, L2Q, and NPSS metrics. 5. Build a visualizer and a motion generator that qualitatively evaluates the output of the model in comparison to the ground truth and input sequences. Related Work The problem is analogous to machine translation, where sequence-to-sequence (seq2seq) architectures are prevalent [1,18,21]. “Encoder-only” models like BERT [3] are designed to learn the context of a word based on all its surroundings (left and right of the word), making them suitable for feature extraction, sentiment classification, or span prediction tasks but not for generative tasks like 1 Code can be found in https://github.com/Pavi114/motion-completion-usingtransformers. Transformer Based Motion In-Betweening translation or sequence completion. The pre-training objectives used by encoderdecoder transformers like T5 [17] include a fill-in-the-blank task where the model predicts missing words within a corrupted piece of text that is analogous to inbetweening when motion sequences replace sentences. Early works in human motion prediction include using Conditional Restricted Boltzmann Machines (RBMs) [20] to encode the sequence information in latent variables and predict using decoders. More recently, many RNN-based approaches like Encoder-Recurrent-Decoder (ERD) networks [5] propose separating spatial encoding and decoding from the temporal dependencies. Other recent approaches investigate new architectures like transformers [13] and loss functions to improve human motion prediction further [6,12]. Initial approaches in motion in-betweening focused on generating missing frames by integrating keyframe information with spacetime models [23]. The following widely successful method for in-betweening adopted a probabilistic approach, framing it as a Maximum A posterior Optimization problem (MAP) [14], dynamical Gaussian process model [22], or Markov models with dynamic auto-regressive forests [11]. The latest deep learning approaches include works by Holden et al. [10], and Harvey et al. [7] and helped RNNs dominate this field. The latest work using RNN focuses on augmenting a Long Short Term Memory(LSTM) based architecture with time-to-arrival embeddings and a scheduled target noise vector, allowing the system to be robust to target distortions [8]. Some recent work includes BERT-based encoder-only models [3,4] that predict the entire sequence in one pass and deep learning approaches for interpolation [16]. However, BERT-based models will be less effective than encoder-decoder models for generative tasks. The following sections detail the model architecture, Tween Transformers, to perform motion frame completion similar to sentence completion. 3.1 Tween Transformers (TWTR) The architecture of Tween Transformers (TWTR) consists of four main components: 1. Input masking module 2. Input encoding neural network that encodes each motion sequence and converts the input to a set of sequential tokens 3. Transition generation network that includes a standard transformer comprising encoder and decoder modules with feed-forward and multi-head attention networks. 4. Output decoding neural network that computes a sequence of character motion. P. Sridhar et al. Fig. 1. 
Model architecture of TWTR While the transition generation module learns the temporal dependencies, the input and output encoding networks aim to learn spatial dependencies between the different body joints for encoding and decoding motion sequences. Finally, the model also uses multiple losses, including forward kinematics loss, to improve the realism of the generated sequences. It is assumed that the input has both position (x, y, z) and orientation (q0, q1, q2, q3) variables. Therefore, a single pose can be defined with a root position coordinate P ∈ R3 and a quaternion matrix Q ∈ RJ×4 , where J represents the joint number of the input pose (here, 22). The following sections discuss the model’s architecture in detail, as indicated in Fig. 1. Input Masking. There are multiple keyframe gaps k specified in the model configuration. The frames belonging to the keyframe gap are filled with interpolated values derived from the frames constituting the two ends of the keyframe gap. Two kinds of interpolations are carried out and compared. They are implemented in the following ways: – positions and rotations are linearly interpolated – positions are linearly interpolated while rotations are spherically Transformer Based Motion In-Betweening Input Encoding. As seen in Fig. 1, model encoding has three modules - Input Sequence Encoding, Positional Encoding, and Keyframe Embedding. 1. Input Sequence Encoding: The input sequence encoder network is a set of three Linear encoders fully connected to two-layer Feed-Forward Networks (FFN) with ReLU activations. The input sequence encoder takes in the global root position root p, local quaternions q, and global root velocity root v and outputs a set of “sequential tokens”. The hidden sizes of the FFNs are 16, 8, and 8 for q, root p, and root v, respectively. The embedding hyperparameter defines the output sizes of the FFNs. The outputs from the FFNs are concatenated to form the output of the input sequence encoding network. Equation (1) describes the Linear Encoder, and Eq. (2) describes the Input Sequence Encoder. L(x) = Linear(ReLU(Linear(x))) I(root p, root v, q) = Lp (root p) Lv (root v) Lq (q1 ) ... Lq (qJ ) where root p ∈ R3 , root v ∈ R3 , qi ∈ R4 , I denotes the Input Sequence Encoder, and L denotes the Linear Encoder. 2. Positional Encoding: Positional encoding, a popular method introduced by Vaswani et al. [21], involves adding a set of predefined sinusoidal and cosine signals to introduce temporal knowledge to the transformer model. The positional encoding for source Zs = [ztta,2i ] and target Zt = [ztta,2i ] is computed using Eq. (3) tta ) basis2i/d (3) tta ) ztta,2i+1 = cos( basis2i/d where tta is the number of timesteps until arrival and the basis component influences the rate of change in frequencies along the embedding dimension d. A basis of 10,000 is used. 3. Keyframe Embedding: Following previous works [4], the model incorporates additive keyframe embeddings. The keyframe embeddings Ekf classify the frames in the sequence into keyframes, unknown frames, and ignored frames. They’re represented by learnable embedding vectors {ˆ e0 , eˆ1 , eˆ2 } respectively. e0 , eˆ1 , eˆ2 } The keyframe embeddings are represented by Eq. (4), where etkf ∈ {ˆ and T is the sequence length. The embeddings are added to the input sequence, similar to positional encodings. ztta,2i = sin( Ekf = [e1kf , e2kf , ..., eTkf ] P. Sridhar et al. Transformer. A transformer consists of multiple encoder and decoder layers. 
Each encoder includes a multi-head self-attention layer (MHSA) and a feedforward network (FFN), and each decoder consists of a masked multi-head selfattention layer (MMHSA), multi-head attention layer (MHA) and a feed-forward network. The attention function leveraged in the transformer maps a query and a set of key-value pairs - all vectors - to an output. The processing of a single attention head can be represented as follows: QK T Attention(Q, K, V ) = Sof tmax( √ )V dk where Q = Wq A represents a query matrix, K = Wk A represents a key matrix, and V = Wv A represents a value matrix. Wq , Wk , and Wv are the corresponding weight matrices, and dk represents the dimension of the key matrix. The Query matrix can be interpreted as the keyframe for which Attention is calculated. The Key and Value matrices represent the keyframes that are “attended to”, i.e., how relevant that keyframe is to the query keyframe. In MMHSA, the target is masked before applying the attention mechanism. All the attention outputs are concatenated and sent to the FFN. Output Decoding. The decoder takes in the concatenated “sequential tokens” outputted by the Input Sequence Encoder and outputs the global root position root p, local quaternions q, and global root velocity root v. To reverse engineer the spatial dependencies, each of the three FFNs, one for each output, comprises two linear layers with ReLU activation. The hidden sizes of the FFNs are the same as in the Input Sequence Encoder, and the output sizes are defined by the original dimensions of the three parameters. Equation (6) describes the Output Decoder. O(x) = (Lp (x[: dp ]), Lv (x[dp : dp + dv ), ⎤ Lq (x [ dp + dv : dp + dv + dq ]) ⎢ ⎥ Lq (x [ dp + dv + dq : dp + dv + 2 × dq ] ⎥ Q=⎢ ⎣ ⎦ ... Lq (x [ dp + dv + (J − 1) × dq : dp + dv + J × dq ] where dp , dv , and dq are embedding dimensions for p, v, and q. x[i : j] represents a tensor containing the values in x from the ith index to the (j − 1)th index. J denotes the number of joints in the skeleton, Q ∈ RJ×4 denotes the tensor of stacked quaternions, O denotes the Output Decoder, and L denotes the Linear Encoder. 3.2 Loss Computation Given a collection of predicted motion sequences and the ground truth, inbetweening loss is computed as the scaled sum of two individual losses - Reconstruction loss and Forward Kinematics (FK) loss. Transformer Based Motion In-Betweening L = αr LR + αf k LF K where αr and αF K are constants to balance the disparity of individual losses. For training we use αr = 100 and αF K = 1. Reconstruction Loss LR . Reconstruction loss evaluates the ability of the model to “reconstruct” the target sequence from the input sequence. Reconstruction loss accounts for the difference in output and target quaternions values and is computed using an L1 norm. While Harvey et al. [8] compute and sum reconstruction losses for q, x, and contacts, they acknowledge that the most important component is q. Reconstruction loss is computed using Eq. (8). LR = N −1 T −1 1 t qˆ − qnt N T n=0 t=0 n where qˆnt is the rotational quaternion of the predicted motion sequence n at time t. q refers to the ground truth quaternion. N refers to the number of sequences, and T refers to the length of each motion sequence. Forward Kinematics Loss LF K . Forward Kinematics loss compares the difference in the global positions of joints between the ground truth and the model’s output. Forward Kinematics loss evaluates the ability of the model to “understand” the relationships between relative angles and global positions. 
Although the offsets of the various joints in the skeleton are not provided to the model, it learns to respect human geometry and maintain correct posture by minimizing the Forward Kinematics loss. The Forward Kinematics loss is computed using Eq. (9):

$$L_{FK} = \lVert \hat{p}_{global} - p_{global} \rVert_1 + \lVert \hat{q}_{global} - q_{global} \rVert_1 \qquad (9)$$

where $\hat{p}_{global}$ and $\hat{q}_{global}$ are derived from the local coordinates using Forward Kinematics, $FK(\hat{p}_{local}, \hat{q}_{local})$, and, similarly, $p_{global}$ and $q_{global}$ are derived from the local coordinates using $FK(p_{local}, q_{local})$. 3.3 Following previous works [8,16], the entire dataset was split into windows of maximum length $T_{max} = 65$. To construct each batch, the number of start keyframes is set to 10 and the number of end keyframes to 1. The number of in-between frames is sampled from the range [5, 44] without replacement. The weight associated with the number of in-between frames $n_{in}$ is set to be inversely proportional to it, $w_{n_{in}} = 1/n_{in}$. This prevents overfitting on the windows with a large number of in-between frames. Shorter windows are sampled more often as they are more abundant and hence harder to overfit. Therefore, the number of unique non-overlapping sequences of a given total length $10 + 1 + n_{in}$ is approximately inversely proportional to $n_{in}$. Finally, given the total sampled sequence length, the sequence start index is sampled uniformly at random in the range $[0, T_{max} - (1 + 10 + n_{in})]$. Fig. 2. Stills from the Ground Truth, LERP, Model Output, and Smoothed Output sequences at different timestamps for the action “Aiming2” performed by subject “Subject5”. Considering the frames at t = 20, it is clear that the output produced by our model resembles the ground truth more than the interpolated sequence. 4 Setup and Experimental Results 4.1 Dataset The publicly available Ubisoft La Forge Animation (LaFAN1) dataset was used for all the experiments. Introduced by Harvey et al. [8] at Ubisoft, LaFAN1 consists of general motion capture clips in high definition. The motion sequences are in BVH format. The LaFAN1 dataset comprises five subjects, 77 sequences, and 496,672 motion frames at 30 fps, for a total of 4.6 h. There are around 15 themes, from everyday actions like walking, sprinting, and falling to uncommon actions like crawling, aiming, and a few sports movements. As in other works [4,8,16], all sequences of subject five were used for testing and benchmarking, with the remaining ones used for training. 4.2 Evaluation Metrics The model is evaluated against the L2P, L2Q, and NPSS metrics used in previous studies on the subject-five sequences of the LaFAN1 dataset. L2P is the average L2 distance of the positions between the predicted motion sequence and the ground truth sequence; Eq. (10) shows the L2P calculation. Similarly, L2Q is the average L2 distance of the global quaternions. A combination of local quaternions, positions, and motion sequence properties is used to compute these metrics. Equation (11) shows the L2Q calculation.

$$L2P = \frac{1}{NT} \sum_{n=0}^{N-1} \sum_{t=0}^{T-1} \left\lVert \hat{p}_n^t - p_n^t \right\rVert_2 \qquad (10)$$

Fig. 3. Stills from the Ground Truth, LERP, Model Output, and Smoothed Output sequences at different timestamps for the action “Dance2” performed by subject “Subject5”. The dance action is unconventional and full of seemingly random movements.
Considering the frames at t = 10, t = 20, and t = 30, the output produced by the model is better at t = 10, the output produced by interpolation is better at t = 20, and neither come close at t = 30. N −1 T −1 1 t L2Q = qˆ − qn t N T n=0 t=0 n where qˆ is the rotational quaternion of the predicted motion sequence n at time t. q refers to the ground truth quaternion. Similarly, pˆ refers to the position of the predicted motion sequence p refers to the ground truth position. N refers to the number of sequences, and T refers to the length of each motion sequence. Normalized Power Spectrum Similarity (NPSS) is an approach comparing angular frequencies with the ground truth. It is an Earth Mover Distance (EMD) based metric over the power spectrum, which uses the squared magnitude spectrum values of the Discrete Fourier Transform coefficients. Equation (12) computes the NPSS metric. N −1 T −1 N P SS = j=0 wi,j ∗ emdi,j N −1 T −1 i=0 j=0 wi,j where emdi,j refers to the EMD distance, and wi,j refers to the weights. Harvey et al. [8] state that the L2P metric is a better metric than any angular loss for assessing the visual quality of transitions with global displacements as it helps us weigh the positions of the bones and joints. Hence, they argue that L2P is a much more critical metric than L2Q and NPSS. P. Sridhar et al. Fig. 4. Still from the motion generator Data Preprocessing First, the local position and orientation values from the BVH files provided in the LaFAN1 dataset [7] are extracted. Twenty-two joints are considered for the skeleton model. Forward Kinematics was used to compute the absolute positions of each joint from the relative positions (relative to hip) given in the dataset. Positions are modeled as standard matrices, and orientations are modeled using quaternions. Further, global position and root velocity are computed from local positions using Forward kinematics. 4.4 Most hyperparameters from previous baselines are retained to show the relative improvement in performance using Transformers. This study presents a novel hyperparameter comparison using different interpolation techniques - Linear and Spherical, to compare the performance of several baseline studies. A batch size of 64 for 100 epochs was used. Adam optimizer with a learning rate of 10−4 along with a constant dropout of 0.2 was utilized. Keyframe gaps of 5, 15, and 30 were tested to compare the performance of the transformer over higher frame gaps. 4.5 Visualizer and Motion Generator To qualitatively evaluate the model, a visualizer was built using Node and ThreeJs that juxtaposed the ground truth, interpolated sequence, output sequence, and a smoothed output sequence of the transformer model. The model’s output is stored in JSON format and rendered using a custom webbased visualizer. The visualizer was built from scratch using Typescript, NodeJs, Express, and ThreeJs. Figures 2 and 3 show a sample output of the model generated using the visualizer. Further, the motion generator was built using Python, Transformer Based Motion In-Betweening Fig. 5. (a) Comparision of model performance at keyframe gap = 30 with three commonly used metrics - L2P, L2Q, and NPSS, (b) Comparison of L2P losses at various keyframe gaps of the motion in-betweening methods included in this study, (c) Comparison of NPSS losses at various keyframe gaps of the motion in-betweening methods included in this study, (d) Comparison of L2Q losses at various keyframe gaps of the motion in-betweening methods included in this study. 
Flask, Node, and ThreeJs using the visualizer module as a base. The motion generator allows a user to modify keyframes in a given motion sequence and generate in-between frames for the same. The plugin consists of a backend Flask server that uses an instance of our model to generate the in-between frames. Figure 4 shows a still from the motion generator where the stick model is animating a generated custom motion sequence. 4.6 As expected, SLERP performs better than LERP. However, it is observed that the performance at 30 fps is almost comparable, as seen in Fig. 5a. This is because the spherical motion becomes almost linear for very short timescales. As seen in Table 1, it is inferred that the Tween Transformer model outperforms the interpolation model and performs closely with the baseline models. Figures 5b, 5d, and 5c confirm that Tween Transformers follow a similar trend to that of P. Sridhar et al. Table 1. The Tween Transformer model is compared with baseline Motion Inbetweening methods using L2P, L2Q, and NPSS metrics for various sequence lengths. The Interpolation based methods are included as part of the study. TT (Ours) refers to the Tween Transformer model. Length Zero Velocity SLERP TGrec TGcomplete SSMCTlocal SSMCTGlobal Δ-Interpolator TT (Ours) L2Q 5 15 L2P 5 15 NPSS 5 0.56 0.22 0.21 0.17 0.17 0.14 0.11 0.16 1.51 0.98 0.83 0.69 0.71 0.61 0.57 0.65 1.52 0.37 0.32 0.23 0.23 0.22 0.13 0.21 6.60 2.32 1.82 1.28 1.37 1.1 1.00 1.21 0.0053 0.0023 0.0025 0.0020 0.0019 0.0016 0.0014 0.0019 0.0522 0.0391 0.0304 0.0258 0.0291 0.0234 0.0217 0.0261 0.2318 0.2013 0.1608 0.1328 0.143 0.1222 0.1217 0.1358 1.10 0.62 0.48 0.42 0.44 0.36 0.32 0.39 3.69 1.25 0.85 0.65 0.74 0.56 0.47 0.59 other models. Experiments show that training is crucial to obtain a visually smooth output. Moving Average Smoothing was observed to have minimal effect on the output sequence as the model trains. This work presents the Tween Transformer, a novel, robust, transformer-based motion in-betweening technique that serves as a tool for 3D animators and overcomes the challenges faced by existing RNN-based models [8,16], including sequential training, capturing long-term dependencies, and transfer learning. The generic model treats the application of in-betweening as a sequenceto-sequence problem and solves it using a transformer-based encoder-decoder architecture. It unboxes the potential of robust Transformer-based models for motion in-betweening applications. To conclude, the results encourage the application of low-resource cost-efficient models and enable further developments with the scope of transfer learning on the generalized implementation. References 1. Bahdanau, D., Cho, K., Bengio, Y.: Neural machine translation by jointly learning to align and translate. In: Bengio, Y., LeCun, Y. (eds.) 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, 7–9 May 2015, Conference Track Proceedings (2015). https://arxiv.org/abs/1409.0473 2. Brown, T., et al.: Language models are few-shot learners. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 1877–1901. Curran Associates, Inc. (2020). https:// proceedings.neurips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f6 4a-Paper.pdf Transformer Based Motion In-Betweening 3. Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. 
In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423, https://aclanthology. org/N19-1423 4. Duan, Y., et al.: Single-shot motion completion with transformer. arXiv preprint arXiv:2103.00776 (2021) 5. Fragkiadaki, K., Levine, S., Malik, J.: Recurrent network models for kinematic tracking. CoRR abs/1508.00271 (2015). https://arxiv.org/abs/1508.00271 6. Gopalakrishnan, A., Mali, A.A., Kifer, D., Giles, C.L., II, A.G.O.: A neural temporal model for human motion prediction. CoRR abs/1809.03036 (2018). https:// arxiv.org/abs/1809.03036 7. Harvey, F.G., Pal, C.: Recurrent transition networks for character locomotion. In: SIGGRAPH Asia 2018 Technical Briefs. SA 2018, Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3283254.3283277 8. Harvey, F.G., Yurick, M., Nowrouzezahrai, D., Pal, C.: Robust motion inbetweening. ACM Trans. Graph. 39(4), 1–12 (2020). https://doi.org/10.1145/ 3386569.3392480 9. Holden, D.: Robust solving of optical motion capture data by denoising. ACM Trans. Graph. 37(4), 1–12 (2018). https://doi.org/10.1145/3197517.3201302 10. Holden, D., Saito, J., Komura, T.: A deep learning framework for character motion synthesis and editing. ACM Trans. Graph. 35(4), 1–11 (2016). https://doi.org/10. 1145/2897824.2925975 11. Lehrmann, A.M., Gehler, P.V., Nowozin, S.: Efficient nonlinear Markov models for human motion. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2014) 12. Liu, Z., et al.: Towards natural and accurate future motion prediction of humans and animals. In: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 9996–10004 (2019). https://doi.org/10.1109/CVPR. 2019.01024 ´ Villamizar, M., Odobez, J.: Pose transformers (POTR): 13. Mart´ınez-Gonz´ alez, A., human motion prediction with non-autoregressive transformers. CoRR abs/2109.07531 (2021). https://arxiv.org/abs/2109.07531 14. Min, J., Chen, Y.L., Chai, J.: Interactive generation of human animation with deformable motion models. ACM Trans. Graph. 29(1), 1–12 (2009). https://doi. org/10.1145/1640443.1640452 15. M¨ uller, M., R¨ oder, T., Clausen, M.: Efficient content-based retrieval of motion capture data. ACM Trans. Graph. 24(3), 677–685 (2005). https://doi.org/10.1145/ 1073204.1073247 16. Oreshkin, B.N., Valkanas, A., Harvey, F.G., M´enard, L.S., Bocquelet, F., Coates, M.J.: Motion Inbetweening via Deep Δ-Interpolator. arXiv e-prints arXiv:2201.06701 (2022) 17. Dhariwal, P., Sastry, G., McCandlish, S.: Enct5: Fine-tuning t5 encoder for discriminative tasks (2021) 18. Ren, M., Kiros, R., Zemel, R.S.: Exploring models and data for image question answering. In: Proceedings of the 28th International Conference on Neural Information Processing Systems, vol. 2, pp. 2953–2961. NIPS 2015, MIT Press, Cambridge, MA, USA (2015) P. Sridhar et al. 19. Tanuwijaya, S., Ohno, Y.: TF-DF indexing for mocap data segments in measuring relevance based on textual search queries. Vis. Comput. 26(6–8), 1091–1100 (2010). https://doi.org/10.1007/ s00371-010-0463-9 20. Taylor, G.W., Hinton, G.E.: Factored conditional restricted Boltzmann machines for modeling motion style. In: Proceedings of the 26th Annual International Conference on Machine Learning ICML 2009, pp. 1025–1032. 
Association for Computing Machinery, New York, NY, USA (2009). https://doi.org/10.1145/1553374.1553505 21. Vaswani, A., et al.: Attention is all you need. In: Proceedings of the 31st International Conference on Neural Information Processing Systems NIPS 2017, pp. 6000–6010. Curran Associates Inc., Red Hook, NY, USA (2017) 22. Wang, J.M., Fleet, D.J., Hertzmann, A.: Gaussian process dynamical models for human motion. IEEE Trans. Pattern Anal. Mach. Intell. 30(2), 283–298 (2008). https://doi.org/10.1109/TPAMI.2007.1167 23. Witkin, A., Kass, M.: Spacetime constraints. In: Proceedings of the 15th Annual Conference on Computer Graphics and Interactive Techniques SIGGRAPH 1988, pp. 159–168. Association for Computing Machinery, New York, NY, USA (1988). https://doi.org/10.1145/54852.378507 A Logic-Based Tool for Dynamic Generation and Classification of Musical Content Antonio Lieto(B) , Gian Luca Pozzato(B) , Alberto Valese, and Mattia Zito Dipartimento di Informatica, Università di Torino, Turin, Italy {antonio.lieto,gianluca.pozzato,alberto.valese}@unito.it, [emailprotected] Abstract. In this work we present NERVOUS, an intelligent recommender system exploiting a probabilistic extension of a Description Logic of typicality to dynamically generate novel contents in AllMusic, a comprehensive and in-depth resource about music, providing data about albums, bands, musicians and songs (https://www.allmusic.com). The tool can be used for both the generation of novel music genres and styles, described by a set of typical properties characterizing them, and the reclassification of the available songs within such new genres. 1 Introduction The ability of generating new knowledge via conceptual combination concerns highlevel capacities associated to creative thinking and problem solving, and it represents an open challenge for artificial intelligence [2]. Indeed, dealing with this problem requires, from an AI perspective, the harmonization of two conflicting requirements: on the one hand, the need of a syntactic and semantic compositionality; on the other hand, the need of capturing typicality effects. However, such requirements can be hardly accommodated in standard symbolic systems, including formal ontologies [4]. According to a well-known argument [18], prototypes, namely commonsense conceptual representations based on typical properties, are not compositional. Consider a concept like pet fish: it results from the composition of the concept pet and of the concept fish, however, the prototype of pet fish cannot result from the composition of the prototypes of a pet and a fish. For instance, a typical pet is furry, whereas a typical fish is grayish, but a typical pet fish is neither furry nor grayish (typically, it is red). This is a paradigmatic example of the difficulty to address when building formalisms and systems trying to imitate this combinatorial human ability. Examples of such difficulties concern handling exceptions to attribute inheritance and handling the possible inconsistencies arising between conflicting properties of the concepts to be combined. 
In this work we continue our activity started in [9, 10] with the definition of a Typicality Description Logic for concept combination (TCL , typicality-based compositional logic), that we have exploited in order to build a goal-oriented framework for knowledge invention in the cognitive architecture of SOAR [8, 11, 12], as well as for the generation and the suggestion of novel editorial content in multimedia broadcasting [3] and in the artistic domain of paintings, poetic content [15], and museum items [13]. In the Description Logic TCL , “typical” properties can be directly specified by means of c The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 A. Dovier et al. (Eds.): AIxIA 2022, LNAI 13796, pp. 313–326, 2023. https://doi.org/10.1007/978-3-031-27181-6_22 A. Lieto et al. a “typicality” operator T enriching the underlying DL, and a TBox can contain inclusions of the form T(C) D to represent that “typical Cs are also Ds”. As a difference with standard DLs, in the logic TCL one can consistently express exceptions and reason about defeasible inheritance as well. Typicality inclusions are also equipped by a real number p ∈ (0.5, 1] representing the probability/degree of belief in such a typical property: this allows us to define a semantics inspired to the DISPONTE semantics [20] characterizing probabilistic extensions of DLs, which in turn is used in order to describe different scenarios where only some typicality properties are considered. Given a KB containing the description of two concepts CH and CM occurring in it, we then consider only some scenarios in order to define a revised knowledge base, enriched by typical properties of the combined concept C CH CM by also implementing a HEAD/MODIFIER heuristics coming from the cognitive semantics. In this work we exploit the logic TCL in order to dynamically generate novel knowledge by means of a mechanism for commonsense combination, that we apply to data extracted from AllMusic (https://www.allmusic.com), a comprehensive and in-depth resource about music. In particular, we introduce NERVOUS (dyNamic gEneratoR of noVel cOntent in mUSic), a tool which is able to compute the following activities: – it builds the prototypical description of 18 basic musical genres (Blues, Classical, Country, Easy Listening, Holiday and so on), by extracting data about musical genres and songs in AllMusic by means of a crawler. 
Such prototypes are formalized by means of a TCL knowledge base, whose TBox contains both rigid inclusions of the form BasicGenre Concept in order to express essential desiderata but also constraints, for instance Childrens ¬Sex (due to law restrictions, sexual contents for kids are forbidden), as well as prototypical properties of the form p :: T (BasicGenre) TypicalConcept, representing typical concepts of a given genre, where p is a real number in the range (0.5, 1], expressing the degree of belief of such a concept in items belonging to that genre: for instance, 0.84 :: T(AvantGarde) Cerebral is used to express that typical songs belonging to the Avant-garde genre are Cerebral (in some sense) with a probability/degree of belief of the 84%, and such a degree is automatically extracted by NERVOUS from the data available on AllMusic for that genre; – it allows the generation of new musical genres by exploiting the reasoning capabilities of the logic TCL in order to generate new derived genres as the result of the creative combination of two basic or derived ones; – it implements a mechanism of reclassification of the available songs of AllMusic within new genres generated in the previous phase. Intuitively, a song is classified as belonging to the new genre if its moods and themes match the typical properties of the prototype of such a genre, obtaining a score of compatibility higher than 0. A positive matching, namely the same property has a high score in the song and is a typical property in the genre, provides a positive score, whereas a negative one, e.g. the song has a high score for a property which is negated in the prototype of A Logic-Based Tool for Dynamic Generation and Classification of Musical Content the genre, produces a negative score. Songs having at least one positive match and having no negative ones has an overall positive score and is then recommended by NERVOUS for that genre. We have tested NERVOUS by reclassifying the available songs in the highlights of AllMusic with respect to the new generated genres, as well as with an evaluation, in the form of a controlled user study experiment, of the feasibility of using the obtained reclassifications as recommended contents. The obtained results are encouraging and pave the way to many possible further improvements and research directions. 2 Combining Concepts: The Description Logic TCL The tool NERVOUS exploits the Description Logic TCL [9, 10] for the generation of new genres as the combination of two existing ones. The language of TCL extends the basic DL ALC by typicality inclusions of the form p :: T(C) D where p ∈ (0.5, 1] is a real number representing its degree of belief, whose meaning is that “we believe with degree p that, normally, Cs are also Ds”. We avoid degrees p ≤ 0.5 since it would be misleading for typicality inclusions, since typical knowledge is known to come with a low degree of uncertainty. We define a knowledge base K = R, T , A where R is a finite set of rigid properties of the form C D, T is a finite set of typicality properties of the form p :: T(C) D where p ∈ (0.5, 1] ⊆ R is the degree of belief of the typicality inclusion, and A is the ABox, i.e. a finite set of formulas of the form either C(a) or R(a, b), where a, b ∈ O and R ∈ R. The Description Logic TCL relies on the DL of typicality ALC + TR introduced in [5], which allows one to describe the prototype of a concept, in this case a musical genre. 
As a difference with standard DLs, in the logic ALC + TR one can consistently express exceptions and reason about defeasible inheritance as well. For instance, a knowledge base can consistently express that “typical students are young persons”, whereas “normally, senior students are not young persons” by T(Student) Young and T(SeniorStudent) ¬Young, given a knowledge base also containing the standard inclusion SeniorStudent Student, representing that all senior students are students. The semantics of the T operator is characterized by the properties of rational logic [7], recognized as the core properties of nonmonotonic reasoning. The Description Logic ALC + TR is characterized by a minimal model semantics corresponding to an extension to DLs of a notion of rational closure as defined in [7] for propositional logic: the idea is to adopt a preference relation among ALC + TR models, where intuitively a model is preferred to another one if it contains less exceptional elements, as well as a notion of minimal entailment restricted to models that are minimal with respect to such preference relation. As a consequence, the operator T inherits well-established properties like specificity and irrelevance; in the example, the Description Logic ALC + TR allows one to infer that T(Student Italian) Young (being Italian is irrelevant with A. Lieto et al. respect to being young) and, if one knows that Rachel is a typical senior student, to infer that she is not young, giving preference to the most specific information. A model M of TCL extends standard ALC models by a preference relation among domain elements as in the logic of typicality [5]. In this respect, x < y means that x is “more normal” than y, and that the typical members of a concept C are the minimal elements of C with respect to this relation. An element x ∈ ΔI is a typical instance of some concept C if x ∈ C I and there is no C-element in ΔI more normal than x. Definition 1 (Model of TCL ). A model M is any structure ΔI , 0. This song will be then recommended by NERVOUS, as it can be seen in Fig. 2, where a picture of NERVOUS’s interface is shown. It is worth noticing that, in order to provide a “white-box” recommender system, each recommended song is equipped by an explanation, relying on the pipeline implemented by system of concept combination. Let us conclude this section by observing that the fact that a recommended song belongs to both original, basic genres that have been combined is far from being obvious: indeed, the system NERVOUS suggests also the song “Moanin” by Art Blakey & the Jazz Messengers, which is classified by AllMusic as belonging to the genre Jazz. In our opinion, this is a further interesting mechanism providing the required component of surprise in the recommendation, justified by the fact that the description of the song matches the one of the novel genre, the last one only partially inheriting properties from the basic genres whose combination lead to such a new genre. The tool NERVOUS is available at https://github.com/Mattia98779/Nervous. A preliminary version of a web interface is available at https://mattia98779.github.io/#/: by means of such a web interface, a user can select two basic genres and then obtain the list of suggested songs, together with an explanation. 6 Evaluation and Discussion In this section we provide a preliminary evaluation of our tool NERVOUS. We have tested it in two different ways. 
The first evaluation is completely automatic and inheres the capability of the system of generating novel hybrid genres that are able to be populated by the original content of the AllMusic platform via a re-classification mechanism involving the 599 songs of the platform. In this case, the success criterion concerns the avoidance of the creation of empty boxes corresponding to the new generated combined genres. More in detail, at least 69 songs are re-classified by the tool NERVOUS for each derived music genre (the second genre containing “few” songs contains 138 A Logic-Based Tool for Dynamic Generation and Classification of Musical Content Fig. 3. Some statistics about the re-classification of NERVOUS. items), with an average of 307 songs per derived genre. This is summarized in Fig. 3, picture in the left, whereas from the picture on the right we can observe that only 7 out of 599 songs on AllMusic (with very few attributes) are not re-classified in any genre by the system, whereas all the other ones (98.83%) are re-classified in at least one genre. The second evaluation consisted in a user study involving 22 persons (11 females, 11 males, aged 14–72) that evaluated a total of 260 recommendations generated by the system. It is worth observing that this is one of the most commonly used methodology for the evaluation of recommender systems based on controlled small groups analysis [22]. The idea was to estimate the satisfaction of the potential users of the platform when exposed to the contents of the novel categories suggested by NERVOUS: all the participants were voluntary people using an availability sampling strategy. Participants were all naive to the experimental procedure and to the aims of the study. This evaluation was carried out as a classical “one to one” lab controlled experiment (i.e. one person at time with one expert interviewer) and we adopted a thinking aloud protocol, consisting in recording the verbal explanations provided by the people while executing a given laboratory task [16, 17]. In this setting, the users had to start the interview by indicating a couple of preferred genres among those available in AllMusic. This selection triggered both the activation of a novel hybrid prototypical genre by NERVOUS and the corresponding reclassification of the AllMusic songs based on such selection. The output of the system, pruned to show the top 10 best results, was then evaluated with a 1–10 voting scale expressing the satisfaction of the received recommendations. The results we have obtained seem promising: the average score assigned by the users to the recommendations of the reclassified elements is 7.44 out of 10. This score was calculated by considering, for each new category, the score assigned to the top 10 reclassified songs, since they were provided, to the users, as recommendations for the novel genres. It is worth observing that, in few cases, the creative classification performed by the tool NERVOUS has lead to counter-intuitive results. As an example, the song “I’m eighteen” by Alice Cooper, known as “The Godfather of Shock Rock”, is classified as belonging to the derived genre result of the combination between Rap and Avant-garde. We strongly conjecture that these situations could be easily avoided by introducing constraints on some genres by means of rigid negated properties. A. Lieto et al. 
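To make the suggested remedy concrete, the fragment below sketches how a rigid negated property would sit next to typicality inclusions in the TCL notation used in this paper. Apart from the Avant-garde/Cerebral inclusion reported earlier, the genres, concepts, and degrees shown here are hypothetical and are not taken from the AllMusic-derived knowledge base.

```latex
% Illustrative T^CL fragment: a rigid negated inclusion rules a property out of a
% genre outright, while typicality inclusions with degrees in (0.5, 1] describe
% its prototype. Concept names and degrees below are hypothetical, except the
% AvantGarde/Cerebral inclusion mentioned in the text.
\begin{align*}
  \mathit{AvantGarde} &\sqsubseteq \lnot \mathit{PartyAnthem}
      && \text{rigid constraint (hypothetical)}\\
  0.84 ::\ \mathbf{T}(\mathit{AvantGarde}) &\sqsubseteq \mathit{Cerebral}
      && \text{typical property (from the text)}\\
  0.72 ::\ \mathbf{T}(\mathit{Rap}) &\sqsubseteq \mathit{Confrontational}
      && \text{typical property (hypothetical)}
\end{align*}
```

With a rigid constraint of this kind in place, a song whose description asserts the negated concept can never be reclassified into the derived genre, regardless of how well its remaining moods and themes match the prototype.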
Furthermore, most of the people we have interviewed observed that AllMusic adopts a debatable choice of basic genres, in particular concerning the fact that Pop and Rock, two of the most popular music genres in the world, are grouped in a single category. This immediately implies some difficulties in combining its prototype with the one of another basic genre. Moreover, some of the (low ranked) items corresponded to old songs. This follows immediately from the fact that few recent songs belong to the highlights of AllMusic, since they have received a lower number of scores by the portal’s users. Notably the first two of the above mentioned issues are not directly related to NERVOUS, since: i) the system can not know if the association description/item is coherent, but it just provides (for the recommended output) the correspondence already in place in AllMusic; ii) the recommendations of old editorial contents is based on the actual dataset of AllMusic (collecting about six hundred songs). This element can be overcome by simply adding an additional filter about the period preferences of the users. 7 Conclusions and Future Works In this work we have presented NERVOUS, a knowledge-based system for the dynamic generation of novel contents about music, exploiting the reasoning mechanism of the logic TCL in order to generate, reclassify and suggest novel content genres in the context of AllMusic, an online platform collecting in-depth information about music genres, albums, musicians and songs. The core component of the system NERVOUS relies on CoCoS, a tool for combining concepts in the logic TCL . According to [23] recommender systems “try to identify the need and preferences of users, filter the huge collection of data accordingly and present the best suited option before the users by using some well-defined mechanism”. The literature is rich of proposals, that we can partition in three main groups of recommender systems: – collaborative filtering, which exploits similarities of usage patterns among mutually similar users; – content-based filtering, which exploits content similarity; – hybrid filtering, which combines the two approaches. It is easy to observe that the tool NERVOUS could be considered an hybrid recommender system, since in its current form it makes use of content description as the input. However, it differs from the state of the art approaches since it exploits the reasoning power of a logic framework capable of representing new intuitive principles influencing user preferences and usage attitudes which cannot be derived from the pure analysis of content and/or the comparison of similar users. The system NERVOUS has been tested in a twofold evaluation showing promising results for both the automatic evaluation and the user acceptability of the recommended items. With evaluation results at hand, we can observe that NERVOUS represents a good approach at addressing the very well known filter bubble effect [19], since it introduces mechanisms that add a sort of “plausible creativity” and a “reasonable serendipity” in content discovery by users. In future research, we aim at extending our work in several directions. On the one hand, we aim at studying the application of optimization techniques in [1] in order A Logic-Based Tool for Dynamic Generation and Classification of Musical Content improve the efficiency of CoCoS and, as a consequence, of the proposed knowledge generation system. 
On the other hand, we aim at conducting a large scale experiment to further validate the effectiveness of the proposed approach, including people with sensory impairments, with the objective of promoting empathy, cohesion and inclusion across social groups, partially neglected by state-of-the-art recommender systems. References 1. Alberti, M., Bellodi, E., Cota, G., Riguzzi, F., Zese, R.: cplint on SWISH: probabilistic logical inference with a web browser. Intelligenza Artificiale 11(1), 47–64 (2017). https:// doi. org/10.3233/IA-170106 2. Boden, M.A.: Creativity and artificial intelligence. Artif. Intell. 103(1–2), 347–356 (1998) 3. Chiodino, E., Di Luccio, D., Lieto, A., Messina, A., Pozzato, G.L., Rubinetti, D.: A knowledge-based system for the dynamic generation and classification of novel contents in multimedia broadcasting. In: De Giacomo, G., et al., (eds.) ECAI 2020–24th European Conference on Artificial Intelligence, 29 August - 8 September 2020, Santiago de Compostela, Spain, 29 August - 8 September 2020. Frontiers in Artificial Intelligence and Applications, vol. 325, pp. 680–687. IOS Press (2020). https://doi.org/10.3233/FAIA200154 4. Frixione, M., Lieto, A.: Representing and reasoning on typicality in formal ontologies. In: Ghidini, C., Ngomo, A.N., Lindstaedt, S.N., Pellegrini, T. (eds.) Proceedings of the 7th International Conference on Semantic Systems, pp. 119–125. ACM International Conference Proceeding Series, ACM (2011). https://doi.org/10.1145/ 2063518.2063534 5. Giordano, L., Gliozzi, V., Olivetti, N., Pozzato, G.L.: Semantic characterization of rational closure: from propositional logic to description logics. Artif. Intell. 226, 1–33 (2015). https:// doi.org/10.1016/j.artint.2015.05.001 6. Hampton, J.A.: Inheritance of attributes in natural concept conjunctions. Memory Cognition 15(1), 55–71 (1987) 7. Lehmann, D., Magidor, M.: What does a conditional knowledge base entail? Artif. Intell. 55(1), 1–60 (1992). https://doi.org/10.1016/0004-3702(92)90041-U 8. Lieto, A., Perrone, F., Pozzato, G.L., Chiodino, E.: Beyond subgoaling: a dynamic knowledge generation framework for creative problem solving in cognitive architectures. Cogn. Syst. Res. 58, 305–316 (2019). https://doi.org/10.1016/j.cogsys.2019.08.005 9. Lieto, A., Pozzato, G.L.: A description logic of typicality for conceptual combination. In: Ceci, M., Japkowicz, N., Liu, J., Papadopoulos, G.A., Ra´s, Z.W. (eds.) ISMIS 2018. LNCS (LNAI), vol. 11177, pp. 189–199. Springer, Cham (2018). https://doi.org/10.1007/978-3030-01851-1_19 10. Lieto, A., Pozzato, G.L.: A description logic framework for commonsense conceptual combination integrating typicality, probabilities and cognitive heuristics. J. Exp. Theor. Artif. Intell. 32(5), 769–804 (2020). https://doi.org/10.1080/0952813X.2019.1672799 11. Lieto, A., Pozzato, G.L., Perrone, F.: A dynamic knowledge generation system for cognitive agents. In: 31st IEEE International Conference on Tools with Artificial Intelligence, ICTAI 2019, Portland, OR, USA, 4–6 November 2019, pp. 676–681. IEEE (2019). https://doi.org/ 10.1109/ICTAI.2019.00099 12. Lieto, A., Pozzato, G.L., Perrone, F., Chiodino, E.: Knowledge capturing via conceptual reframing: a goal-oriented framework for knowledge invention. In: Proceedings of the 10th ACM Conference on Knowledge Capture, K-CAP 2019, Marina del Rey, pp. 109–114. ACM (2019) A. Lieto et al. 13. 
Lieto, A., Pozzato, G.L., Striani, M., Zoia, S., Damiano, R.: Degari 2.0: a diversity-seeking, explainable, and affective art recommender for social inclusion. Cognitive Syst. Res. 77, 1–17 (2023). https://doi.org/10.1016/j.cogsys.2022.10.001, https://www.sciencedirect.com/ science/article/pii/S1389041722000456 14. Lieto, A., Pozzato, G.L., Valese, A.: COCOS: a typicality based concept combination system. In: Felli, P., Montali, M. (eds.) Proceedings of the 33rd Italian Conference on Computational Logic, Bolzano, Italy, 20–22 September 2018. CEUR Workshop Proceedings, vol. 2214, pp. 55–59. CEUR-WS.org (2018). https://ceur-ws.org/Vol-2214/paper6.pdf 15. Lieto, A., Pozzato, G.L., Zoia, S., Patti, V., Damiano, R.: A commonsense reasoning framework for explanatory emotion attribution, generation and re-classification. Knowl.-Based Syst. 227, 107166 (2021) 16. Newell, A., Shaw, J.C., Simon, H.A.: Report on a general problem solving program. In: IFIP Congress, vol. 256, p. 64. Pittsburgh, PA (1959) 17. Newell, A., Simon, H.A.: Human Problem Solving, vol. 104, n. 9. Prentice-Hall, Englewood Cliffs (1972) 18. Osherson, D.N., Smith, E.E.: On the adequacy of prototype theory as a theory of concepts. Cognition 9(1), 35–58 (1981) 19. Parisier, E.: The Filter Bubble: What the Internet Is Hiding from You (2012) 20. Riguzzi, F., Bellodi, E., Lamma, E., Zese, R.: Probabilistic description logics under the distribution semantics. Semant. Web 6(5), 477–501 (2015). https://doi.org/10.3233/SW-140154 21. Riguzzi, F., Bellodi, E., Lamma, E., Zese, R.: Reasoning with probabilistic ontologies. In: Yang, Q., Wooldridge, M. (eds.) Proceedings of IJCAI 2015, pp. 4310–4316. AAAI Press (2015). https://ijcai.org/proceedings/2015 22. Shani, G., Gunawardana, A.: Evaluating recommendation systems. In: Ricci, F., Rokach, L., Shapira, B., Kantor, P.B. (eds.) Recommender Systems Handbook, pp. 257–297. Springer, Boston, MA (2011). https://doi.org/10.1007/ 978-0-387-85820-3_8 23. Sohail, S.S., Siddiqui, J., Ali, R.: Classifications of recommender systems: a review. Eng. Sci. Technol. Rev. 10(4), 132–153 (2017) Why Can Neural Networks Recognize Us by Our Finger Movements? Elena Mariolina Galdi1(B) , Marco Alberti2 , Alessandro D’Ausilio3 , and Alice Tomassini4 1 Dipartimento di Ingegneria, Universit´ a di Ferrara, Ferrara, Italy [emailprotected] 2 Dipartimento di Matematica e Informatica, Universit´ a di Ferrara, Ferrara, Italy [emailprotected] Dipartimento di Neuroscienze e Riabilitazione, Universit´ a di Ferrara, Ferrara, Italy [emailprotected] 4 Istituto Italiano di Tecnologia, Ferrara, Italy [emailprotected] Abstract. Neurobehavioral evidence suggests that human movement may be characterized by relatively stable individual differences (i.e. individual motor signatures or IMS). While most research has focused on the macroscopic level, all attempts to extract IMS have overlooked the fact that functionally relevant discontinuities are clearly visible when zooming into the microstructure of movements. These recurrent (2–3 Hz) speed breaks (sub-movements) reflect an intermittent motor control policy that might provide a far more robust way to identify IMSs. In this study, we show that individuals can be recognized from motion capture data using a neural network. In particular, we trained a classifier (a convolutional neural network) on a data set composed of time series recording the positions of index finger movements of 60 individuals; in tests, the neural network achieves an accuracy of 80%. 
We also investigated how different pre-processing techniques affect the accuracy in order to assess which motion features more strongly characterize each individual and, in particular, whether the presence of submovements in the data can improve the classifier’s performance. Keywords: Explainable AI · Convolutional neural networks capture · Movement analysis · Individual motor signature · Motion The possibility of recognizing an individual on the basis of his/her movements or gestures has been studied in depth in the past years due to its significant applications in the security and medical areas. Many researches focused on whole-body This work was partly supported by the University of Ferrara FIRD 2022 project “Analisi di serie temporali da motion capture con tecniche di machine learning”. c The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 A. Dovier et al. (Eds.): AIxIA 2022, LNAI 13796, pp. 327–341, 2023. https://doi.org/10.1007/ E. M. Galdi et al. movements such as gait [22,23] and most of these analyzed two dimensional input as images or videos. Gohar [14] proposed the use of Inertial Measurement Units or IMU to identify individuals based on gait. The reason for this different approach, namely, a one-dimensional time series instead of two-dimensional image analysis, lies in the fact that “image-based gait analysis often fails to extract quality measurements of an individual’s motion patterns owing to problems related to variations in viewpoint, illumination (daylight), clothing, worn accessories, etc.” [14]. The latter study showed that individuals could be identified based on an analysis of their gait with considerable accuracy (approximately 75%). However, whole-body data may not be available for person identification in many applications. In addition, very little research has been devoted to investigating individual motor signatures during distal hand movement. The objective of our research is to investigate whether it is possible to identify a subject from the simplest possible movement (i.e., index finger extension and flexion) using a convolutional neural network (CNN). Although CNNs were initially proposed to classify images [17–19,32], the choice of a CNN for multiclass classification, even with time series tasks, has been shown to be effective [8,10,34]. The data we used derive from a recent neuroscience project on interpersonal behavioral coordination across multiple temporal scales [30]. This study was aimed at investigating whether the recurrent discontinuities that characterize the microstructure of human movement composition (tiny recorrective speedbumps in the range of 2–3 Hz which are often called sub-movements), are finely co-regulated when participants are required to synchronize at the macroscopic scale only. The experimental settings and their speed profile are shown in Fig. 1. The goal of the present work is very different and we thus adopt a radically different analytical approach. In fact, we here investigated whether these microscopic movement characteristics can be used for the identification of individual movement fingerprints. In our research, we first wanted to determine whether finger movements contain sufficient information to allow the neural network to recognize the individual who generated the movements. In addition, we intend to carry out an in-depth post-hoc interpretation [24] of our results to understand the movement characteristics that are more relevant for identification. 
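Although the exact architecture is detailed later in Sect. 2.4, it may help to fix ideas with a minimal 1D-CNN of the kind used here, written with tensorflow.keras. The stacked 3@64 convolutions and the pooling follow the later description; the dense width, the loss and the training settings below are assumptions rather than the authors' exact choices.

```python
import tensorflow as tf
from tensorflow.keras import layers, models, callbacks

def build_cnn(input_len: int, n_subjects: int = 60) -> tf.keras.Model:
    """Rough 1D-CNN for multi-class subject identification from one kinematic channel."""
    model = models.Sequential([
        layers.Input(shape=(input_len, 1)),      # one channel: x-axis finger kinematics
        layers.Conv1D(64, 3, activation="relu"),
        layers.Conv1D(64, 3, activation="relu"),
        layers.MaxPooling1D(2),
        layers.Conv1D(64, 3, activation="relu"),
        layers.Conv1D(64, 3, activation="relu"),
        layers.MaxPooling1D(2),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),    # hidden width is a guess, not stated in the paper
        layers.Dense(n_subjects, activation="softmax"),
    ])
    model.compile(optimizer="rmsprop",           # RMSprop is the optimizer mentioned in Sect. 2.4
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# early stopping is used to limit overfitting, as mentioned in Sect. 2.4
stop = callbacks.EarlyStopping(monitor="val_loss", patience=5, restore_best_weights=True)
```

Which portions of the input actually drive the decisions of such a model is exactly the interpretation question just raised.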
This latter goal is fully in line with the current interest in explainable artificial intelligence (XAI) [1,26]. The reasons that led to the emergence of this research field have to be found in the necessity of providing an explanation before making any decision informed by an aseptic algorithm. In medical applications, a reasonable explanation is sometimes more important than a correct diagnosis [2,11,15]. The same applies to the security domain [9], and considering the implications of recognizing the identity of an individual from minimal bodily movements, it is self-evident how important explainability should be. The European GDPR makes this point very clear, considering that there are more than one article (13–15, 22) that focus on the importance of correctly motivating the decisions and forecasts made by any automated decision-making process [28]. XAI in machine learning is a well known problem [4,5,7,13,31], which has received significant attention also in the field of Deep Learning [3,20,24]. Why Can Neural Networks Recognize Us by Our Finger Movements? Samek [27] provided an extensive description of this field and the tools developed for it. Simic [29] reviewed XAI methods for neural time-series classification, highlighting the class activation maps method or CAM [35], which was also used by Goodfellow [15]. Nevertheless, we explored a simpler path that, based on neurophysiological knowledge of the multiscale nature of human movements [30], grants easier and more straightforward interpretability. To investigate which movement features (i.e. temporal scales) are more relevant for the neural network, we decided to decompose the time series on the basis of their spectral content and we evaluated their impact of this and other key preprocessing choices on recognition accuracy. To the best of our knowledge, there is no evidence in the literature on a similar analytic approach to solving an analogous problem (i.e., person identification from minimal kinematic data). Instead, time-series analysis in the frequency domain is standard in speech technologies [12]. In particular, we show which frequencies produce the largest impact on the ability of a CNN to recognize individuals from the movement of their finger. The remainder of this paper is organized as follows. The experimental settings are described in Sect. 2. We show the most significant experimental results in Sect. 3. We conclude the paper (Sect. 4) with a discussion of the results and possible directions for future research. Experimental Settings In this section, after a brief introduction to the dataset we worked on (Sect. 2.1), we explain our application’s architecture (Sect. 2.2). Then, we will deepen into two main parts: first, we list the preprocessing techniques we chose (Sect. 2.3) as series segmentation, series filtering, and series normalization; in the second part, the neural network model (Sect. 2.4) is described. 2.1 The dataset we have been working on comes from previous research; all experimental instrumentation is described in depth in [30]. In total, 60 participants, forming 30 couples, performed a movement synchronization task. As shown in Fig. 1, participants were asked to keep their right index fingers pointing toward each other (without touching) and perform rhythmic flexion-extension movements around the metacarpophalangeal joint as synchronously as possible, either in-phase (toward the same direction) or anti-phase (toward opposite directions). 
Participants were instructed to maintain a slow movement pace (full movement cycle: single flexion/extension movements) by having them practice in a preliminary phase with a reference metronome set at 0.25 Hz. Each participant also performed the same finger movements alone (solo condition) with the only requirement of complying with the instructed pace (0.25 Hz). Finger movements were recorded using retroreflective markers tracked by a 3D real-time motion capture system (Vicon), providing continuous kinematic data sampled at 300 E. M. Galdi et al. Fig. 1. On the left, the experimental setup for data collection. From top to bottom, there are three settings for the solo, in-phase, and anti-phase tasks. The right panel shows the speed profile for the three different cases. Figure granted by the research of Tomassini et al. [30] Hz. Each trial had a duration of 2,5 min for a total of 45000 points for each time series. In addition, each of the three different experiments (solo, dyadic in-phase, dyadic antiphase) was repeated twice. This means that for each subject, we had six series, each made of 45000 points. As the first approximation, in this work, we considered that the finger movement was essentially only in one dimension along the x-axis. To augment our dataset, we segmented each series. As we will describe later in Sect. 2.3, we decided to test two different types of cutting: cutting at the maximum of the position data in order to have a complete movement (index extension and flexion), and cutting sub-series with fixed length with a specified gap smaller than the subseries dimension. Considering natural movement variability, the first choice further requires a resizing of the subseries because the convolutional network needs all inputs with the same dimension. The second method was used by Yang [33] to segment the time-series signal into a collection of shorter pieces. To investigate which movement features (i.e., temporal scales) are more relevant for the neural network, we decomposed the time series based on their spectral content. In particular, we applied different types of filters and studied their influence on the output accuracy of the CNN. Why Can Neural Networks Recognize Us by Our Finger Movements? Two different type of filtering operations have been investigated: moving average window and band-pass frequency filter. The two techniques are described in Sect. 2.3. Another important variable is related to the choice of applying or not a normalization to our signal, see Sect. 2.3. Finally, we decided to investigate if differentiating the signal would provide different information. Thus, we investigated the accuracy of our neural networks with different types of inputs, namely, position, speed, and acceleration data. 2.2 Application Architecture We decided to develop our AI program with Python. The software was essentially split in two main components: a first module assigned to the preprocessing, a second module for the neural network model. The first module as described in Sect. 2.1, has a composite structure with different possible choices to preprocess the data and different parameters to set. As shown in Fig. 2, it is possible to independently choose the series type, filter method, type of segmentation, and whether the series must be normalized. Depending on the choice made, it is necessary to specify different input parameters. Table 1 lists the different parameters required for each pre-processing choice. 
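As a rough illustration of these choices, the snippet below sketches how the series type (position, speed, acceleration) and the two segmentation strategies of Sect. 2.1 might be wired together. The 300 Hz sampling rate comes from the recording setup described above; everything else (window length, gap, target length, the use of np.interp instead of tslearn's TimeSeriesResampler) is a simplification for illustration, not the authors' code.

```python
import numpy as np
from scipy.signal import find_peaks

FS = 300  # sampling rate (Hz) of the Vicon kinematic series

def to_series_type(position: np.ndarray, kind: str) -> np.ndarray:
    """Return position, speed or acceleration; derivatives via finite differences."""
    if kind == "position":
        return position
    if kind == "speed":
        return np.gradient(position) * FS
    if kind == "acceleration":
        return np.gradient(np.gradient(position)) * FS ** 2
    raise ValueError(kind)

def cut_full_movements(x: np.ndarray, target_len: int = 512) -> np.ndarray:
    """Strategy 1: cut at position maxima (one extension+flexion per piece),
    then resample every piece to a common length so the CNN gets equal-sized inputs."""
    peaks, _ = find_peaks(x)
    pieces = [x[a:b] for a, b in zip(peaks[:-1], peaks[1:]) if b - a > 2]
    grid = np.linspace(0.0, 1.0, target_len)
    return np.stack([np.interp(grid, np.linspace(0.0, 1.0, len(p)), p) for p in pieces])

def cut_sliding_windows(x: np.ndarray, win: int = 1024, gap: int = 256) -> np.ndarray:
    """Strategy 2: fixed-length windows shifted by a gap smaller than the window."""
    starts = range(0, len(x) - win + 1, gap)
    return np.stack([x[s:s + win] for s in starts])
```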
Once the data preprocessing has been completed, the segmented series are sent Fig. 2. Modular structure: Series Type = Speed, Position, Acceleration; Filter Methods = MAW (Moving Average Window), Band Pass Frequency Filter; Cut Choice = Complete movement (extension + flexion), Fixed Dimension Windows sliding with known gap Table 1. Parameters needed in function of different choices Choice Window dimension Band pass Low and high frequency cut Extension-flexion Resized subseries dimension Sliding window Subseries dimesion and gap E. M. Galdi et al. to the neural network. The TensorFlow Keras library was used to generate the neural network model. A Convolutional Neural Network (CNN) as been built for multiclass classification. It’s known that CNN is the most suitable architecture for classifying data among more classes. Usually, it is applied to 2D input data (i.e., images) [16,17] but it also shows good results for 1D input (i.e., time-series data) [14]. 2.3 Preprocessing Techniques Series Cut. We considered two different methods to cut the series. As a first approach, we cut the time series corresponding to the maximum finger positions on the x-axis. In this way, each sub-series represents the complete movement, extension, and flexion of the index finger (see Fig. 3. This type of cutting is functionally defined, and each subseries contains information about the entire movement. However, this type of cut creates a subset of different lengths that cannot be directly used as an input to a CNN. This means that we had to resize the subseries using the TimeSeriesResampler component from the Python library tslearn.preprocessing. Figure 3 shows the result of resizing on different subseries. Fig. 3. On the left: the blue line represents the position in function of time while the orange line is the derived speed; with the red spot, we highlighted where the cut was operated, which corresponds to the maximum finger positions. On the right: the effect of resizing after cutting on functionally defined kinematic landmarks; the blue line is the original time-series and the orange one is the resized (Color figure online) The second option to cut the time series is to decide a priori the dimension of the subseries we want to obtain and the gap between the following two subseries, as shown in Fig. 4. We applied this method to investigate whether there was any hidden information in the time series that was not locked to the entire movement cycle (extension-flexion). However, we have identified two main issues with this method. First, the dataset increases exponentially with a significant increase in program execution times. The second point was that, whereas in the previous Why Can Neural Networks Recognize Us by Our Finger Movements? case, we had the whole set of sub-series and we could randomly choose the data for training and testing, in this case, to avoid overlapping of the data, we had to cut the main series into two parts: the first 75% for the training set and the last 25% for testing. Consequently, we cannot exclude the possibility that the data organized in such a way is not biased in some ways. Fig. 4. Time series example with highlighted the sliding windows and gaps to shows how the second segmentation strategy was done. Series Filtering. We also investigated the influence of the filtering time series. Therefore, we applied two different types of filters to our data. Moving Average Window (MAW), is a very basic tool commonly used in time series to smooth signals. 
For each point of the set, a fixed number of subsequent points was averaged, and the result replaced the starting point. Obviously, we obtain different signals depending on the dimension of the window in which we calculate the average (see Fig. 5); the larger it is, the smoother the signal will be. Fig. 5. Result of MAW with different windows dimension. We also analyzed the effects of different frequency filters on the accuracy of the CNN. Essentially, we created a band-pass filter where we could set low and high frequencies. We created a Butterworth filter using the predefined function E. M. Galdi et al. in Python’s scipy. signal library, with the order set at 4. Thus, it is possible to set low and high frequencies. If the low-frequency cut was set to 0, it was applied as a low-pass filter (Fig. 6). Fig. 6. Effect of different types of filters on the raw speed’s time series. Series Normalization. As a final step, we explored the effect of normalization on our dataset in terms of accuracy gains. In general, we know that when the data have a comparable scale, as in our case where the movement and its rhythm are predefined, the normalization does not improve the accuracy because it can destroy important information hidden in the dataset. Nevertheless, we performed experiments with and without normalization. To set the range of values within which we calculated the maximum and minimum of the normalization, we considered the entire series. 2.4 Neural Network Architecture We decided to apply a CNN, as suggested in the literature, for multi-class classification of time series data [8,10,34]. The structure of the neural network is described in the Table 2. We used RMSprop as the optimizer and performed early stopping to avoid overfitting. We compared the results obtained with this neural network with a similar network made with a pytorch instead of tensorflow.keras. The results of these two networks are very similar. Why Can Neural Networks Recognize Us by Our Finger Movements? Table 2. CDNN model. 1D convolution Kernel: 3@64 ActFunct: ReLU 1D convolution Kernel: 3@64 ActFunct: ReLU 1D max pooling Pool size: 2 1D convolution Kernel: 3@64 ActFunct: ReLU 1D convolution Kernel: 3@64 ActFunct: ReLU 1D max Pooling Pool size: 2 Dense Dense Softmax To evaluate the impact of different parameters on recognition performance, we examined the accuracy of our CNN. First, we show how accuracy is affected by the choice of the type of time series (see Table 3 below). Table 3. The accuracy for the different series types was calculated using a low-pass filter set at 50 Hz and cut based on the maximum finger position. Series’s type w Norm w/o Norm Position Speed Acceleration It’s evident the improvement of the accuracy for the acceleration and position once the normalization is applied. Nevertheless, the accuracy of the speed data was still the highest; therefore, it was used as a reference for the following experiments. The experiment comparing the two types of series segmentation doesn’t show significant differences in terms of accuracy, while the computational time was drastically longer when the data was cut with sliding windows. This convinced us to choose the cut at the maximum finger position as the standard segmentation method for our series. E. M. Galdi et al. MAW, as shown in Fig. 7, has a maximum performance at 0 or when no moving average has actually been applied. By increasing the dimension of its window, its accuracy decreased until it reached 30% with a 100 points window. 
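For reference, the two filtering operations whose impact is discussed here, the moving average window and the order-4 Butterworth filter of Sect. 2.3, can be reproduced along the following lines. The sampling rate of 300 Hz and the filter order come from the text; the window sizes and cutoff values in the example are placeholders, not the grid actually explored.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 300  # Hz, sampling rate of the kinematic series

def moving_average(x: np.ndarray, window: int) -> np.ndarray:
    """Smooth the series by averaging `window` consecutive samples (MAW)."""
    return np.convolve(x, np.ones(window) / window, mode="same")

def bandpass(x: np.ndarray, low: float, high: float, order: int = 4) -> np.ndarray:
    """Order-4 Butterworth band-pass; low=0 degenerates to a low-pass. Zero-phase filtering."""
    if low <= 0:
        b, a = butter(order, high, btype="low", fs=FS)
    else:
        b, a = butter(order, [low, high], btype="band", fs=FS)
    return filtfilt(b, a, x)

# example: isolate the 2-4 Hz sub-movement band of a toy signal
t = np.arange(0, 10, 1 / FS)
signal = np.sin(2 * np.pi * 0.25 * t) + 0.1 * np.sin(2 * np.pi * 3 * t)
sub_movements = bandpass(signal, 2.0, 4.0)
smoothed = moving_average(signal, window=100)
```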
Remember that when the MAW windows include 100 points, only the main shape of the movement is visible (see Fig. 5). Fig. 7. The figure shows how the accuracy changed as a function of the number of points included in the moving average. Figure 8 report how the accuracy changes with different Frequency Filters. As we can clearly see the fundamental frequency (i.e., 0.25 Hz or the instructed finger flexion-extension rhythm) is the most meaningful frequency and if we remove it from our signal no recognition can be done. The fact that each individual is characterized by their own preferred (self-paced) tapping tempo is well known in neurophysiology literature [25]. In addition, our experiments show that it is clear that this frequency alone is not sufficient, and the accuracy increases as we add more frequencies. Interestingly, in neurophysiology, it is well known [6] that in movement, albeit with the differences that may exist between moving a finger or leg, frequencies above 15 Hz begin to be attenuated. From to 20–30 onwards, there is no physiological relevance anymore. Instead, in our experiments, we still see further increases when adding frequencies above 30 Hz, which means that the network is still learning something. Future in-depth analysis will have to investigate what our neural network is learning in the range of 30 Hz to 70 Hz. More importantly, at 20 Hz, the accuracy is already 65%, which is a very good performance for classification among the 60 classes. If look at Fig. 9 we can see the results obtained by applying a frequency domain filter and a simple MAW. We can see that we have similar results with a 1 Hz band low-pass filter and an MAW with a window of 100 points (approximately 30% accuracy). Instead, if we did not apply any MAW or filter, or we used a low-pass filter with a band higher than 60 Hz, we found a maximum accuracy of approximately 75%. Because one of our initial goals was to investigate the role of sub-movements in defining individual motor signatures, we focused our attention Why Can Neural Networks Recognize Us by Our Finger Movements? Fig. 8. In the figure, it is reported how the accuracy changes for different band pass filters: each curve corresponds to a filter with a specific low-frequency cut (0 Hz, 2 Hz, 4 Hz, 6 Hz, 8 Hz), while in the x-axis, the high-frequency cut is reported. on the 2–4 Hz frequencies in Fig. 10. Although we did not notice any significant variation in the accuracy for the bandpass filter with a low cut at 2 Hz and a high cut at 4 Hz, we could clearly observe a significant slope increase for the low-pass filter around these frequencies. As explained earlier, the fundamental frequency (0.25 Hz) probably contained most of the information (less than 30% accuracy). However, the performance is far from its plateau; rather, the model largely improves by adding the sub-movement range. Future research should further investigate this aspect. The proposed work demonstrates that it is possible to recognize subjects by starting from their index finger movements. Interestingly, we achieved the same accuracy obtained by Gohar [14] even if he worked with human gait instead of finger movement. In addition we found that the fundamental frequency is undoubtedly the pivotal aspect in the recognition of subjects, but not alone. Higher frequencies contribute significantly to an increase in accuracy, but only in the presence of fundamental frequency. 
For instance, we have not yet explained the gap we have from the 30% accuracy obtained with the fundamental frequency only, and 75% obtained with a low-pass filter with a bandwidth of 60–70 Hz (Fig. 11). At this point of the research, it is not yet fully demonstrated whether submovements play a central role in the recognition process, but we have some clues. The slope of the accuracy curve may provide some hints for future investigations. It is possible to design more targeted experiments to investigate a range of frequencies with greater granularity. However, this work had to first E. M. Galdi et al. Fig. 9. Comparison between results obtained by applying the frequency-domain filter and MAW. Fig. 10. The figure shows the accuracy of the band-pass filter with different lowfrequency cut (0 Hz, 2 Hz, 4 Hz, 6 Hz, 8 Hz). Fig. 11. The figure shows that the accuracy increases as the frequencies change. In particular, the slope in the accuracy from 0 to 4 Hz is shown in red, and that from 6 to 20 Hz is shown in blue. (Color figure online) test several design choices, such as data type, segmentation, normalization, or filtering strategy, which, as we demonstrated here, have a dramatic impact on model performance. After all these tests, we can say that our approach, with few Why Can Neural Networks Recognize Us by Our Finger Movements? key design choices, has an interesting potential in recognizing individual motor signatures. In our future work, we will build from this work to better investigate the role of the different time scales of movement composition to differentiate and explore the interaction between macro- and microscopic movement features in defining individual movement fingerprints. Moreover, we do not exclude the possibility of answering this question by applying more structured tools such as class activation maps [35], or a clustering method based on PLif, as suggested by Li [21] in order to determine which other signal characteristics affect CNN classification the most. References 1. Adadi, A., Berrada, M.: Peeking inside the black-box: a survey on explainable artificial intelligence (XAI). IEEE Access 6, 52138–52160 (2018). https://doi.org/ 10.1109/ ACCESS.2018.2870052 2. Ahmad, M.A., Eckert, C., Teredesai, A.: Interpretable machine learning in healthcare. In: Proceedings of the 2018 ACM International Conference on Bioinformatics, Computational Biology, and Health Informatics, BCB 2018, pp. 559–560. Association for Computing Machinery, New York (2018). https://doi.org/10.1145/ 3233547.3233667 3. Assaf, R., Schumann, A.: Explainable deep neural networks for multivariate time series predictions. In: Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, pp. 6488–6490. International Joint Conferences on Artificial Intelligence Organization, Macao (2019). https://doi.org/10.24963/ ijcai.2019/932 4. Baehrens, D., Schroeter, T., Harmeling, S., Kawanabe, M., Hansen, K.: How to explain individual classification decisions, p. 29 (2010) 5. Burkart, N., Huber, M.F.: A survey on the explainability of supervised machine learning. J. Artif. Intell. Res. 70, 245–317 (2021). https://doi.org/10.1613/ jair.1. 12228 6. Burke, R.E.: Motor units: anatomy, physiology, and functional organization, pp. 345–422. Wiley (2011). https://doi.org/10.1002/cphy.cp010210, https:// onlinelibrary.wiley.com/doi/abs /10.1002/cphy.cp010210 7. Burrell, J.: How the machine ‘thinks’: understanding opacity in machine learning algorithms. Big Data Soc. 3(1), 205395171562251 (2016). 
https://doi.org/10.1177/ 2053951715622512 8. Cui, Z., Chen, W., Chen, Y.: Multi-scale convolutional neural networks for time series classification (2016) 9. Ernst, C.: Artificial intelligence and autonomy: self-determination in the age of automated systems. In: Wischmeyer, T., Rademacher, T. (eds.) Regulating Artificial Intelligence, pp. 53–73. Springer, Cham (2020). https://doi.org/10.1007/9783-030-32361-5 3 10. Ismail Fawaz, H., Forestier, G., Weber, J., Idoumghar, L., Muller, P.-A.: Deep learning for time series classification: a review. Data Min. Knowl. Disc. 33(4), 917–963 (2019). https://doi.org/10.1007/ s10618-019-00619-1 11. Foster, K.R., Koprowski, R., Skufca, J.D.: Machine learning, medical diagnosis, and biomedical engineering research - commentary. Biomed. Eng. Online 13(1), 94 (2014). https:// E. M. Galdi et al. 12. Gee, A.H., Garcia-Olano, D., Ghosh, J., Paydarfar, D.: Explaining deep classification of time-series data with learned prototypes, p. 8 (2019) 13. Gilpin, L.H., Bau, D., Yuan, B.Z., Bajwa, A., Specter, M., Kagal, L.: Explaining explanations: an overview of interpretability of machine learning. In: 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA), pp. 80–89 (2018). https://doi.org/10.1109/DSAA.2018.00018 14. Gohar, I., et al.: Person re-identification using deep modeling of temporally correlated inertial motion patterns. Sensors 20(3), 949 (2020). https://doi.org/10.3390/ s20030949 15. Goodfellow, S.D., Goodwin, A., Greer, R., Laussen, P.C., Mazwi, M., Eytan, D.: Towards understanding ECG rhythm classification using convolutional neural networks and attention mappings, p. 18 (2018) 16. Heenaye-Mamode Khan, M., et al.: Multi- class classification of breast cancer abnormalities using deep convolutional neural network (CNN). PLOS One 16 (8), 1–15 (2021). https://doi.org/10.1371/journal.pone.0256500 17. Hu, Y., Sokolova, M.: Convolutional neural networks in multi-class classification of medical data, p. 13 (2020) 18. Kim, Y.: Convolutional neural networks for sentence classification (2014) 19. LeCun, Y., et al.: Backpropagation applied to handwritten zip code recognition. Neural Comput. 1(4), 541–551 (1989). https:// doi.org/10.1162/neco.1989.1.4.541 ¨ 20. Leventi-Peetz, A.M., Ostreich, T.: Deep learning reproducibility and explainable AI (XAI) (2022) 21. Li, L., Prakash, B.A., Faloutsos, C.: Parsimonious linear fingerprinting for time series. Proc. VLDB Endow. 3(1–2), 385–396 (2010). https://doi.org/10.14778/ 1920841.1920893 22. Little, J.J., Boyd, J.E.: Recognizing people by their gait: the shape of motion, p. 33 (1998) 23. Park, G., Lee, K.M., Koo, S.: Uniqueness of gait kinematics in a cohort study. Sci. Rep. 11(1), 15248 (2021). https://doi.org/10.1038/s41598-021-94815-z 24. Preece, A.: Asking ‘Why’ in AI: explainability of intelligent systems – perspectives and challenges. Intell. Syst. Account. Financ. Manage. 25(2), 63–72 (2018). https://doi.org/10.1002/isaf.1422 25. Repp, B.H., Su, Y.-H.: Sensorimotor synchronization: a review of recent research (2006–2012). Psychon. Bull. Rev. 20(3), 403–452 (2013). https://doi.org/10.3758/ s13423-012-0371-2 26. Ribeiro, M.T., Singh, S., Guestrin, C.: “Why should i trust you?”: explaining the predictions of any classifier (2016) 27. Samek, W., Montavon, G., Lapuschkin, S., Anders, C.J., M¨ uller, K.R.: Explaining deep neural networks and beyond: a review of methods and applications. Proc. IEEE 109(3), 247–278 (2021). https://doi.org/10.1109/JPROC.2021.3060483 28. 
Selbst, A.D., Powles, J.: Meaningful information and the right to explanation. Int. Data Priv. Law 7(4), 233–242 (2017). https://doi.org/10.1093/idpl/ipx022 ˇ c, I., Sabol, V., Veas, E.: XAI methods for neural time series classification: a 29. Simi´ brief review (2021) 30. Tomassini, A., et al.: Interpersonal synchronization of movement intermittency. iScience 25(4), 104096 (2022). https://doi.org/10.1016/j.isci.2022.104096 31. Vale, D., El-Sharif, A., Ali, M.: Explainable artificial intelligence (XAI) post-hoc explainability methods: risks and limitations in non-discrimination law. AI Ethics (2022). https://doi.org/10.1007/s43681-022-00142-y Why Can Neural Networks Recognize Us by Our Finger Movements? 32. Woan Ching, S.L., et al.: Multiclass convolution neural network for classification of COVID-19 CT images. Comput. Intell. Neurosci. 2022, 1–15 (2022). https:// doi.org/10.1155/2022/9167707 33. Yang, J.B., Nguyen, M.N., San, P.P., Li, X.L., Krishnaswamy, S.: Deep convolutional neural networks on multichannel time series for human activity recognition, p. 7 (2015) 34. Zheng, Y., Liu, Q., Chen, E., Ge, Y., Zhao, J.L.: Time series classification using multi-channels deep convolutional neural networks. In: Li, F., Li, G., Hwang, S., Yao, B., Zhang, Z. (eds.) WAIM 2014. LNCS, vol. 8485, pp. 298–310. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-08010-9 33 35. Zhou, B., Khosla, A., Lapedriza, A., Oliva, A., Torralba, A.: Learning deep features for discriminative localization. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2921–2929 (2016). https://doi.org/10.1109/ CVPR.2016.319 Labelled Sequent Calculi for Conditional Logics: Conditional Excluded Middle and Conditional Modus Ponens Finally Together Nicola Olivetti1 , Nikola Panic2 , and Gian Luca Pozzato2(B) 1 Aix Marseille Université, CNRS, ENSAM, Université de Toulon, LSIS UMR 7296, Marseille, France [emailprotected] 2 Dipartimento di Informatica, Università di Torino, Turin, Italy [emailprotected], [emailprotected] Abstract. We introduce labelled sequent calculi for Conditional Logics with a selection function semantics. Conditional Logics are a sort of generalization of multimodal logics where modalities are labelled by formulas of the same language. Recently, they received a renewed attention and have found several applications in knowledge representation and artificial intelligence. In a previous work, we have considered the basic system CK and extensions with well known conditions ID, MP, CS and CEM, with the exception of those admitting both conditions CEM and MP, obtaining labelled sequent calculi called SeqS. Here we provide calculi for the whole cube of the extensions of CK generated by the above axioms, including also those with both CEM and MP: the basic idea is that of replacing the rule dealing with CEM in SeqS, which performs a label substitutions in both its premises, by a new one that avoids such a substitution and adopts a conditional formula on the right-hand side of a sequent as its principal formula. We have also implemented the proposed calculi in Prolog following the “lean” methodology, then we have tested the performances of the new prover, called CondLean2022, and compared them with those of CondLean, an implementation of SeqS, on the common systems. The performances of CondLean2022 are promising and seem to be better than those of CondLean, witnessing that the proposed calculi also provide a more efficient theorem prover for Conditional Logics. 
1 Introduction Conditional Logics have a long history, starting with the seminal works by [5, 17, 18, 24], and [4] in the seventies. Recently, Conditional Logics have found a renewed interest in several fields of artificial intelligence and knowledge representation, from hypothetical reasoning to belief revision, from diagnosis to nonmonotonic reasoning and planning [6, 8–16, 23]. Conditional Logics are extensions of classical logic by a binary operator ⇒, called conditional operator, used in order to express conditional formulas of the form A ⇒ B. Similarly to modal logics, the semantics of Conditional Logics can be defined in terms of possible world structures. In this respect, Conditional Logics can be seen as a generalization of modal logics (or a type of multi-modal logic) where the conditional operator is a sort of modality indexed by a formula of the same language. However, c The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 A. Dovier et al. (Eds.): AIxIA 2022, LNAI 13796, pp. 345–357, 2023. https://doi.org/10.1007/978-3-031-27181-6_24 N. Olivetti et al. as a difference with modal logics, the lack of a universally accepted semantics led to a partial underdevelopment of proof methods and theorem provers for these logics. An effort in the direction of filling this gap is provided in [19]. The semantics considered in this work is the selection function semantics introduced by Nute in [18], where truth values are assigned to formulas depending on a world. Intuitively, the selection function f selects, for a world w and a formula A, the set of worlds f (w, A) which are “most-similar to w” given the information A. In normal conditional logics, f depends on the set of worlds satisfying A rather than on A itself, so that f (w, A) = f (w, A ) whenever A and A are true in the same worlds. A conditional formula A ⇒ B is true in w whenever B is true in every world selected by f for A and w. With the selection function semantics at hand, CK is the fundamental system and it has the same role as the system K in modal logic. Formulas valid in CK are exactly those ones that are valid in every selection function model. Extensions are then obtained by imposing restrictions on the selection function. In [19], a labelled sequent calculus for CK and some standard extensions with conditions ID (conditional identity), MP (conditional modus ponens), CEM (conditional third excluded middle), and CS (conditional strong centering) are considered, as well as most of the combinations of them. The proposed calculi, called SeqS, are modular and, in some cases, optimal. The authors also introduce CondLean, a theorem prover implementing the calculi SeqS in Prolog. In [19], however, all the systems including both the axioms CEM and MP are neglected: the reason is that the proof of cut elimination, needed in order to prove the completeness of the calculi, does not work when such axioms are considered together. In this paper we provide labelled sequent calculi, that we call SeqS22, for the whole cube of the extensions of CK generated by the above mentioned axioms, including those with both CEM and MP, filling the existing gap. The basic idea is that of replacing the rule dealing with CEM in SeqS, which performs a label substitution in both its premises, by a new one that avoids such a substitution and adopts a conditional formula on the right-hand side of a sequent as its principal formula. 
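Anticipating Sect. 3, the new right-hand rule for CEM can be written in standard sequent notation roughly as follows; this is a reconstruction based on the cases used in the cut-elimination and soundness proofs of Sect. 3, with the side conditions on the label y as described there, rather than a verbatim copy of Fig. 1.

```latex
% Reconstruction of the new (CEM) rule of SeqS22 (amsmath assumed).
% Read bottom-up: y is the label already introduced by an application of (=> R).
\[
\dfrac{\Gamma \vdash \Delta,\ x\!:\!A \Rightarrow B,\ x \xrightarrow{A} y
       \qquad
       \Gamma \vdash \Delta,\ x\!:\!A \Rightarrow B,\ y\!:\!B}
      {\Gamma \vdash \Delta,\ x\!:\!A \Rightarrow B}\ (\mathit{CEM})
\]
```

Read bottom-up, the rule reuses the label y introduced by a previous application of (⇒ R) instead of substituting labels, and this is what allows the cut-elimination argument to go through in the presence of MP.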
We show that one can derive a decision procedure from the cut-free calculi, providing a constructive proof of decidability of the logics considered. By estimating the size of the finite derivations of a given sequent, we also obtain a polynomial space complexity bound for these logics. Furthermore, we sketch an implementation of the proposed calculi SeqS22: the program, called CondLean2022, is implemented in Prolog and is inspired by the "lean" methodology, whose aim is to write short programs and exploit the power of Prolog's engine as much as possible: in this respect, every clause of a single predicate, called prove, implements an axiom or rule of the calculi, and proof search is provided for free by the mere depth-first search mechanism of Prolog, without any additional ad hoc mechanism. We have tested the performances of CondLean2022 and compared them with those of CondLean, obtaining encouraging results that allow us to conclude that the new rule for CEM, on the one hand, makes it possible to complete the proof of cut elimination also in systems with MP and, on the other hand, by avoiding label substitution, leads to a significant improvement in the performance of the prover.

2 Conditional Logics with Selection Function Semantics

In this section we briefly recall propositional Conditional Logics. A propositional conditional language L contains: (i) a set of propositional variables ATM; (ii) the constants ⊥ and ⊤; (iii) a set of connectives ¬ (unary), ∧, ∨, →, ⇒ (binary). Formulas of L include formulas of classical logic ¬A, A ∧ B, A ∨ B, A → B, to which we add conditional formulas of the form A ⇒ B. We define the selection function semantics as follows: given a non-empty set of possible worlds W, the selection function f selects, for a world w and a formula A, the set of worlds of W which are closer to w given the information A. A conditional formula A ⇒ B holds in a world w if the formula B holds in all the worlds selected by f for w and A.

Definition 1 (Selection function semantics). A model is a triple M = ⟨W, f, [ ]⟩ where: i) W is a non-empty set of worlds; ii) f is the selection function f : W × 2^W → 2^W; iii) [ ] is the evaluation function, which assigns to an atom P ∈ ATM the set of worlds where P is true, and is extended to the other formulas in the usual way for classical connectives, whereas for conditional formulas we have [A ⇒ B] = {w ∈ W | f(w, [A]) ⊆ [B]}.

It is worth noticing that we have defined f taking [A] rather than A (i.e. f(w, [A]) rather than f(w, A)) as an argument; this is equivalent to defining f on formulas, i.e. f(w, A), but imposing that if [A] = [A′] in the model, then f(w, A) = f(w, A′). This condition is called normality. The semantics above characterizes the basic conditional logic CK. An axiomatization of this system is given by:

– any axiomatization of classical propositional calculus;
– (Modus Ponens): from A and A → B, infer B;
– (RCEA): from A ↔ B, infer (A ⇒ C) ↔ (B ⇒ C);
– (RCK): from (A1 ∧ · · · ∧ An) → B, infer (C ⇒ A1 ∧ · · · ∧ C ⇒ An) → (C ⇒ B).

As for modal logics, we can consider extensions of CK by assuming further properties on the selection function. We consider the following ones:

Logic  Axiom                      Model condition
ID     A ⇒ A                      f(w, [A]) ⊆ [A]
CS     (A ∧ B) → (A ⇒ B)          w ∈ [A] → f(w, [A]) ⊆ {w}
CEM    (A ⇒ B) ∨ (A ⇒ ¬B)         |f(w, [A])| ≤ 1
MP     (A ⇒ B) → (A → B)          w ∈ [A] → w ∈ f(w, [A])

The above axiomatizations are complete with respect to the respective semantics [18]. It is worth noticing that:

Proposition 1.
In systems with both axioms (CEM) and (MP), axiom (CS) is derivable. Proof. For (CEM) we have that | f (w, [A]) |≤ 1. For (MP), we have that, if w ∈ [A], then w ∈ f (w, [A]). Therefore, it follows that if w ∈ [A], then f (w, [A]) = {w}, satisfying the (CS) condition. N. Olivetti et al. 3 SeqS22: A Sequent Calculus for Conditional Logics In this section we introduce SeqS22, a family of labelled sequent calculi for the conditional systems under consideration. The calculi are modular and they are able to deal with the basic system CK as well as with the whole cube of extensions with axioms ID, CS, CEM and MP. Given Proposition 1, it is worth noticing that, concerning systems admitting the axiom (CS), the calculi SeqS22 offer two alternative calculi: on the one hand, the calculus obtained by adding the suitable rule (CS), as in SeqS [19], on the other hand, the calculus obtained by including the rules for (MP) and (CEM) and omitting the rule for (CS), thus avoiding the mechanism of label substitution required by such a rule. The calculi make use of labels to represent possible worlds. We consider a language L and a denumerable alphabet of labels A, whose elements are denoted by x, y, z, .... There are two kinds of labelled formulas: – world formulas, denoted by x: A, where x ∈ A and A ∈ L, used to represent that A holds in a world x; A – transition formulas, denoted by x −→ y, where x, y ∈ A and A ∈ L. A transition A formula x −→ y represents that y ∈ f (x, [A]). A sequent is a pair Γ, Δ , usually denoted with Γ Δ, where Γ and Δ are multisets of labelled formulas. The intuitive meaning of Γ Δ is: every model that satisfies all labelled formulas of Γ in the respective worlds (specified by the labels) satisfies at least one of the labelled formulas of Δ (in those worlds). Formally, given a model M = W, f, [ ] for L, and a label alphabet A, we consider any mapping I : A → W. Let F be a labelled formula, we define M |=I F as follows: – M |=I x: A if and only if I(x) ∈ [A] A – M |=I x −→ y if and only if I(y) ∈ f (I(x), [A]) We say that Γ Δ is valid in M if for every mapping I : A → W, if M |=I F for every F ∈ Γ , then M |=I G for some G ∈ Δ. We say that Γ Δ is valid in a system (CK or any extension of it) if it is valid in every M satisfying the specific conditions for that system. We say that a sequent Γ Δ is derivable if it admits a derivation in SeqS22, i.e. a proof tree, obtained by applying backwards the rules of the calculi, having Γ Δ as a root and whose leaves are all instances of (AX). As usual, the idea is as follows: in order to prove that a formula F is valid in a conditional logic, then one has to check whether the sequent x : F is derivable in SeqS22, i.e. if we can obtain a proof tree by applying backwards the rules, starting from the root x : F . As a difference with the sequent calculi SeqS introduced in [19], the calculi SeqS22 follows the basic idea of the calculus introduced in [22], which in this paper is extended in order to deal also with MP. 
Such an idea is that of dealing with the CEM condition by means of a second rule having a conditional A ⇒ B on the right-hand side of a sequent as a principal formula, rather than the one in SeqS, where the condition on the cardinality (at most 1) of the set of worlds selected by the selection function is captured A by means of a label substitution mechanism: roughly speaking, given x −→ y, in order Labelled Sequent Calculi for Conditional Logics to prove x −→ z, we replace both y and z with a new label u, following the observation that they represent the same world. The “old” rule in SeqS is as follows: A Γ, x −→ y Δ, x −→ z (Γ, x −→ y Δ)[y/u, z/u] A (CEM ) Γ, x −→ y Δ where Σ[x/u] is used to denote the multiset obtained from Σ by replacing, as mentioned here above, the label x by u wherever it occurs, and where it holds that y = z and u ∈ Γ, Δ. The novel rule introduced in SeqS22 is as follows: A Γ Δ, x : A ⇒ B, x −→ y Γ Δ, x : A ⇒ B, y : B Γ Δ, x : A ⇒ B Intuitively, given a conditional formula x : A ⇒ B on the right-hand side of a sequent, the calculi apply the rule (⇒ R) only one time, introducing a new label y representing the single world selected by the selection function, then the new rule (CEM) makes use of such a label y for all other conditional formulas of the form x : A ⇒ B . As an example, Fig. 2 shows a derivation of an instance of the characterizing axiom (CEM). The calculi SeqS22 are shown in Fig. 1. They satisfy basic structural properties, namely height-preserving admissibility of weakening, height-preserving invertibility of the rules (with the exception of (EQ)), height-preserving admissibility of contraction. These are needed in order to show that the following cut rule is admissible: Theorem 1. The cut rule: Γ Δ, F F, Γ Δ Γ Δ where F is any labelled formula, is admissible in SeqS22, i.e. if Γ Δ, F and F, Γ Δ are derivable, so is Γ Δ. Proof. As usual, the proof proceeds by a double induction over the complexity of the cut formula and the sum of the heights of the derivations of the two premises of cut, in the sense that we replace one cut by one or several cuts on formulas of smaller complexity, or on sequents derived by shorter derivations. We show two of the most interesting cases involving the novel rule (CEM). Let us first consider the case involving the rules (CEM) and (MP), those rules that caused the failure of the proof of admissibility of cut in [19]. We consider the case in which the cut formula is principal in the application of the (MP) rule only, as follows: A (1) x −→ x, Γ Δ, x : A ⇒ B, x −→ y A Γ Δ, x : A ⇒ B, x −→ x, x : A A (3) Γ Δ, x : A ⇒ B, x −→ x (2) x −→ x, Γ Δ, x : A ⇒ B, y : B A x −→ x, Γ Δ, x : A ⇒ B (4) Γ Δ, x : A ⇒ B N. Olivetti et al. Fig. 1. Rules of sequent calculi SeqS22 Fig. 2. A derivation of CEM in SeqS22. Since weakening is height-preserving admissible, since (3) is derivable, so are A A A (3 ) Γ Δ, x : A ⇒ B, x −→ x, x −→ y and (3 ) Γ Δ, x : A ⇒ B, x −→ x, y : B with derivations of no greater heights. We can then apply the inductive hypothesis on the height of the derivations to (3 ) and (1), obtaining a derivation of A (5) Γ Δ, x : A ⇒ B, x −→ y, as well as to (3 ) and (2), obtaining a derivation of (6) Γ Δ, x : A ⇒ B, y : B. We conclude that (4) can be derived by an application of (CEM) to (5) and (6). Labelled Sequent Calculi for Conditional Logics Let us now take into account the case in which the cut formula is the principal formulas in both the premises of (cut), and the rules applied to it are (CEM) and (⇒ L). 
The situation is as follows: A (7) Γ Δ, x : A ⇒ B, x −→ y (8) Γ Δ, x : A ⇒ B, y : B (11) Γ Δ, x : A ⇒ B (CEM) Γ Δ (9) Γ, x : A ⇒ B Δ, x −→ y (10) Γ, x : A ⇒ B, y : B Δ (12) Γ, x : A ⇒ B Δ (⇒ L) Since weakening is height-preserving admissible, we can obtain a proof (with a derivation of at most the same height of (11)) for (11 ) Γ Δ, x : A ⇒ B, y : B. By inductive hypothesis on the height of the derivations, we can cut (10) and (11 ), obtaining a derivation of (13) Γ, y : B Δ. Since weakening is height-preserving admissible, we can obtain a proof (with a derivation of at most the same height of (12)) for (12 ) Γ, x : A ⇒ B Δ, y : B. By inductive hypothesis on the height of the derivations, we can cut (8) and (12 ), obtaining a derivation of (14) Γ Δ, y : B. We can then apply the inductive hypothesis on the complexity of the cut formula to cut (13) and (14), and we are done with a derivation of Γ Δ. Due to space limitations, the other cases are omitted and left to the reader. Theorem 2 (Soundness and completeness). Given a conditional formula F , it is valid in a conditional logic if and only if it is derivable in the corresponding calculus of SeqS22, that is to say |= F if and only if x : F is derivable in SeqS22. Proof. (Soundness) We have to prove that, if a sequent Γ Δ is derivable, then the sequent is valid. This can be done by induction on the height of the derivation of Γ Δ. The basic cases are those corresponding to derivations of height 0, that is to say instances of (AX). It is easy to see that, in all these cases, Γ Δ is a valid sequent. As an example, consider Γ, x : P Δ, x : P : consider every model M and every mapping I satisfying all formulas in the left-hand side of the sequent, then also x : P . This means that I(x) ∈ [P ], but then we have that M satisfies via I at least a formula in the right-hand side of the sequent, the same x : P . For the inductive step, we proceed by considering each rule of the calculi SeqS22 in order to check that, if the premise(s) is (are) valid sequent(s), to which we can apply the inductive hypothesis, so is the conclusion. To save space, we only present the cases of (MP) and of the new rule (CEM), the other ones are left to the reader. Let us start with (MP) and a derivation ended as follows: A (1) Γ Δ, x −→ x, x : A (MP) A (2) Γ Δ, x −→ x By inductive hypothesis, the sequent (1) is valid. By absurd, suppose that (2) is not: this means that there exists a model M and a mapping I satisfying all formulas in Γ but falsifying all formulas in the right-hand side of the sequent, namely all formulas in Δ A and x −→ x. Since (1) is valid, every model with any mapping satisfying all formulas in Γ satisfies also at least a formula in the right-hand side of the sequent: since M falsifies A all formulas in Δ and (∗) x −→ x via I, it must be that M |=I x : A, that is to say the N. Olivetti et al. world w represented by I(x) is an A-world, i.e. w ∈ [A]. By the condition (MP), this implies that also w ∈ f (w, [A]), however this would mean that I(x) ∈ f (I(x), [A]), A i.e. M |=I x −→ x, against (∗). Let us now consider the rule (CEM) and a proof ended as: A (3) Γ Δ, x : A ⇒ B, x −→ y (4) Γ Δ, x : A ⇒ B, y : B (5) Γ Δ, x : A ⇒ B By inductive hypothesis, both (3) and (4) are valid. Again by absurd, suppose (5) is not, that is to say there exists a model M and a mapping I satisfying all formulas in Γ but falsifying all formulas in Δ as well as x : A ⇒ B. 
Since (3) is valid, since M and I falsify all formulas in Δ and x : A ⇒ B, necessarily we have that M |=I A x −→ y, that is to say I (y) ∈ f (I (x), [A]). By the (CEM) semantic condition, it follows that (∗∗) f (I (x), [A]) = {I(y)}. Analogously, by the validity of (4) we have that M |=I y : B. If M |=I x : A ⇒ B in (5), there exists a world w such that w ∈ f (I (x), [A]) and w ∈ [B], however, since (∗∗), we have that I (y) = w, against the validity of (4), and we are done. (Completeness) The completeness is an easy consequence of the admissibility of the cut rule (Theorem 1). We show that if a formula F is valid in a conditional logic, then x : F is derivable in SeqS22. We proceed by induction on the complexity of the formulas, therefore we show that the axioms are derivable and that the set of derivable formulas is closed under (Modus Ponens), (RCEA), and (RCK). A derivation of axioms (ID), (CS) and (MP) can be obtained as in SeqS [19]. A derivation of (CEM) is provided in Fig. 2. For (Modus Ponens), suppose that x : A → B and x : A are derivable. We easily have that x : A → B, x : A x : B is derivable too by applying (→ L). Since cut is admissible by Theorem 1, by two cuts we obtain x : B (weakenings are omitted to increase readability): x : A → B, x : A x : B x:A x:B For (RCEA), we have to show that if A ↔ B is derivable, then also (A ⇒ C) ↔ (B ⇒ C) is so. The formula A ↔ B is an abbreviation for (A → B) ∧ (B → A). Suppose that x : (A → B) ∧ (B → A) is derivable, then also x : A x : B and x : B x : A are derivable since rules are height-preserving invertible. We can derive x : A ⇒ C x : B ⇒ C as follows: x:Ax:B x:B x:A x −→ y, x : A ⇒ C x : B ⇒ C, y : C, x −→ y y : C, x −→ y, x : A ⇒ C x : B ⇒ C, y : C (⇒ L) x −→ y, x : A ⇒ C x : B ⇒ C, y : C (⇒ R) x:A⇒C x:B ⇒C The other half is symmetric. For (RCK), suppose that x : B1 ∧ B2 · · · ∧ Bn → C is derivable, by the height-preserving invertibility of the rules also y : B1 , . . . , y : Bn Labelled Sequent Calculi for Conditional Logics y : C is derivable, then so is (∗) x : A ⇒ B1 , x : A ⇒ B2 , . . . , x : A ⇒ Bn , y : B1 , . . . , y : Bn x : A ⇒ C, y : C by admissibility of weakening. We have: A x −→ y x −→ y (∗) x : A ⇒ B1 , . . . , y : B1 , . . . , y : Bn x : A ⇒ C, y : C (⇒ L) x −→ y, x : A ⇒ B1 , . . . , x : A ⇒ Bn , y : B1 , . . . , y : Bn−1 x : A ⇒ C, y : C . . . A x −→ y x −→ y x −→ y, x : A ⇒ B1 , . . . , x : A ⇒ Bn , y : B1 x : A ⇒ C, y : C (⇒ L) x −→ y, x : A ⇒ B1 , . . . , x : A ⇒ Bn x : A ⇒ C, y : C (⇒ R) x : A ⇒ B1 , . . . , x : A ⇒ Bn x : A ⇒ C The presence of labels and of the rules (⇒ L), (⇒ R), (ID), (MP), (CEM), and (CS), which increase the complexity of the sequent in a backward proof search, is a potential cause of a non-terminating proof search. However, with a similar argument to the one proposed in [19], we can define a procedure that can apply such rules in a controlled way and introducing a finite number of labels, ensuring termination. Intuitively, it can be shown that it is useless to apply (⇒ L) and (⇒ R) on x : A ⇒ B by A introducing (looking backward) the same transition formula x −→ y more than once in each branch of a proof tree. Similarly, it is useless to apply (ID), (MP), (CEM), and (CS) on the same transition formula more than once in a backward proof search in each branch of a derivation. This leads to the decidability of the given logics: Theorem 3 (Decidability). Conditional Logics CK and all its extensions with axioms ID, MP, CS, CEM and all their combinations are decidable. 
It can be shown that provability in all the Conditional Logics considered is decidable in O(n2 log n) space, we omit the proof which is essentially the same as in [19]. 4 A Theorem Prover for Conditional Logics with CEM In this section we briefly present CondLean22 (https://gitlab2.educ.di.unito.it/pozzato/ condlean4), a Prolog implementation of the calculi SeqS22 introduced in the previous section. The prover is in the line of the existing provers for that logics [20, 21] and it follows the “lean” methodology, introduced by Beckert and Posegga in the middle of the 90s [2, 3, 7]: they have proposed a very elegant and extremely efficient first-order theorem prover, called leanTAP, consisting of only five Prolog clauses. The basic idea of the “lean” methodology is “to achieve maximal efficiency from minimal means” [2] by writing short programs and exploiting the power of Prolog’s engine as much as possible. Moreover, it is straightforward to prove soundness and completeness of the theorem prover by exploiting the one to one correspondence between axioms/rules of SeqS22 and clauses of CondLean2022. We implement each component of a sequent by a list of formulas, partitioned into three sub-lists: atomic formulas, transitions and complex formulas. Atomic and complex formulas are implemented by a Prolog list of the form [x,a], where x is a Prolog N. Olivetti et al. A constant and a is a formula. A transition formula x −→ y is implemented by a Prolog list of the form [x,a,y]. Labels are implemented by Prolog constants. The sequent calculi are implemented by the predicate prove(Gamma, Delta, Labels, Rcond, LCond, Tree) which succeeds if and only if Γ Δ is derivable in SeqS, where Gamma and Delta are the lists implementing the multisets Γ and Δ, respectively and Labels is the list of labels introduced in that branch. As we will describe later on, arguments RCond and LCond are used in order to ensure the termination of the proof search by restricting the application of some crucial rules. Tree is an output term: if the proof search succeeds, it matches a Prolog representation of the derivation found by the theorem prover. Each clause of the prove predicate implements one axiom or rule of SeqS22. The theorem prover proceeds as follows. First of all, if Γ Δ is an axiom, then the goal will succeed immediately by using the clauses for the axioms. If it is not, then the first applicable rule is chosen. The ordering of the clauses is such that the application of the branching rules is postponed as much as possible. Concerning the rules for ⇒ on the right-hand side of a sequent, the rule (⇒ R), which introduces a new label in a backward proof search, is first applied to a sequent of the form Γ Δ, x : A ⇒ B. If this does not lead to a derivation, the new rule for CEM is then applied. As mentioned here above, arguments RCond and LCond are used in order to ensure the termination of the proof search by controlling the application of the rules (⇒ L) and (⇒ R): indeed, these rules copy the conditional formula x : A ⇒ B to which they are applied in their premises, therefore we need to avoid redundant applications that, otherwise, would lead to expand an infinite branch. For instance, RCond is a Prolog list containing all the formulas x : A ⇒ B to which the rule (⇒ R) has been already applied in the current branch: such a rule will be then applied to x : A ⇒ B only if it does not belong to the list RCond. 
A similar mechanism is implemented for extensions of CK, namely further suitable arguments are added to the predicate prove to keep track of the information needed to avoid useless and uncontrolled applications of the rules (MP), (ID), (CEM), and (CS), which copy their principal formulas in their premise(s). As an example, in systems with condition (CEM), a further argument is a Prolog list, called CEM, whose elements are pairs (y, x : A ⇒ B) representing that the rule (CEM) has already been applied (in a backward proof search) to a conditional formula x : A ⇒ B by using the label y in the premises, i.e. by introducing x −A→ y and y : B in the two premises of the rule. In order to apply the rule (CEM) to a formula x : A ⇒ B, the clause implementing it chooses a label y in the list Labels such that the pair (y, x : A ⇒ B) does not belong to the list CEM.

Let us now present some clauses of CondLean2022. As a first example, the clause for the axiom checking whether the same atomic formula occurs in both the left and the right hand side of a sequent is implemented as follows:

prove ...

It is easy to observe that the clause succeeds when the same labelled formula F belongs to both the right and the left hand side of the sequent under investigation, completing the proof search: indeed, no recursive call to the predicate prove is performed, and the output term Tree matches a representation of a leaf in the derivation (tree(ax)). As another example, we show the code of the novel rule (CEM):

prove([LitGamma,TransGamma,ComplexGamma],
      [LitDelta,TransDelta,ComplexDelta],
      Labels, RCond, LCond, CEM, tree(cem,SubTree1,SubTree2)) :-
    member([X,A => B],ComplexDelta),
    member(Y,Labels),
    \+member([Y,[X,A => B]],CEM),            % (*)
    !,
    put([Y,B],LitDelta,ComplexDelta,NewLitDelta,NewComplexDelta),
    prove([LitGamma,TransGamma,ComplexGamma],
          [LitDelta,[[X,A,Y]|TransDelta],ComplexDelta],
          Labels, RCond, LCond, [[Y,[X,A => B]]|CEM], SubTree1),
    prove([LitGamma,TransGamma,ComplexGamma],
          [NewLitDelta,TransDelta,NewComplexDelta],
          Labels, RCond, LCond, [[Y,[X,A => B]]|CEM], SubTree2).

The predicate put is used to put [Y,B] in the proper sub-list of the consequent. The recursive calls to prove implement the proof search on the two premises. As mentioned, in order to ensure termination, in line (*) the theorem prover checks whether (CEM) has already been applied in the current branch with the same label y to the conditional formula x : A ⇒ B: to this aim, CondLean2022 looks for the pair [Y,[X,A => B]] in the list CEM and, if needed, it avoids a further, useless application.

In order to search for a derivation of a sequent Γ ⊢ Δ, the theorem prover proceeds as follows. First, if Γ ⊢ Δ is an axiom, the goal succeeds immediately by using the clauses for the axioms. If it is not, then the first applicable rule is chosen, e.g. if ComplexDelta contains a formula [X,A -> B], then the clause for the (→ R) rule is used, invoking prove on the unique premise of (→ R). The prover proceeds in a similar way for the other rules. The ordering of the clauses is such that the application of the branching rules is postponed as much as possible. In order to check whether a formula is valid in one of the considered systems, one just has to invoke the following auxiliary predicate:

pr(Formula)

which wraps the prove predicate by a suitable initialization of its arguments.
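To make the bookkeeping mechanism above concrete, the following is a small Python mirror of the same idea. It is not the CondLean2022 Prolog code, and all names and encodings are illustrative; the point is only that recording (label, conditional) pairs per branch blocks a second application of (CEM) with the same label on the same formula.

# Illustrative Python mirror of the (CEM) loop-checking described above -- not the
# actual Prolog implementation; encodings and names are invented for the example.

def pick_cem_application(labels, conditionals_on_right, cem_applied):
    """Return a (label, conditional) pair not yet used on this branch, or None.

    conditionals_on_right: conditionals x : A => B on the right-hand side,
        encoded here as tuples (x, ("=>", A, B)).
    cem_applied: set of pairs (label, conditional) already consumed by (CEM).
    """
    for cond in conditionals_on_right:
        for y in labels:
            if (y, cond) not in cem_applied:
                return y, cond
    return None


def cem_premises(gamma, delta, label, cond):
    """Build the two premises of the (CEM) rule for the chosen label and formula."""
    x, (_, a, b) = cond
    # first premise: add the transition x --A--> y to the right-hand side
    premise1 = (gamma, delta | {("trans", x, a, label)})
    # second premise: add the labelled formula y : B to the right-hand side
    premise2 = (gamma, delta | {(label, b)})
    return premise1, premise2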
In order to provide a first evaluation of the performance of the theorem prover, we have tested both CondLean and CondLean2022 over (i) a set of formulas holding only in systems with CEM, as well as over (ii) a set of randomly generated formulas, either valid or not. We have observed that, over a set of valid formulas, the performances of CondLean2022 are improved of 20, 57% with respect to CondLean. As an example, running both the provers over the formula N. Olivetti et al. (A ⇒ (B1 ∨ . . . B5 )) ⇒ ((A ⇒ B1 ) ∨ . . . ∨ (A ⇒ B5 )) CondLean2022 is able to build a derivation in 94 ms, against the 266 ms needed by CondLean. Over randomly generated formulas, the statistics are even better: CondLean2022 provides an improvement of the performances of 48, 27% with respect to CondLean. The performance of CondLean2022 are promising, especially concerning all cases in which it has to answer no for a not valid formula: this is justified by the fact that CondLean has to make a great effort in order to explore the whole space of alternative choices in label substitution, needed in order to conclude the proof. The current version of the theorem prover CondLean2022 is available for free download at https:// gitlab2.educ.di.unito.it/pozzato/condlean4, where one can also find an updated version of CondLean in order to compare the two provers on common systems. 5 Conclusions and Future Works In this work we have introduced labelled sequent calculi for Conditional Logics with the selection function semantics, including the basic system CK as well as extensions with well established axioms ID, MP, CEM, and CS and all their combinations. As a difference with the seminal work in [19], we are also able to deal with systems combining the condition of the conditional third excluded middle (CEM) and conditional modus ponens (MP), where the conditional strong centering (CS) is a derived condition. The same extensions, with condition (CSO) in place of (CS), are considered in [1]. We have provided alternative calculi, where the original rule for CEM, based on an expensive mechanism of label substitution, has been replaced by a “standard” rule, called (CEM) inspired to the one introduced in [22] and specifically tailored for handling conditional formulas A ⇒ B in these systems. We have also implemented the proposed calculi and compared the obtained theorem prover, called CondLean2022, with its ancestor CondLean. The promising performance we obtained provide an empirical proof that the proposed system not only fills a gap in terms of considered Conditional Logics, but is also a concrete step in the direction of efficient theorem proving for them. We plan to extend our work in several directions. First, we aim at extending the calculi and the implementation to stronger Conditional Logics. Moreover, we aim at extending the theorem prover CondLean2022 towards a “concrete” theorem prover: in particular, we aim at implementing state of the art heuristics, data structures and suitable refinements, as well as a graphical web interface for it. Last, we aim at extending the set of formulas adopted in the performance evaluation. Acknowledgement. This work has been partially supported by the INdAM - GNCS Project cod. CUP_E55F22000270001 “LESLIE: LogichE non-claSsiche per tooL Intelligenti ed Explainable”. Labelled Sequent Calculi for Conditional Logics References 1. Alenda, R., Olivetti, N., Pozzato, G.L.: Nested sequent calculi for normal conditional logics. J. Log. Comput. 26(1), 7–50 (2016). 
https://doi.org/10.1093/logcom/ext034 2. Beckert, B., Posegga, J.: leanTAP: lean tableau-based deduction. J. Autom. Reason. 15(3), 339–358 (1995) 3. Beckert, B., Posegga, J.: Logic programming as a basis for lean automated deduction. J. Log. Program. 28 (3), 231–236 (1996) 4. Burgess, J.P.: Quick completeness proofs for some logics of conditionals. Notre Dame J. Formal Log. 22, 76–84 (1981) 5. Chellas, B.F.: Basic conditional logics. J. Philos. Log. 4, 133–153 (1975) 6. Delgrande, J.P.: A first-order conditional logic for prototypical properties. Artif. Intell. 33(1), 105–130 (1987) 7. Fitting, M.: leanTAP revisited. J. Log. Comput. 8(1), 33–47 (1998) 8. Friedman, N., Halpern, J.Y.: Plausibility measures and default reasoning. J. ACM 48(4), 648–685 (2001) 9. Gabbay, D.M., Giordano, L., Martelli, A., Olivetti, N., Sapino, M.L.: Conditional reasoning in logic programming. J. Log. Program. 44(1–3), 37–74 (2000) 10. Genovese, V., Giordano, L., Gliozzi, V., Pozzato, G.L.: Logics in access control: a conditional approach. J. Log. Comput. 24 (4), 705–762 (2014) 11. Giordano, L., Gliozzi, V., Olivetti, N.: Iterated belief revision and conditional logic. Stud. Log. 70(1), 23–47 (2002) 12. Giordano, L., Gliozzi, V., Olivetti, N.: Weak AGM postulates and strong Ramsey test: a logical formalization. Artif. Intell. 168(1–2), 1–37 (2005) 13. Giordano, L., Schwind, C.: Conditional logic of actions and causation. Artif. Intell. 157(1–2), 239–279 (2004) 14. Giordano, L., Gliozzi, V., Olivetti, N., Pozzato, G.L.: Analytic tableaux for KLM preferential and cumulative logics. In: Sutcliffe, G., Voronkov, A. (eds.) LPAR 2005. LNCS (LNAI), vol. 3835, pp. 666–681. Springer, Heidelberg (2005). https://doi.org/10.1007/11591191_46 15. Grahne, G.: Updates and counterfactuals. J. Log. Comput. 8(1), 87–117 (1998) 16. Kraus, S., Lehmann, D., Magidor, M.: Nonmonotonic reasoning, preferential models and cumulative logics. Artif. Intell. 44(1–2), 167–207 (1990) 17. Lewis, D.: Counterfactuals. Basil Blackwell Ltd. (1973) 18. Nute, D.: Topics in Conditional Logic. Reidel, Dordrecht (1980) 19. Olivetti, N., Pozzato, G.L., Schwind, C.B.: A sequent calculus and a theorem prover for standard conditional logics. ACM Trans. Comput. Log. (ToCL) 8(4), 22-es (2007) 20. Olivetti, N., Pozzato, G.L.: CondLean: a theorem prover for conditional logics. In: Cialdea Mayer, M., Pirri, F. (eds.) TABLEAUX 2003. LNCS (LNAI), vol. 2796, pp. 264–270. Springer, Heidelberg (2003). https://doi.org/10.1007/978-3-540-45206-5_23 21. Olivetti, N., Pozzato, G.L.: CondLean 3.0: improving CondLean for stronger conditional logics. In: Beckert, B. (ed.) TABLEAUX 2005. LNCS (LNAI), vol. 3702, pp. 328–332. Springer, Heidelberg (2005). https://doi.org/10.1007/11554554_27 22. Panic, N., Pozzato, G.L.: Efficient theorem proving for conditional logics with conditional excluded middle. In: Calegari, R., Ciatto, G., Omicini, A. (eds.) Proceedings of the 37th Italian Conference on Computational Logic, Bologna, Italy, 29 June–1 July 2022. CEUR Workshop Proceedings, vol. 3204, pp. 217–231. CEUR-WS.org (2022). https://ceur-ws.org/ Vol-3204/paper_22.pdf 23. Schwind, C.B.: Causality in action theories. Electron. Trans. Artif. Intell. (ETAI) 3 (A), 27– 50 (1999) 24. Stalnaker, R.: A theory of conditionals. In: Rescher, N. (ed.) Studies in Logical Theory, pp. 98–112. Blackwell (1968) Deep Learning for ECoG Brain-Computer Interface: End-to-End vs. 
Hand-Crafted Features 1,2(B) ´ Maciej Sliwowski , Matthieu Martin1 , Antoine Souloumiac2 , Pierre Blanchart2 , and Tetiana Aksenova1 1 Univ. Grenoble Alpes, CEA, LETI, Clinatec, 38000 Grenoble, France [emailprotected], [emailprotected] 2 Universit´e Paris-Saclay, CEA, List, 91120 Palaiseau, France Abstract. In brain signal processing, deep learning (DL) models have become commonly used. However, the performance gain from using endto-end DL models compared to conventional ML approaches is usually significant but moderate, typically at the cost of increased computational load and deteriorated explainability. The core idea behind deep learning approaches is scaling the performance with bigger datasets. However, brain signals are temporal data with a low signal-to-noise ratio, uncertain labels, and nonstationary data in time. Those factors may influence the training process and slow down the models’ performance improvement. These factors’ influence may differ for end-to-end DL model and one using hand-crafted features. As not studied before, this paper compares the performance of models that use raw ECoG signals with time-frequency features-based decoders for BCI motor imagery decoding. We investigate whether the current dataset size is a stronger limitation for any models. Finally, obtained filters were compared to identify differences between hand-crafted features and optimized with backpropagation. To compare the effectiveness of both strategies, we used a multilayer perceptron and a mix of convolutional and LSTM layers that were already proved effective in this task. The analysis was performed on the long-term clinical trial database (almost 600 min of recordings over 200 days) of a tetraplegic patient executing motor imagery tasks for 3D hand translation. For a given dataset, the results showed that end-to-end training might not be significantly better than the hand-crafted features-based model. The performance gap is reduced with bigger datasets, but considering the increased computational load, end-to-end training may not be profitable for this application. Keywords: Deep learning · ECoG · Brain-computer interfaces Dataset size · Motor imagery · End-to-end In the last decade, deep learning (DL) models achieved extraordinary performance in a variety of complex real-life tasks, e.g., computer vision [4], natural c The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 A. Dovier et al. (Eds.): AIxIA 2022, LNAI 13796, pp. 358–373, 2023. https://doi.org/10.1007/978-3-031-27181-6_25 End-to-End Deep Learning for ECoG Brain-Computer Interface language processing [2], compared to previously developed models. This was possible mainly thanks to the improvements of data processing units and, most importantly, increased dataset sizes [4]. Generally, in brain-computer interfaces (BCI) research, access to large databases of brain signals is limited due to the experimental and medical constraints as well as the immensity of paradigms/ hardware combinations. Given limited datasets, can we still train end-to-end (E2E) DL models for the medical BCI application as effectively as in computer vision? In 2019, Roy et al. [12] reported that the number of studies classifying EEG signals with deep learning using hand-crafted features (mainly frequency domain) and raw EEG signals (end-to-end) was similar. This indicates that decoding raw EEG signals, without feature extraction, is indeed possible. However, in many articles, researchers decided to use harder to design hand-crafted features. 
While end-to-end models dominated computer vision, in brain signals processing, it is still common to use features extracted as an input to the DL models. It is unclear whether specific signal characteristics cause this, e.g., nonstationarity in time making the creation of a homogeneous dataset impractical, low signal-to-noise ratio complicating the optimization process and favoring overfitting, labels uncertainty originating from human-in-the-loop experimental setup, or researchers’ bias toward solutions better understood and more explainable. Most studies do not directly compare DL using end-to-end and hand-crafted features approaches. Usually, DL architectures are compared with each other and with an additional ‘traditional’ ML pipeline, e.g., filter-bank common spatial pattern (FBCSP) in [15], xDAWN and FBCSP in [5], SVM and FBCSP in [17]. In Fig. 1, we aggregated studies analyzed1 by Roy et al. [12] to present the accuracy improvement of the best proposed DL model in every article compared to the ‘traditional’ baseline depending on the recording time and the number of examples in the dataset. The gap between performance improvement of DL compared to the ‘traditional’ baseline increases with the dataset size (except for the last points on the plot, which contain significantly fewer studies). In the right plot, the difference between models using raw EEG and frequency domain features increases which may exhibit a boost of end-to-end models with access to bigger datasets compared to hand-crafted features. As the proposed DL models are usually compared to the baseline, the boost of end-to-end models cannot be clearly stated because the accuracy difference depends strongly on the ‘traditional’ baseline model performance and the particular task tackled in the study. While EEG and ECoG signals share many characteristics—both are multichannel temporal signals with information encoded in frequency and space, with low signal-to-noise ratio and noisy labels—there are also differences, e.g., a higher spatial resolution of ECoG, higher signal-to-noise ratio and higher contribution of informative high gamma band (>70 Hz). In motor imagery ECoG decoding, end-to-end DL is not commonly used. Instead, ‘traditional’ ML classifiers are 1 limited to the articles that contained all the required information, code adapted from [12]. ´ M. Sliwowski et al. Fig. 1. Binned average accuracy difference between best proposed DL model and ‘traditional’ baseline on EEG datasets. Error bars denote one standard deviation of the values in the bin. Bins are equal in size on a logarithmic scale. Points x-axis position denotes the average dataset size in a bin. usually preceded by a feature extraction step creating brain signals representation, typically in the form of time-frequency features, containing information about power time course in several frequency bands [8,14] or focused only on low-frequency component (LFC)/Local Motor Potential (LMP) [14] (detailed analysis can be found in [19]). However, a successful application of an end-to-end DL model to motor imagery decoding of finger movements trajectory from ECoG was performed with convolutional layers filtering the raw signal both in temporal and spatial domains followed by LSTM layers [20]. Nevertheless, an average improvement from training the weights compared to fixed hand-crafted features can be estimated as 0.022 ± 0.0393 of Pearson r correlation coefficient, which is relatively small, with 66% of cases noticeable improvement from end-to-end training. 
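As a rough illustration of the aggregation behind Fig. 1, the sketch below groups studies into equally sized bins on a logarithmic dataset-size scale and reports the mean and standard deviation of the accuracy difference per bin. It is an assumption-laden stand-in, not the code adapted from [12], and the sample numbers at the end are invented.

import numpy as np

def binned_stats(dataset_sizes, acc_diffs, n_bins=5):
    """Mean/std of (best DL minus baseline) accuracy difference per log-scale bin."""
    sizes = np.asarray(dataset_sizes, dtype=float)
    diffs = np.asarray(acc_diffs, dtype=float)
    edges = np.logspace(np.log10(sizes.min()), np.log10(sizes.max()), n_bins + 1)
    idx = np.clip(np.digitize(sizes, edges) - 1, 0, n_bins - 1)
    stats = []
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            # (average dataset size in the bin, mean improvement, std of improvement)
            stats.append((sizes[mask].mean(), diffs[mask].mean(), diffs[mask].std()))
    return stats

print(binned_stats([100, 300, 1_000, 5_000, 60_000], [0.01, 0.03, 0.02, 0.05, 0.08]))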
As this was not studied before, we investigated the differences in data requirements between an end-to-end model and one using hand-crafted features on a long-term clinical trial BCI dataset of a 3D target reach task. Unique long-term recordings (several months of experiments, more than 600 min duration in total, compared to a few minutes of ECoG recording available in previous studies, e.g., [20]) allowed us to explore the relationship between dataset size and the type of feature used for ECoG signal decoding. In this study, we used architectures previously applied to the ECoG dataset for decoding motor imagery signals with hand-crafted time-frequency features as input [16]. In addition, we optimized the temporal filtering layer with backpropagation, seeking a more efficient set of filters that were initialized to reproduce the continuous wavelet transform. We also investigated whether both approaches react differently to training dataset perturbations, which may be the case due to distinct model properties and may influence the choice of the optimal data processing pipeline for ECoG BCI.

2 Methods

2.1 Dataset

The dataset used in this study was collected as a part of the clinical trial ‘BCI and Tetraplegia’ (ClinicalTrials.gov identifier: NCT02550522, details in [1]), approved by the ethical Committee for the Protection of Individuals (Comité de Protection des Personnes—CPP) with the registration number 15-CHUG-19 and by the Agency for the Safety of Medicines and Health Products (Agence nationale de sécurité du médicament et des produits de santé—ANSM) with the registration number 2015-A00650-49. In the experiment, a 28-year-old tetraplegic patient after spinal cord injury was asked to move the hands of a virtual avatar displayed on a screen (see Fig. 2) using motor imagery patterns—by repeatedly imagining/attempting hand/fingers/arm movements (without actual movements) that influence brain activity in the motor cortex. These changes were then recorded with two WIMAGINE [10] implants placed over the primary motor and sensory cortex bilaterally. Each implant consisted of an 8 × 8 grid of electrodes, with recording performed using 32 electrodes selected in a chessboard-like manner due to limited data transfer, with a sampling frequency equal to 586 Hz.

Fig. 2. Screenshot from the virtual environment. The patient is asked to reach the yellow square (target) with the left hand (effector) using motor imagery. (Color figure online)

Signals from implants were transferred to the decoding system that performed online predictions. First, one out of 5 possible states (idle, left and right hand translation, left and right wrist rotation) was selected with a state decoder. Then, for every state (except idle), a multilinear REW-NPLS model [3] updated online was used to predict 3D movements or 1D wrist rotation. The dataset consisted of 44 experimental sessions recorded over more than 200 days. It comprises 300 and 284 min for left and right hand translation, respectively.

2.2 Data Representation and Problem

From the recorded signals, we extracted two datasets for left and right hand translation. The raw signal representation was created from 1-second-long windows of ECoG signal with 90% overlap.
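A minimal sketch of the windowing just described (1-second windows, 90% overlap) is given below; the function name is illustrative, the sampling rate and channel count are the figures quoted in the text, and the small recording buffer discussed below (which makes stored windows slightly longer) is ignored here.

import numpy as np

def sliding_windows(signal, fs=586, win_s=1.0, overlap=0.9):
    """signal: (n_channels, n_samples) -> (n_windows, n_channels, win_len) array."""
    win_len = int(round(fs * win_s))               # ~586 samples per 1-s window
    step = int(round(win_len * (1.0 - overlap)))   # 90% overlap -> ~0.1-s step
    starts = range(0, signal.shape[1] - win_len + 1, step)
    return np.stack([signal[:, s:s + win_len] for s in starts])

# Example: 5 minutes of synthetic 64-channel data
ecog = np.random.randn(64, 586 * 300)
windows = sliding_windows(ecog)
print(windows.shape)   # (n_windows, 64, 586)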
Every observation Xi ∈ R64×590 contained 590 samples2 for each of the 64 channels corresponding to the number of electrodes recording the signal. Every signal window Xi was paired with the corresponding desired trajectory yi ∈ R3 that the patient was asked to follow, i.e., the straight line connecting the tip of the hand to the target. The trajectories were computed in the 3D virtual avatar coordinate system mounted in the pelvis of the effector. Before feeding the data to the models, datasets were cleaned from data loss artifacts that were not caught during the online recordings. Additionally, observations for which the predicted and desired state did not match due to state 2 instead of 586 samples due to 100 ms buffer during recording. ´ M. Sliwowski et al. decoder errors were also removed to reduce the number of mislabelled observations (e.g., when the patient was asked to control left hand translation but instead left wrist was rotating). Then, all the models were trained to find the mapping between Xi ECoG signal and yi desired trajectories that the hand should follow in the case of optimal prediction. As a performance metric we used cosine similarity (Eq. 1) measuring ˆ i and the desired trajectory yi . cosine of the angle αi between prediction y ˆi) = CS(yi , y ˆi yi · y = cos αi yi · ˆ yi ˆ i ) = 1 − CS(yi , y ˆ i ) was used as optimization Cosine loss defined as CL(yi , y objective. 2.3 Hand-Crafted Features Extraction and DL Optimization ‘Traditional’ hand-crafted features were extracted using complex continuous wavelet transform (CWT). CWT was performed with Morlet wavelets with central frequencies ranging from 10 to 150 Hz (broad band as ECoG contains higher frequencies than EEG) with a step of 10 Hz. Each wavelet support consisted of 118 samples (0.2 s) centered on its maximum value. Features were obtained by applying CWT on one-second-long signals, computing the module of the complex signals, and performing an average pooling of 0.1 s. The resulting feature tensor was of shape 64 × 15 × 10, with dimensions corresponding to channels, frequency bands, and time steps. CWT can be represented as a convolution between a set of filters and a signal in the temporal domain. In the standard case, the filters are fixed and constitute a basis for feature extraction where every filter detects brain activity in a different frequency band. As every spatial channel is convolved separately in time, we obtained a time-frequency-space representation of the ECoG signal (see Table 1 for feature extractor architecture specification). Here, we propose to adjust the filters during backpropagation together with all other parameters of the models. In the first scenario, the filters were initialized to Morlet wavelets with 15 central frequencies, resulting in 30 kernels (real and imaginary parts). Note that at the beginning of training, the first layer reproduces ‘traditional’ hand-crafted feature extraction. The filters were fixed for 5 epochs of so-called pre-training, then they were unfreezed and optimized freely (without any additional constraints imposing specific filter shape) for the following 50 epochs. The pre-training was used to not distort the wavelets drastically in the first epochs when parameters of the rest of the network are randomly initialized. We also evaluated random weights initialization from uniform distribution as a solution that does not incorporate prior knowledge about the system. 
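The fixed feature-extraction pipeline and the cosine objective described above can be sketched as follows. This is a hedged PyTorch reconstruction, not the authors' implementation: the wavelet normalization, padding, and pooling details are assumptions, but the overall shape (15 bands between 10 and 150 Hz, 118-sample support, modulus, 0.1-s average pooling, cosine loss) follows the text.

import numpy as np
import torch
import torch.nn.functional as F

FS = 586                        # sampling frequency (Hz), as quoted in the text
FREQS = np.arange(10, 151, 10)  # 15 central frequencies
SUPPORT = 118                   # 0.2 s of samples

def morlet_bank(freqs=FREQS, fs=FS, support=SUPPORT):
    """Real and imaginary parts of a complex Morlet filter bank (assumed form)."""
    t = (np.arange(support) - support // 2) / fs
    bank = np.stack([np.exp(-(t * f) ** 2) * np.exp(2j * np.pi * f * t) for f in freqs])
    return (torch.tensor(bank.real, dtype=torch.float32),
            torch.tensor(bank.imag, dtype=torch.float32))

def cwt_features(x):
    """x: (batch, channels, time) raw window -> (batch, channels, 15, 10) features."""
    real, imag = morlet_bank()                        # (15, support) each
    b, c, t = x.shape
    flat = x.reshape(b * c, 1, t)                     # convolve every channel independently
    re = F.conv1d(flat, real.unsqueeze(1), padding=SUPPORT // 2)
    im = F.conv1d(flat, imag.unsqueeze(1), padding=SUPPORT // 2)
    power = torch.sqrt(re ** 2 + im ** 2)             # modulus of the complex response
    pooled = F.avg_pool1d(power, kernel_size=power.shape[-1] // 10)[..., :10]  # ~0.1-s bins
    return pooled.reshape(b, c, len(FREQS), 10)

def cosine_loss(y_pred, y_true, eps=1e-8):
    """CL(y, y_hat) = 1 - cosine similarity, following Eq. (1)."""
    cs = F.cosine_similarity(y_pred, y_true, dim=-1, eps=eps)
    return (1.0 - cs).mean()

x = torch.randn(8, 64, 590)       # a batch of 1-s windows
print(cwt_features(x).shape)      # torch.Size([8, 64, 15, 10])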
End-to-End Deep Learning for ECoG Brain-Computer Interface In the second scenario, an alternative approach was used to maintain the wavelet structure by optimizing only the parameters used to generate the wavelets instead of modifying all filters’ parameters. In our case, the function generating the wavelets was defined as: 2 1 1 Ψ (t, f ) = √ e−(tf ) e2iπtf π fs where central frequency parameter f defines the center of the frequency band analyzed by the wavelet and fs is the signal sampling frequency. In the central frequency optimization (CFO) scenario, we optimized only the central frequency f parameters (one per wavelet), so the filters after training are still from the Morlet wavelets family. Table 1. The architecture used to reproduce hand-crafted feature extraction with CWT. Only one convolutional layer (conv time) was used in computations according to the performed experiment E2E/E2E CFO. Layer Kernel shape Output shape Param # Mult-adds [200, 1, 590, 8, 8] Conv time [1, 30, 118, 1, 1] [200, 30, 590, 8, 8] 3,570 Conv time CFO [1, 30, 118, 1, 1] [200, 30, 590, 8, 8] 15 [200, 30, 590, 8, 8] – Sum real and imaginary – [200, 15, 590, 8, 8] – Square root [200, 15, 590, 8, 8] – [200, 15, 590, 8, 8] – [200, 15, 10, 8, 8] [200, 15, 10, 8, 8] DL Architectures In this study, we used two architectures proposed in [16], i.e., CNN+LSMT+MT, which showed the best performance, and MLP, which was the simplest approach. In the baseline approach, the hand-crafted feature extraction was followed with fully connected or convolutional layers. When optimizing the first convolutional layer, we kept the rest of the network the same to isolate the influence of the training feature extraction step. Details of the tested DL architectures are described below and in [16]. Additionally, we used ShallowFBCSPNet and Deep4Net [15] as end-to-end DL baseline. MLP. The most basic DL architecture evaluated in the study was multilayer perceptron (MLP), consisting of two fully connected layers. Dropout and batch normalization layers were placed between fully connected layers for stronger regularization (see Table 2). ´ M. Sliwowski et al. Table 2. MLP architecture from [16]. Layer Kernel shape Output shape Param # Mult-adds [200, 9600] Fully connected [9600, 50] [200, 50] [200, 50] [200, 50] Fully connected [50, 50] [200, 50] [200, 50] [200, 50] [200, 50] [200, 3] Fully connected [50, 3] CNN+LSTM+MT. In the CNN+LSTM+MT architecture, CWT features were further analyzed with 3 × 3 convolutional layers in space (electrodes organized on an array 4 × 8 reflecting positions of electrodes on implants). After two convolutional layers, two LSTM layers were applied to analyze temporal information from 10 timesteps. Finally, every output of the last LSTM layer was used for training to compute loss based on all predicted and ground truth trajectories corresponding to 1 s (10 timesteps) of signal analyzed (see Table 3). Table 3. CNN+LSTM+MT architecture from [16]. Layer Kernel shape Output shape Param # Mult-adds [200, 15, 8, 8, 10] – Input per implant [200, 15, 8, 4, 10] – Conv space [15, 32, 3, 3, 1] [200, 32, 6, 4, 10] 4,352 [200, 32, 6, 4, 10] – 208,896,000 – [200, 32, 6, 4, 10] 64 [200, 32, 6, 4, 10] – Conv space [32, 64, 3, 3, 1] [200, 64, 4, 2, 10] 18,496 – 295,936,000 [200, 64, 4, 2, 10] – [200, 64, 4, 2, 10] – [200, 10, 50] [200, 10, 3] Models Training and Hyperparameters. For every model evaluation, we used 90% and 10% of the training dataset for training and validation, respectively. 
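Before turning to training details, here is a minimal PyTorch sketch of the central frequency optimization (CFO) idea described above: only the central frequencies are trainable, and the Morlet filters are regenerated from them at every forward pass, so the filters stay within the wavelet family. The class name and the normalization are assumptions, not the paper's code.

import numpy as np
import torch
import torch.nn as nn

class CFOWaveletLayer(nn.Module):
    """Temporal filtering layer whose only trainable parameters are the 15 central frequencies."""

    def __init__(self, freqs_hz=np.arange(10, 151, 10), fs=586, support=118):
        super().__init__()
        t = (torch.arange(support, dtype=torch.float32) - support // 2) / fs
        self.register_buffer("t", t)
        # one trainable central frequency per wavelet
        self.f = nn.Parameter(torch.tensor(freqs_hz, dtype=torch.float32))

    def forward(self, x):
        """x: (batch, channels, time) -> (batch, channels, n_wavelets, time') power envelope."""
        f = self.f.view(-1, 1)                                   # (15, 1)
        envelope = torch.exp(-(self.t * f) ** 2)                 # Gaussian envelope, width tied to f
        real = (envelope * torch.cos(2 * np.pi * f * self.t)).unsqueeze(1)
        imag = (envelope * torch.sin(2 * np.pi * f * self.t)).unsqueeze(1)
        b, c, t = x.shape
        flat = x.reshape(b * c, 1, t)
        re = nn.functional.conv1d(flat, real, padding=real.shape[-1] // 2)
        im = nn.functional.conv1d(flat, imag, padding=imag.shape[-1] // 2)
        power = torch.sqrt(re ** 2 + im ** 2 + 1e-12)            # modulus of the complex response
        return power.reshape(b, c, f.shape[0], -1)

layer = CFOWaveletLayer()
out = layer(torch.randn(4, 64, 590))
print(out.shape, layer.f.requires_grad)   # torch.Size([4, 64, 15, 591]) True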
The validation dataset was used for early stopping after 20 epochs without improvement. All the models used a fixed set of hyperparameters, i.e., learning rate of 0.001, weight decay of 0.01, batch size of 200, and ADAM optimizer [9]. To train DL models we used PyTorch [11], skorch [18], and braindecode [15]. End-to-End Deep Learning for ECoG Brain-Computer Interface Offline Experiments First, we computed results in a classical evaluation scenario, i.e., train/valid/test split. We used the calibration dataset (first six sessions, approximately 10% of the dataset) as the training dataset. The rest of the data (online evaluation dataset) was used as the test set. Additionally, we gradually increased the training dataset size from one session up to 22 with a step of 2. As different models may have different dataset requirements, we wanted to verify whether collecting more data can be more profitable for one of the evaluated optimization/architecture combinations. To investigate the possible influence of end-to-end learning on models’ robustness against data mislabelling, we perturbed the dataset to make training more challenging. In the BCI, part of observations is often mistakenly labeled due to lack of subject attention, tiredness, experimental setup, etc. Therefore, we randomly selected a fraction of observations in which targets were shuffled between samples so they no longer have a meaningful connection with the ECoG signal while preserving the same distribution. At the same time, we kept the test set unchanged. Table 4. Test cosine similarity computed in the train-valid-test split scenario. Values are sorted by average performance and represent the mean and standard deviation of 5 runs. Left hand E2E Right hand 0.304 ± 0.005 0.266 ± 0.020 0.297 ± 0.008 0.270 ± 0.011 E2E CNN+LSTM+MT 0.289 ± 0.007 0.273 ± 0.015 E2E MLP CFO 0.254 ± 0.012 0.230 ± 0.013 0.247 ± 0.023 0.232 ± 0.005 E2E MLP 0.243 ± 0.014 0.234 ± 0.020 ShallowFBCSPNet [15] 0.235 ± 0.010 0.236 ± 0.011 E2E CNN+LSTM+MT random init 0.216 ± 0.008 0.230 ± 0.020 E2E MLP random init 0.181 ± 0.029 0.223 ± 0.008 Deep4Net [15] 0.111 ± 0.021 0.259 ± 0.013 We started the analysis by comparing different model training scenarios when trained on the first six sessions (online calibration dataset). The results for the train/test split can be found in Table 4. Differences between scenarios are rather small, with only small performance improvement coming from full end-to-end ´ M. Sliwowski et al. optimization. The best performance was achieved by models using CFO. However, the gap between the hand-crafted features approach and CFO is relatively small, considering standard deviations of the computed values. The worst performance was achieved for Deep4Net (especially low performance for the left hand dataset) and both MLP and CNN+LSTM+MT models with random weights initialization, suggesting the high importance of the prior signal processing knowledge used to define the wavelet shape of the filters at the beginning of the training. Fig. 3. Difference between cosine similarity of end-to-end model and its counterpart using hand-crafted features. The bold line denotes the moving average with a window of size 3. We did not notice significant improvements coming from end-to-end optimization, so we wanted to verify the hypothesis of different dataset size requirements for different optimization methods. Therefore, the differences between endto-end models and their hand-crafted features counterparts for several training dataset sizes are presented in Fig. 
3. In some cases, end-to-end models increase the cosine similarity faster than the models using fixed features, so the gap between models can be reduced for approaches using random weights initialization. However, only for models initialized to wavelets and optimized directly, an improvement over hand-crafted features can be observed for some points (up to 0.05 of cosine similarity for the right hand dataset). When comparing CFO and standard E2E optimization in Fig. 4, higher effectiveness of CFO for small training datasets can be observed. CFO may limit overfitting as the functions represented by the convolutional filters are constrained to the wavelet family. It may be interpreted as an additional optimization constraint imposed on model parameters. Note that diminished gap between CFO and standard end-to-end in Fig. 4 show only relative decrease of CFO performance. End-to-End Deep Learning for ECoG Brain-Computer Interface Fig. 4. Difference between cosine similarity of the CFO model and its counterpart using constraint-free end-to-end optimization. The bold line denotes the moving average with a window of size 3. Filters Visualization Fig. 5. Visualized filters before (blue) and after (red) training for the models with parameters optimized freely. Note that only real part of the wavelet was visualized for clarity. Plot titles denote central wavelet frequency at initialization. (Color figure online) We visualized the filters before and after training to analyze the characteristics of learned feature extraction. In Fig. 5, we presented the filters modified without additional constraints. The biggest change can be observed in the central frequencies between 30 Hz and 80 Hz. In most cases, the initial central frequency was maintained, while the wavelets got extended with a signal similar to the sine wave in the central wavelet frequency. This could indicate the importance of information about frequencies from which the signal is composed. At the ´ M. Sliwowski et al. same time, extending wavelets reduces the temporal resolution of the signals. The changes in the high-frequency wavelets (>100 Hz) are less significant, and the pattern of extending wavelets is no longer visible. Instead, components of significantly lower frequencies and smaller amplitude were added. In Fig. 6, we visualized filters before and after optimization when the first convolutional layer was initialized to random. As filters initialized to random were much harder to analyze visually, we presented them in the form of power spectra, so the changes in the filtered frequencies could be better visible. All filters have a maximum power peak lower than 65 Hz with 40% of maxima contained in the frequency range 25–30 Hz. Compared to hand-crafted features, end-to-end filters initialized to random covered only approximately half of the frequency band analyzed by the fixed hand-crafted feature extraction pipeline. However, in the higher frequencies, there are smaller peaks which can also contribute to the extracted representation and may cover the missing frequency band. Fig. 6. Power spectra of filters before (blue) and after (red) training for convolutional layer initialized to random. The plots denoted frequencies for which maximum power spectra were observed before and after training. (Color figure online) In Fig. 7a, we presented the difference between initialized central wavelet frequency and the one obtained after the training. We observed a decrease in almost all frequencies when training the models. 
The decrease was higher for higher frequencies. This may suggest that more information can be extracted from lower frequencies. However, in our preliminary results, we noticed that adapting the learning rate for the convolutional layer may significantly change the frequency behavior (see Fig. 7b), which should be taken into account when analyzing the results. This may lead to different changes in the central frequencies than in the End-to-End Deep Learning for ECoG Brain-Computer Interface base model. The gradient was increased 150 times by squeezing central frequencies from 10–150 Hz to 0–1. In the case of initialization to wavelet, a network may start the training near a local minimum found by the manual design of feature extraction that can be hard to move out. Setting a higher learning rate may enable reaching different regions on the loss function surface. The performance achieved with a higher learning rate was similar to the standard CFO results with a cosine similarity of 0.283 ± 0.014 (left hand) and 0.270 ± 0.011 (right hand) for CNN+LSTM+MT and 0.262 ± 0.01 (left hand) and 0.227 ± 0.007 (right hand) for MLP. Fig. 7. Difference between central wavelet frequencies before and after CFO. Models for left hand translation are presented in the left column, for the right hand in the right column. Note that the scale is different for (a) and (b). Target Perturbation In the case of perturbed ground-truth (Fig. 8), CNN+LSTM+MT models were more robust to noise in the targets with increased stability (especially for the left hand) of hand-crafted features and CFO models compared to models optimized freely. On the other hand, in the case of MLP models, almost no differences between different optimization methods in the influence of noise on the performance were ´ M. Sliwowski et al. Fig. 8. Influence of noise in the targets on models’ performance. Noise level indicates the fraction of observations with perturbed labels. We proposed several approaches for the end-to-end optimization of deep learning ECoG decoders. However, in this study, we did not observe improvement from end-to-end optimization, especially when no prior knowledge was used for filter initialization. This confirms the usefulness of hand-crafted features and years of neuroscientific signal processing while leaving doors open to more sophisticated end-to-end models. Firstly, deeper models with more advanced DL mechanisms [6,13] should be evaluated as they may allow for the extraction of more complex representations and thus outperform hand-crafted features. Secondly, machine learning methods for robust learning may be evaluated, e.g., learning from noisy input data, noisy labels, and out-of-distribution samples [7]. Those can particularly tackle problems coming from specific recording/experimental circumstances. The reasoning behind our study is focused on the specificity of ECoG brain signals and the adequacy of selected DL methods to the problem. The specificity originates from experimental constraints caused by the presence of a human in the loop but also signals characteristics, hardware capabilities, etc. It results in a distorted dataset with a low signal-to-noise ratio, short signal stationarity interval, and uncertain labels. This is quite different from computer vision problems, usually with well-defined labels and images understandable with a naked eye. 
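The target-perturbation protocol behind Fig. 8 (a fraction of training targets shuffled between observations while the test set stays untouched) can be sketched as follows; the function name and random-number handling are illustrative, but the key property from the text is preserved: the marginal distribution of targets is unchanged while their link to the ECoG windows is destroyed.

import numpy as np

def perturb_targets(y, noise_level, rng=None):
    """y: (n_obs, 3) desired trajectories; returns a copy with a fraction shuffled."""
    rng = np.random.default_rng() if rng is None else rng
    y_noisy = y.copy()
    n = len(y)
    picked = rng.choice(n, size=int(noise_level * n), replace=False)
    # shuffle the selected targets among themselves, decoupling them from the signal
    y_noisy[picked] = y_noisy[rng.permutation(picked)]
    return y_noisy

y_train = np.random.randn(1000, 3)
y_40 = perturb_targets(y_train, noise_level=0.4)   # 40% of labels perturbed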
Improving information extraction from noisy data may be especially important in the light of increased robustness to noise in targets shown by the CNN+LSTM+MT model compared to MLP. Using all 10 targets recorded during a 1-s window decreases the influence of single perturbed points on the performance because the information can be efficiently extracted even for 40% or 60% of perturbed targets. In this case, the CNN+LSTM+MT model using handcrafted features maintains high performance for a higher noise level than the end-to-end model. However, an important point in the discussion is that our dataset, even after data cleaning, still contains a significant, unknown amount of observations with incorrect labels. Thus, in Fig. 8, a noise level equal to zero End-to-End Deep Learning for ECoG Brain-Computer Interface corresponds to an unknown noise level in labels originating from the experimental setup. Thus, generative models should be used to create datasets with a known level of noise and analyze the influence of perturbations on the performance in the case of less distorted datasets. All the results were computed offline on datasets recorded with only one patient. These kinds of datasets are hardly accessible due to experimental and legal constraints. It makes the generalization of the results to other patients and datasets hard to estimate. Thus, more simulations should be performed to confirm our conclusions, ideally with more patients and tasks. This should also include hyperparameters search, like learning rate, batch size, weight decay, as those could vary between different approaches. However, performing hundreds of evaluations is time-consuming, and the problem is magnified in the case of endto-end models due to increased computational load. Our study focused on feature extraction based on wavelet transform, which was previously used in this problem. As we optimized the parameters of the wavelet transform without changing other parts of the model, we isolated the influence of end-to-end optimization on models’ performance. While this simplified the problem, our study did not evaluate other feature extraction pipelines that could behave differently. Thus, an extended analysis of several feature extraction pipelines compared to their end-to-end counterparts would allow for broader generalization and therefore is of great interest. While this article and [20] analyzed ECoG signals, targets used for training models in [20] were actual fingers trajectories recorded while subjects performed real movements. In our case, targets are much noisier due to the lack of labeling based on the hand movements of a tetraplegic patient. This may favor handcrafted features, as could be seen for CNN+LSTM+MT in Fig. 8. Finally, our conclusions are in line with [20] who observed relatively small improvement from optimizing hand-crafted features and worse performance/longer training time when initializing the model to random. In our case, end-to-end models achieved the same performance as models using CWT features only with smart weights initialization, which emphasizes the importance of prior signal processing knowledge in designing DL for ECoG analysis. Acknowledgement. Clinatec is a Laboratory of CEA-Leti at Grenoble and has statutory links with the University Hospital of Grenoble (CHUGA) and University Grenoble Alpes (UGA). This study was funded by CEA (recurrent funding) and the French Ministry of Health (Grant PHRC-15-15-0124), Institut Carnot, Fonds de Dotation Clinatec. 
Matthieu Martin was supported by the cross-disciplinary program on Numerical ´ Simulation of CEA. Maciej Sliwowski was supported by the CEA NUMERICS program, which has received funding from European Union’s Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No 800945— NUMERICS—H2020-MSCA-COFUND-2017. ´ M. Sliwowski et al. References 1. Benabid, A.L., et al.: An exoskeleton controlled by an epidural wireless brain-machine interface in a tetraplegic patient: a proof-of-concept demonstration. Lancet Neurol. 18(12), 1112–1122 (2019). https://doi.org/10.1016/S14744422(19)30321-7 2. Devlin, J., Chang, M., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. In: Burstein, J., Doran, C., Solorio, T. (eds.) Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, 2–7 June 2019, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics (2019). https://doi.org/10.18653/v1/n19-1423 3. Eliseyev, A., et al.: Recursive exponentially weighted N-way partial least squares regression with recursive-validation of hyper-parameters in brain-computer interface applications. Sci. Rep. 7(1), 16281 (2017). https://doi.org/10.1038/s41598017-16579-9 4. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: Pereira, F., Burges, C., Bottou, L., Weinberger, K. (eds.) Advances in Neural Information Processing Systems, vol. 25. Curran Associates, Inc. (2012). http://proceedings.neurips.cc/paper/2012/file/ c399862d3b9d6b76c8436e924a68c45b-Paper.pdf 5. Lawhern, V.J., Solon, A.J., Waytowich, N.R., Gordon, S.M., Hung, C.P., Lance, B.J.: EEGNet: a compact convolutional neural network for EEG-based braincomputer interfaces. J. Neural Eng. 15(5), 056013 (2018). https://doi.org/10.1088/ 1741-2552/aace8c 6. Lee, Y.E., Lee, S.H.: EEG-transformer: self-attention from transformer architecture for decoding EEG of imagined speech (2021). https://doi.org/10.48550/ARXIV. 2112.09239 7. Li, J., Xiong, C., Hoi, S.C.: Learning from noisy data with robust representation learning. In: 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 9465–9474 (2021). https://doi.org/10.1109/ICCV48922.2021.00935 8. Liang, N., Bougrain, L.: Decoding finger flexion from band-specific ECoG signals in humans. Front. Neurosci. 6, 91 (2012). https://doi.org/10.3389/fnins.2012.00091 9. Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: International Conference on Learning Representations (2019). http://openreview.net/forum? id=Bkg6RiCqY7 10. Mestais, C.S., Charvet, G., Sauter-Starace, F., Foerster, M., Ratel, D., Benabid, A.L.: WIMAGINE: wireless 64-channel ECoG recording implant for long term clinical applications. IEEE Trans. Neural Syst. Rehabil. Eng. 23(1), 10–21 (2015). https://doi.org/10.1109/TNSRE.2014.2333541 11. Paszke, A., et al.: PyTorch: an imperative style, high-performance deep learning library. In: Advances in Neural Information Processing Systems, vol. 32, pp. 8024–8035. Curran Associates, Inc. (2019). https://papers.neurips.cc/ paper/9015pytorch-an-imperative-style-high-performance-deep-learning-library.pdf 12. Roy, Y., Banville, H., Albuquerque, I., Gramfort, A., Falk, T.H., Faubert, J.: Deep learning-based electroencephalography analysis: a systematic review. J. Neural Eng. 16(5), 051001 (2019). 
https://doi.org/10.1088/1741-2552/ab260c End-to-End Deep Learning for ECoG Brain-Computer Interface 13. Santamar´ıa-V´ azquez, E., Mart´ınez-Cagigal, V., Vaquerizo-Villar, F., Hornero, R.: EEG-inception: a novel deep convolutional neural network for assistive ERP-based brain-computer interfaces. IEEE Trans. Neural Syst. Rehabil. Eng. 28(12), 2773– 2782 (2020). https://doi.org/10.1109/TNSRE.2020.3048106 14. Schalk, G., et al.: Decoding two-dimensional movement trajectories using electrocorticographic signals in humans. J. Neural Eng. 4(3), 264–275 (2007). https:// doi.org/10.1088/1741-2560/4/3/012 15. Schirrmeister, R.T., et al.: Deep learning with convolutional neural networks for EEG decoding and visualization. Hum. Brain Mapp. 38(11), 5391–5420 (2017). https://doi.org/10.1002/hbm.23730 ´ 16. Sliwowski, M., Martin, M., Souloumiac, A., Blanchart, P., Aksenova, T.: Decoding ECoG signal into 3D hand translation using deep learning. J. Neural Eng. 19(2), 026023 (2022). https://doi.org/10.1088/1741-2552/ac5d69 17. Tabar, Y.R., Halici, U.: A novel deep learning approach for classification of EEG motor imagery signals. J. Neural Eng. 14(1), 016003 (2016). https://doi.org/10. 1088/1741-2560/14/1/016003 18. Tietz, M., Fan, T.J., Nouri, D., Bossan, B., skorch Developers: SKORCH: a scikitlearn compatible neural network library that wraps PyTorch (2017). http://skorch. readthedocs.io/en/stable/ 19. Volkova, K., Lebedev, M.A., Kaplan, A., Ossadtchi, A.: Decoding movement from electrocorticographic activity: a review. Front. Neuroinform. 13, 74 (2019). https://doi.org/10.3389/fninf.2019.00074 20. Xie, Z., Schwartz, O., Prasad, A.: Decoding of finger trajectory from ECoG using deep learning. J. Neural Eng. 15(3), 036009 (2018). https://doi.org/10.1088/17412552/aa9dbe Quantum Circuit Compilation for the Graph Coloring Problem Angelo Oddi1(B) , Riccardo Rasconi1 , Marco Baioletti2(B) , Vieri Giuliano Santucci1 , and Hamish Beck3 1 Institute of Cognitive Sciences and Technologies (ISTC-CNR), Rome, Italy {angelo.oddi,riccardo.rasconi,vieri.santucci}@istc.cnr.it 2 University of Perugia, Perugia, Italy [emailprotected] Advanced Concepts Team, ESA European Space Research and Technology Centre, Noordwijk, The Netherlands Abstract. In this work we investigate the performance of greedy randomised search (GRS) techniques to the problem of compiling quantum circuits that solve instances of the Graph Coloring problem. Quantum computing uses quantum gates that manipulate multi-valued bits (qubits). A quantum circuit is composed of a number of qubits and a series of quantum gates that operate on those qubits, and whose execution realises a specific quantum algorithm. Current quantum computing technologies limit the qubit interaction distance allowing the execution of gates between adjacent qubits only. This has opened the way to the exploration of possible techniques aimed at guaranteeing nearest-neighbor (NN) compliance in any quantum circuit through the addition of a number of so-called swap gates between adjacent qubits. In addition, technological limitations (decoherence effect) impose that the overall duration (i.e., depth) of the quantum circuit realization be minimized. 
One core contribution of the paper is the application of an upgraded version of the greedy randomized search (GRS) technique originally introduced in the literature that synthesises NN-compliant quantum circuits realizations, starting from a set of benchmark instances of different size belonging to the Quantum Approximate Optimization Algorithm (QAOA) class tailored for the Graph Coloring problem. We propose a comparison between the presented method and the SABRE compiler, one of the best-performing compilation procedures present in Qiskit, an open-source SDK for quantum development, both from the CPU efficiency and from the solution quality standpoint. Keywords: Randomized search · Quantum circuit compilation Planning · Scheduling · Optimization c The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 A. Dovier et al. (Eds.): AIxIA 2022, LNAI 13796, pp. 374–386, 2023. https://doi.org/10.1007/978-3-031-27181-6_26 Quantum Circuit Compilation for the Graph Coloring Problem Quantum algorithms process information represented as qubits, the basic unit of quantum information, and quantum operations (called gates) are the building blocks of quantum algorithms. In order to be run on real quantum computing hardware, quantum algorithms must be compiled into a set of elementary machine instructions (or gates). Since currently available quantum devices suffer a number of technological problems such as noise and decoherence, it is important that the process that carries out the quantum computation be somehow adapted to the physical limitations of the quantum hardware of interest, by means of a proper compilation. For practical applications, it is essential to make quantum computation able to tackle problem instances of more and more realistic size. To this aim, the ability to produce compiled quantum circuits of good quality is of paramount importance. In this paper, we focus our efforts on the so-called Quantum Alternate Operator Ansatz (QAOA) algorithms [9] applied on the gate-model noisy intermediate-scale quantum (NISQ) processor units [18]. Our approach intends to improve over the compilation algorithms employed in the Qiskit quantum computing software development kit [1], and devise solutions that are easily adaptable to different classes of problems. In the NISQ era, the leading quantum processors are characterized by about 50 to a few hundred qubits but are not advanced enough to reach fault tolerance, nor large or sophisticated enough to continuously implement quantum error correction. The term “noisy” refers to the fact that quantum processors are very sensitive to the environment and may lose their quantum state due to quantum decoherence. The term “intermediatescale” refers to the relatively small number of qubits and moderate gate fidelity. The term NISQ algorithms refers to algorithms designed for NISQ quantum processors. For example, the Variational Quantum Eigensolver (VQE) or the Quantum Alternate Operator Ansatz (QAOA) (and its sub-class, the Quantum Approximate Optimization Algorithm [6,8]) are hybrid algorithms that use NISQ devices but reduce the calculation load by implementing some parts of the algorithm in usual classical processors. Usually, NISQ algorithms require error mitigation techniques to recover useful data, which however make use of precious qubits to be implemented. Thus, the creation of a computer with tens of thousands of qubits and sufficient error correction capabilities would eventually end the NISQ era. 
These “beyond-NISQ” devices would be able, for example, to implement Shor’s algorithm, for very large numbers, and break RSA encryption. Until that point however, the need to produce circuits runnable in the current (or near-future) quantum architectures in a reasonably reliable manner (i.e., counting on noise minimization techniques rather than on error-correcting techniques) will stand. Hence, the need to provide quantum circuit compilation procedures that minimize the effects of decoherence by minimizing the circuit’s depth. In this work, we investigate the performance of an upgraded version of the greedy randomized search (GRS) technique [10,16,19] originally introduced in [17] applied to the problem of compiling quantum circuits to emerging quantum A. Oddi et al. hardware. In particular, we experiment on a set of benchmark instances belonging to the Quantum Alternate Operator Ansatz (QAOA) class tailored for the Graph Coloring problem, and devised to be executed on top of a hardware architecture inspired by Rigetti Computing Inc. [20]. We compare our algorithm’s performance against the SABRE compiler [13], one of the best compilers present in the Qiskit framework, and demonstrate the superiority of our approach. The paper is organized as follows. Section 2 provides some background information. Section 3 formally describes the problem, whereas Sect. 4 describes the proposed heuristic solving algorithms and the Greedy Randomised Search approach. Finally, an empirical comparison with the results obtained from the SABRE compiler [1] and some conclusions close the paper. Quantum computing is based on the manipulation of qubits rather than conventional bits; a quantum computation is performed by executing a set of quantum gates on the qubits. A gate whose execution involves k qubits is called k-qubit quantum gate. Current NISQ devices only allow the direct execution of 1-qubit and 2-qubit quantum gates. In order to be executed, a quantum circuit must be mapped on a quantum chip which determines the circuit’s hardware architecture specification [14]. The chip can be seen as an undirected multigraph whose nodes represent the qubits (quantum physical memory locations) and whose edges represent the types of gates that can be physically implemented between adjacent qubits of the physical hardware (see Fig. 1 as an example of three chip topologies of increasing size). Since a 2-qubit gate requiring two specific qstates can only be executed on a pair of adjacent (NN) qubits, the required qstates must be made nearest-neighbors prior to gate execution. NN-compliance can be obtained by adding a number of swap gates so that every pair of qstates involved in the quantum gates can be eventually made adjacent, allowing all gates to be correctly executed. Figure 2 shows an example of quantum circuit that only uses the first three qubits of the chip (N = 8) of Fig. 1, which assumes that qstates q1 , q2 and q3 are initially allocated to qubits n1 , n2 and n3 . The circuit is composed of four generic 2-qubit gates (i.e., CNOT gates) and one generic 1-qubit gate (i.e., the Hadamard gate). Note that the circuit is not NN-compliant as the last gate involves two qstates resting on to two non-adjacent qbits (n1 and n3 ). The right side of Fig. 2 shows the same circuit made NN-compliant through the insertion of a swap gate. In this work, we tackle the compilation problem of quantum circuit following a scheduling-oriented formulation, as described in the next sections. 
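To make the adjacency constraint concrete, the following is a minimal sketch, not the compiler described in this paper, of how a chip can be held as an undirected graph and how a chain of swap gates along a shortest path brings two qstates next to each other before a 2-qubit gate; the 8-qubit ring topology, the qstate labels and the helper names are illustrative assumptions. Since every inserted swap is itself a gate, the length of such chains feeds directly into the circuit depth that compilation tries to minimize.

```python
from collections import deque

# Toy 8-qubit ring topology (illustrative; not necessarily the chips of Fig. 1).
edges = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7), (7, 8), (8, 1)]
adj = {}
for a, b in edges:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

def shortest_path(src, dst):
    """Breadth-first shortest path between two qubit locations on the chip graph."""
    prev, frontier = {src: None}, deque([src])
    while frontier:
        n = frontier.popleft()
        for m in adj[n]:
            if m not in prev:
                prev[m] = n
                frontier.append(m)
    path, n = [], dst
    while n is not None:
        path.append(n)
        n = prev[n]
    return path[::-1]

def swaps_for_gate(loc, qa, qb):
    """Swap gates (as qubit pairs) that drag qstate qa next to qstate qb.

    `loc` maps qstates to qubits; the swaps walk qa along a shortest path and
    stop on the node adjacent to qb, as in the example of Fig. 2.
    """
    path = shortest_path(loc[qa], loc[qb])
    return [(path[i], path[i + 1]) for i in range(len(path) - 2)]

loc = {"q1": 1, "q2": 2, "q3": 3}
print(swaps_for_gate(loc, "q1", "q3"))   # [(1, 2)]: one swap, as in Fig. 2
```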
In particular, our approach is related to a body of heuristic efforts available in the current literature, see [11,12] for two relatively recent representative works. Even though these papers pursue the same objective, i.e., optimizing the realization of nearestneighbor compliant quantum circuits, they focus on quantum circuits characterized by pre-ordered non-commutative gates. On the contrary, our approach Quantum Circuit Compilation for the Graph Coloring Problem Fig. 1. Three quantum chip designs characterized by an increasing number of qubits (N = 8, 21, 40) inspired by Rigetti Computing Inc. Every qubit is located at a different location (node), and the integers at each node represent the qubit’s identifier. leverages the parallel nature of the considered planning/scheduling problem, and proposes a greedy randomized algorithm that exploits gate commutativity through a heuristic ranking function for quantum gate selection. The QCC Problem The problem tackled in this work consists in compiling a given quantum circuit on a specific quantum hardware architecture. To this aim, we focus on the Quantum Alternating Operator Ansatz (QAOA) framework [9] a generalization of the Quantum Approximate Optimization Algorithm (QAOA) circuits [6,8], a class of hybrid quantum algorithms used in the literature to solve problems like the Max-Cut, while the Graph Coloring problem has received much less attention. The quantum hardware architecture we consider is inspired by the one proposed by Rigetti Computing Inc. [20]. The quantum circuits that solve the benchmark problems considered in this work are characterized by a high number of commuting quantum gates (i.e., gates among which no particular order is superimposed) that allow for great flexibility and parallelism in the solution, which makes the corresponding optimization problem very interesting and allows for an a significant depth minimization potential to limit circuit’s decoherence [21]. The rest of this section is devoted to: (i) describing the Graph Coloring problem and (ii) providing a formulation of the Quantum Circuit Compilation Problem (QCCP). A. Oddi et al. Fig. 2. Example of quantum circuit: (a) not NN-compliant; (b) NN-compliant through the insertion of a swap gate between qbits n1 and n2 just before the last gate, which exchanges the position of their respective qstates. It is implicitly supposed that at the beginning, the i-th qstate is resting on the i-th qubit. The Graph Coloring Problem Given a graph G(V, E) with n = |V | nodes and m = |E| edges, the objective is to maximize the number of edges in E that have end points with different colours, using for each node one among k available colours (k > 2), see Fig. 3a. Similarly to the MaxCut problem case, the quantum state preparation circuit within the QAOA solving framework relative to the Graph Coloring problem is divided in the following ordered phases: (i) initial state preparation (INIT block), (ii) phase-shift (P-S block), and (iii) mixing (MIX block) (see Fig. 3b). Specifically, the initial state preparation phase serves the purpose of initializing the quantum states to represent a feasible initial assignment, and its objective is to create a superposition with equal coefficients of all the k n possible colorings (WN state [4]), following the one-hot encoding [7]. 
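Before the quantum encoding is detailed below, a short classical sketch of the objective being maximized (the number of edges whose endpoints receive different colours) may help as a point of reference; the 4-node graph and the colour assignment are arbitrary illustrations, not the instance of Fig. 3a.

```python
# Classical view of the Graph Coloring objective targeted by the QAOA circuit:
# count the edges whose two endpoints receive different colours.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
colours = {0: 0, 1: 1, 2: 2, 3: 1}          # one of k = 3 colours per node

properly_coloured = sum(1 for u, v in edges if colours[u] != colours[v])
print(properly_coloured, "of", len(edges), "edges properly coloured")
```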
According to the one-hot encoding, k qubits are required to represent the color of each vertex, where all but the i-th qubit (1 ≤ i ≤ k) are assigned the |0⟩ value and the i-th qubit, which is assigned the |1⟩ value, indicates whether that node is coloured with the colour i. As a consequence, in order to solve a Graph Coloring instance characterized by n nodes and k colors following the one-hot encoding, it is necessary to use quantum machines with at least n · k qubits. More concretely, the feasible initial state assignment is obtained through the utilization of a series of controlled-G(p) rotations followed by an inverted CNOT (WN gates, see Fig. 3c). The analysis of the specific circuitry necessary to develop the WN quantum state is beyond the scope of this paper; the interested reader may refer to [4]. The P-S phase is composed of a series of phase-shift (RZZ) gates whose task is counting the edges colored with different colors. For this purpose, an RZZ gate (see Fig. 3c) is applied to all the (k² − k)/2 combinations of different colors associated with the end-points of any edge of the graph to be colored. All the phase-shift gates are commutative, so the compilation process does not need to worry about their order in the final circuit. Finally, the MIX phase serves the purpose of implementing the rotation of all the k colors on every node of the graph, thus potentially allowing any possible color assignment. The basic component of the MIX phase is the RXX RYY (or MIXXY) gate (see Fig. 3c), applied to each vertex of the graph to be colored, and for each pair of adjacent colors in the graph that represents the color rotation on each vertex. The placement of the MIXXY gates in the compiled circuit requires some attention, as these gates are only partially commutative (see the next section). Fig. 3. (a) An example of Graph Coloring instance with k = 3 colors. (b) Schema of the quantum state preparation circuit within the QAOA framework, composed of the initialization block, P-S (phase-shift) block and MIX block. (c) Decomposition in terms of unary and binary basic gates of the quantum gates that respectively compose the three previous blocks. Figure 3a shows an example of the graph G that represents a Graph Coloring problem instance composed of 5 vertices, 8 edges and k = 3 colors. Figure 3b presents the quantum state preparation schema of the QAOA framework, typically composed of the initial qubit allocation block (state initialization), the P-S (phase-shift) block and the MIX block. In the Graph Coloring problem case, each of the previous three blocks is composed of particular quantum gate aggregations: the WN, the RZZ (phase-shift), and the MIXXY gates respectively, shown in Fig. 3c. Generally, the P-S and the MIX blocks within the QAOA framework can be executed along multiple passes (p) in order to obtain more accurate results; in the context of this work, we consider quantum circuits composed of two passes (p = 2). Quantum Gate Compilation Problem Formally, the Quantum Circuit Compilation Problem (QCCP) is a tuple P = C0, L0, QM, where C0 is the input quantum circuit, representing the execution of the Graph Coloring algorithm, L0 is the initial assignment of the i-th qstate qi to the i-th qubit ni, and QM is a representation of the quantum hardware as a multigraph. – The input quantum circuit is a tuple C0 = Q, VC0, TC0, where: (1) Q = {q1, q2, . . .
, qN } is the set of qstates which, from a planning & scheduling perspective, represent the resources necessary for each gate’s execution (see for example [15], Chap. 15); (2) VC0 = WN ∪ P-S ∪ M IXXY ∪ {gstart , gend } represents the set of state initialization, phase-shift and mix gate operations that have to be scheduled. Note that all the previous gates are binary, in the sense that they require two qstates. Note also that gstart and gend are two fictitious reference gate operations requiring no qstates. The execution of every quantum gate requires the uninterrupted use of the involved qstates during its processing time, and each qstate qi can process at most one quantum gate at a time. (3) Finally, T C0 is a set of simple precedence constraints imposed on the WN , P-S, M IXXY and {gstart , gend } sets, such that: (i) each gate in the three sets WN , P-S, M IXXY occurs after gstart and before gend ; moreover, within the same pass: (ii) every P-S gate must follow any WN gate with which it shares a qstate; (iii) any M IXXY gate must follow any P-S gate with which it shares a qstate; (iv) all the P-S are totally commutative; (v) a partial ordering exists in the M IXXY set, as follows: the M IXXY is initially partitioned in two sets called M IXodd and M IXeven depending on the numbering of their initial qstate; all the gates mix ∈ M IXodd can commute as they have no qstate in common, and the same applies to all the gates mix ∈ M IXeven , while there exists a precedence imposed between a mix ∈ M IXodd and a mix ∈ M IXeven if and only if they share at least one qstate. Between two consecutive passes, no P-S gate that belongs to the i + 1-th pass can be executed before any M IXXY gate that belongs to the i-th pass if they share at least one qstate. – L0 is the initial assignment at the time origin t = 0 of qstates qi to qubits ni . – QM is a representation of the quantum hardware as an undirected multigraph QM = VN , EWN , Ep-s , Eswap , where VN = {n1 , n2 , . . . , nN } is the set of qubits (nodes), Ep-s , Eswap or EWN is a set of undirected edges (ni , nj ) representing the set of adjacent locations the qstates qi and qj of the gates p-s( qi , qj ), swap( qi , qj ) or WN (qi , qj ) can potentially be allocated to. Figure 1 shows an example of quantum hardware. A feasible solution is a tuple S = SWAP, T C, which extends the initial circuit C0 to a circuit CS = Q, VCS , T CS , such that VCS = SWAP ∪ WN ∪ P-S ∪ MIX ∪ {gstart , gend } and T CS = T C0 ∪ T C where: (i) SWAP is a set of additional swap( qi , qj ) gates added to guarantee the adjacency constraints for the set of WN , P-S and M IXXY gates, and (ii) T C is a set of additional simple precedence constraints such that: – for each qstate qi , a total order i is imposed among the set Qi of operations requiring qi , with Qi = {op ∈ WN ∪ P-S ∪ M IXXY ∪ SWAP : op requires qi }; Quantum Circuit Compilation for the Graph Coloring Problem Algorithm 1. Greedy Randomized Search Require: An problem P , stop criterion Sbest ← CompileCircuit(P ) while (stopping criterion not satisfied) do S ← CompileCircuit(P ) if (depth(S) < depth(Sbest )) then Sbest ← S end if end while return (Sbest ) – all the wN (qi , qj ), p-s( qi , qj ), mixXY (qi , qj ) and swap( qi , qj ) gate operations are allocated on adjacent qubits in QM ; – the graph VCS , T CS does not contain cycles. Given a solution S, a path between the two fictitious gates gstart and gend is a sequence of gates gstart , op1 , op2 , . . . 
, opk , gend , with opj ∈ WN ∪P-S∪M IXXY ∪ SW AP , such that gstart op1 , op1 op2 , . . . , opk gend ∈ T C0 ∪ T CS . The length of the path is the number of all the path’s gates and depth(S) is the length of the longest path from gstart to gend . An optimal solution S is a feasible solution characterized by the minimum depth. A Greedy Randomized Search Algorithm In this section we provide a detailed description of the Greedy Randomized Search (GRS) procedure used to compile the circuit introduced in previous Sect. 3. GRS has traditionally proved to be a very effective method for the resolution of complex optimization problems (such as the QCCP ), as it realizes a simple optimization process that quickly guides the search towards good solutions [10,16,19]). The GRS is particularly useful in cases where a high-quality solution is needed in a relatively short time. Among other applications, it is particularly suitable for constraint-based scheduling problems; since the QCCP can be reduced to a Planning and Scheduling (P&S) problem [17,21]. Algorithm 1 depicts the complete randomized search algorithm for generating a near-optimal solutions, which is designed to invoke the CompileCircuit() procedure until a stop criterion is satisfied. It essentially realizes an optimization cycle in which a new solution S is computed at each iteration through the CompileCircuit() algorithm, and its depth (depth(S)) is compared with the best depth found so far (depth( Sbest )) in the iterative process. In case depth(S) is smaller than depth( Sbest ), then the current solution S becomes the new best solution Sbest . The optimization process continues until a stopping condition (generally a max time limit) is met, where the GRS procedure returns the best solution found. As can be readily observed, the efficacy of the GRS mainly depends on the efficacy of the A. Oddi et al. Algorithm 2. Compile Circuit Require: A problem P = C0 , L0 , QM S ← InitSolution(P ); t←0 while not all the P-S and M IX operations are inserted in S do op ← SelectExecutableGate(P , S, t) if op = nil then S ← InsertGate(op, S, t) else t←t+1 end if end while return S CompileCircuit() procedure (described in the following section), which has the task of synthesizing increasingly better solutions. 4.1 Compile Circuit Algorithm Algorithm 2 is a randomized algorithm, it operates on macro-gates containing primitive gates that use two qstates at most. Indeed, Algorithm 2 is in itself a heuristically-based iterative algorithm that implements a constructive methodology where a solution is built from scratch using a randomized ranking heuristic. This heuristic returns a ranking among the gates that takes into account the “neighbouring cost” of all the gates that have yet to be inserted in the solution. At each iteration, a subset of gates that guarantee the fastest realization of the neighbouring conditions of all the remaining gates is generated and one gate is selected at random from this subset, for insertion in the current partial solution. Algorithm 2 takes as input a QCCP problem P = C0 , L0 , QM , and proceeds by chronologically inserting in the partial solution S one gate operation at a time until all the gates in the set WN ∪ P-S ∪ M IXXY are in S. Let op ∈ Qi be a general gate operation that involves qstate qi , we define a chain chi = {op ∈ Qi : op ∈ S} as the set of gates involving qi and currently present in the partial solution S, among which a total order is imposed. 
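A compact sketch of the outer loop of Algorithm 1 and of the depth measure it minimizes may be useful at this point; it is illustrative only: `compile_circuit` stands in for Algorithm 2, and the compiled circuit is assumed to be returned as a (successors, g_start, g_end) gate DAG rather than the chain-based bookkeeping described here.

```python
import time

def depth(successors, g_start, g_end):
    """Number of gates on the longest g_start -> g_end path of an acyclic solution graph."""
    memo = {}
    def longest(g):
        if g == g_end:
            return 1
        if g not in memo:
            memo[g] = 1 + max(longest(nxt) for nxt in successors[g])
        return memo[g]
    return longest(g_start)

def grs(problem, compile_circuit, time_budget_s=10.0):
    """Outer loop of Algorithm 1: keep the shallowest randomized compilation."""
    best = compile_circuit(problem)
    deadline = time.time() + time_budget_s
    while time.time() < deadline:                 # stopping criterion: a CPU-time limit
        s = compile_circuit(problem)
        if depth(*s) < depth(*best):              # depth(S): length of the longest path
            best = s
    return best
```

The restart-until-timeout structure is what makes the randomization pay off: each call to the inner compiler explores a different gate ordering, and only the shallowest solution found so far is retained.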
Let us also define last(chi ) as the last gate in the chain chi according to the imposed total order and nlast(chi ) as the QM node at which the last operation in the chain chi terminates its execution. Finally, we define the state of a partial solution as follows. Given a partial solution S, the state LS is the tuple LS = nlast(ch1 ), nlast(ch2 ), . . . , nlast(chN ) of QM locations (nodes) where each last chain operation last(chi ) terminates its execution. The first step of Algorithm 2 is the initialisation of the partial solution S; in particular, it sets the current state LS to the init value L0 by initialising the locations of every qstate qi (i.e., for every chain chi ) at the time origin1 t = 0. 1 It is implicitly supposed that at the beginning, the i-th qstate is initialized at the i-th location. Quantum Circuit Compilation for the Graph Coloring Problem The core of the algorithm is the function SelectExecutableGate(), which returns at each iteration either one of the gates in the set WN ∪ P-S ∪ M IXXY or a swap( qi , qj ) gate in the SWAP set necessary to guarantee NN-compliance as described in the previous Sect. 3. Indeed, it is a random algorithm targeted to minimize the solution depth, in particular its implementation is inspired to [3], such that the selection of a gate is based on two criteria: (i) the earliest start time gate selection (a value correlated to depth minimization); (ii) a metric to minimize the number of swaps. At each iteration, SelectExecutableGate(P , S, t) selects the next gate to be inserted in the solution by means of the InsertGate(op, S, t) method. In all time instants t where no quantum gate can be selected for insertion, the current time t is increased (t = t+1). In particular, SelectExecutableGate() resembles Algorithm 3 (see [2], page 8) with the following important difference: while the cited Algorithm 3 generates a set of eligible gates Ω and then selects a gate at random on the basis the proposed pheromone model (see [2]), the SelectExecutableGate() procedure chooses one gate at random following the same strategy proposed in [17], so that a set of equivalent gates Ω ∗ is extracted from Ω by identifying one gate op∗ associated with the minimal lexicographic heuristic value Δsum (op∗ ) (see [17] for further details on its definition) and by considering equivalent to op∗ all the gates op such that Δsum (op) = Δsum (op∗ ), Ω ∗ = {op : op ∈ Ω, Δsum (op) = Δsum (op∗ )}. A full description of the procedure SelectExecutableGate() is given in [2]. The randomly selected gate op ∈ Ω ∗ is inserted in the partial solution S at the earliest feasible time as the last operation of the chains relative to the qstates involved in op: last(chi ) ← op; subsequently, the state LS of the partial solution is updated accordingly. Algorithm 2 proceeds until a complete solution is Experimental Evaluation We have implemented and tested the proposed ideas leveraging the Qiskit opensource quantum-related framework [1]. Qiskit is a known open-source Software Development Kit for working with quantum computers at the level of pulses, circuits and application modules. It allows for the creation, modification, simulation, and optimization of quantum circuits on a set of both simulated and real quantum architectures, as well as allowing the possibility to test mapping algorithms on arbitrary quantum hardware topologies. 
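As a rough illustration of the Qiskit side of the comparison, and not of the benchmark circuits used in this paper, a toy circuit can be routed onto a fixed coupling map with the SABRE heuristic and its depth read off afterwards; the 8-qubit ring coupling map, the circuit and the parameter values below are assumptions made only for the example.

```python
# Illustrative only: route a toy circuit with Qiskit's SABRE routing heuristic
# and compare the depth before and after routing.
from qiskit import QuantumCircuit, transpile
from qiskit.transpiler import CouplingMap

coupling = CouplingMap.from_ring(8)           # 8 qubits connected in a ring

qc = QuantumCircuit(8)
qc.h(range(8))
qc.cx(0, 4)                                    # non-adjacent pair: needs routing
qc.cx(2, 6)

routed = transpile(qc, coupling_map=coupling, routing_method="sabre",
                   optimization_level=1, seed_transpiler=0)
print("depth before routing:", qc.depth(), "after:", routed.depth())
```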
Our contribution for this study focuses on the process of quantum circuit compilation with reference to a given hardware topology with the aim of minimizing the circuit’s depth. The proposed procedure was implemented in Python in order to allow its integration within Qiskit. The performance of the algorithm was tested on a benchmark set specifically created to represent the application of quantum computing to the Graph Coloring problem. 5.1 The benchmark set for the graph colouring circuits is obtained as an extension of part of the N 8 benchmark set for the Max-Cut problem [21]. Following the A. Oddi et al. Fig. 4. Comparison between GRS and SABRE approach in [21], the graph G for which the optimal coloring assignment needs to be found are randomly generated as Erd¨ os-R´enyi graphs [5]. In particular, 100 graphs are generated for the N = 8 qubit case. Half (50 problems) are generated by choosing N of N (N − 1)/2 edges over 7 qstates randomly located on the circuit of size 8 qubits (referred as ‘Utilization’ u = 90%). The other 50 problems are generated by choosing N edges over 8 qstates - referred as utilization u = 100%). For the graph colouring benchmark, we only consider the N 8 problems with utilization u = 100%, and such that the connected graph contains exactly 7 nodes, assigning three colours (k = 3) to each node of the graph, for a total of 22 graph instance problems. Hence, quantum processors with at least 21 qubits (7 nodes times 3 colours) are necessary for the execution of such instances (see Sect. 3.1). More specifically, we consider a Rigetti-inspired 21 qubit processor and set p = 2 (two PS-mixing passes). 5.2 The Python version of the proposed greedy randomized search (GRS ) algorithm compiles a QAOA circuit with the following choices: (i) a one-hot encoding to represent the graph-coloring problems [7], and (ii) a decomposition procedure for the QAOA blocks based on the identification of odd and even M IXXY gates [9,22], as explained in Sect. 3.2. Figure 4 compares the proposed GRS algorithm with the SABRE compiler available in Qiskit (SabreSwap), launched according to its three different Quantum Circuit Compilation for the Graph Coloring Problem heuristics (basic, lookahead, and decay). The algorithms are compared with respect to the depth of the compiled circuits (the circuit’s depth represents the longest path in the compiled circuit graph). For each algorithm, a CPU time limit of 10 seconds is imposed on each run. From the results in Fig. 4 it is clear that GRS outperforms SABRE in all the latter’s execution modes. One possible explanation for the superiority of GRS is its capability to better exploit the commutativity rules of the gates in the QAOAbased Graph Coloring quantum circuit instances. Indeed, our algorithm imposes no particular order in the selection of the WN , P-S, and M IXXY macro-gates as the solution is built, beyond the precedence constraints originally present in the input quantum circuit, contained in the T C0 set described in Sect. 3.2. As opposed to GRS, SABRE performs the SWAP addition process reasoning directly on the circuit expressed in terms of basic gates, and it is not capable of changing the order of such gates after the circuit is loaded. This study focused on quantum computing as an accelerator for optimization problem resolution. We have considered the compilation techniques for Noisy Intermediate-Scale Quantum (NISQ) devices [18]. 
In particular, we have explored the Quantum Alternating Operator Ansatz (QAOA) framework [9] for solving optimization problems and studied the quantum circuits for the Graph Coloring reference problem. We have proposed a greedy randomized search (GRS) algorithm targeted at optimizing the compilation of quantum circuits and defined an original benchmark set for testing compilation algorithms. On the basis of our empirical validation the proposed GRS algorithm outperforms other compilation algorithms available in the Qiskit framework. Acknowledgement. This work is the result of an Ariadna study, a joint collaborative research project with the Advanced Concepts Team (ACT) of the European Space Agency (ESA): Meta-Heuristic Algorithms for the Quantum Circuit Compilation Problem, ESA Contract No. 4000134995/21/NL/GLC/my. References 1. Qiskit: an open-source framework for quantum computing (2021). https://doi.org/ 10.5281/zenodo.2573505 2. Baioletti, M., Rasconi, R., Oddi, A.: A novel ant colony optimization strategy for the quantum circuit compilation problem. In: Zarges, C., Verel, S. (eds.) EvoCOP 2021. LNCS, vol. 12692, pp. 1–16. Springer, Cham (2021). https://doi.org/10.1007/ 978-3-030-72904-2 1 3. Chand, S., Singh, H.K., Ray, T., Ryan, M.: Rollout based heuristics for the quantum circuit compilation problem. In: 2019 IEEE Congress on Evolutionary Computation (CEC), pp. 974–981 (2019) 4. Cruz, D., et al.: Efficient quantum algorithms for GHZ and w states, and implementation on the IBM quantum computer. Adv. Quant. Technol. 2(5–6), 1900015 (2019) A. Oddi et al. 5. Erdos, P., Renyi, A.: On the evolution of random graphs. Publ. Math. Inst. Hungary. Acad. Sci. 5, 17–61 (1960) 6. Farhi, E., Goldstone, J., Gutmann, S.: A quantum approximate optimization algorithm. arXiv preprint arXiv:1411.4028 (2014) 7. Fuchs, F.G., Kolden, H.Ø., Aase, N.H., Sartor, G.: Efficient encoding of the weighted max $$k$$-cut on a quantum computer using QAOA. SN Comput. Sci. 2(2), 89 (2021). https://doi.org/10.1007/s42979-020-00437-z 8. Guerreschi, G.G., Park, J.: Gate scheduling for quantum algorithms. arXiv preprint arXiv:1708.00023 (2017) 9. Hadfield, S., Wang, Z., O’Gorman, B., Rieffel, E., Venturelli, D., Biswas, R.: From the quantum approximate optimization algorithm to a quantum alternating operator ansatz. Algorithms 12(2), 34 (2019) 10. Hart, J., Shogan, A.: Semi-greedy heuristics: an empirical study. Oper. Res. Lett. 6, 107–114 (1987) 11. Kole, A., Datta, K., Sengupta, I.: A heuristic for linear nearest neighbor realization of quantum circuits by swap gate insertion using n-gate lookahead. IEEE J. Emerg. Sel. Topics Circuits Syst. 6(1), 62–72 (2016). https://doi.org/10.1109/JETCAS. 2016.2528720 12. Kole, A., Datta, K., Sengupta, I.: A new heuristic for n-dimensional nearest neighbor realization of a quantum circuit. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 37(1), 182–192 (2018). https://doi.org/10.1109/TCAD.2017.2693284 13. Li, G., Ding, Y., Xie, Y.: Tackling the qubit mapping problem for NISQera quantum devices. CoRR abs/1809.02573 (2018). https://arxiv.org/1809.02573 arxiv.org/abs/1809.02573 14. Maslov, D., Falconer, S.M., Mosca, M.: Quantum circuit placement: optimizing qubit-to-qubit interactions through mapping quantum circuits into a physical experiment. In: Proceedings of the 44th Annual Design Automation Conference, DAC’07, pp. 962–965. ACM, New York, NY, USA (2007). https://doi.org/10.1145/ 1278480.1278717 15. Nau, D., Ghallab, M., Traverso, P.: Automated Planning: Theory & Practice. 
Morgan Kaufmann Publishers Inc., San Francisco (2004) 16. Oddi, A., Smith, S.: Stochastic procedures for generating feasible schedules. In: Proceedings 14th National Conference on AI (AAAI-97), pp. 308–314 (1997) 17. Oddi, A., Rasconi, R.: Greedy randomized search for scalable compilation of quantum circuits. In: van Hoeve, W.J. (ed.) Integration of Constraint Programming, Artificial Intelligence, and Operations Research, pp. 446–461. Springer International Publishing, Cham (2018). https://doi.org/10.1007/978-3-319-93031-2 32 18. Preskill, J.: Quantum computing in the NISQ era and beyond. Quantum 2, 79 (2018). https://doi.org/10.22331/q-2018-08-06-79 19. Resende, M.G., Werneck, R.F.: A hybrid heuristic for the p-median problem. J. Heuristics 10(1), 59–88 (2004) 20. Sete, E.A., Zeng, W.J., Rigetti, C.T.: A functional architecture for scalable quantum computing. In: 2016 IEEE International Conference on Rebooting Computing (ICRC), pp. 1–6 (2016).https://doi.org/10.1109 /ICRC.2016.7738703 21. Venturelli, D., Do, M., Rieffel, E., Frank, J.: Temporal planning for compilation of quantum approximate optimization circuits. In: Proceedings of the 26th International Joint Conference on Artificial Intelligence, IJCAI-17, pp. 4440–4446 (2017). https://doi.org/10.24963/ijcai.2017/620 22. Wang, Z., Rubin, N.C., Dominy, J.M., Rieffel, E.G.: xy mixers: analytical and numerical results for the quantum alternating operator ansatz. Phys. Rev. A 101, 012320 (2020) Toward a Heterogeneous Multi-robot Framework for Priority-Based Sanitization of Railway Stations Riccardo Caccavale1 , Mirko Ermini2 , Eugenio Fedeli2 , Alberto Finzi1 , Vincenzo Lippiello1 , and Fabrizio Tavano1,2(B) 1 Universit` a degli studi di Napoli “Federico II”, via Claudio 21, 80125 Naples, Italy {riccardo.caccavale,alberto.finzi,vincenzo.lippiello}@unina.it 2 Rete Ferroviaria Italiana, Piazza della Croce Rossa 1, 00161 Rome, Italy {mi.ermini,e.fedeli}@rfi.it, [emailprotected] Abstract. We present a new framework for the prioritized multi-robot sanitization of railway stations based on Deep Reinforcement Learning. The proposed framework allows us to define teams of robots having different sanitizing strategies/capabilities, e.g., faster robots rapidly sanitizing small areas in cooperation with slower but long-range ones. Here, robot-specific policies are defined in order to accommodate the different capabilities of the single agents, while two global metrics are defined to assess the performance of the overall team. This capability of managing heterogeneous teams is an important requirement for the infrastructure manager Rete Ferroviaria Italiana S.p.A., which plans to verify to what extent different technologies or different strategies can be combined to reduce costs or increase cleaning efficiency. We tested our framework considering real data collected by the WiFi network of the main Italian railway station, Roma Termini, comparing its results with a similar Deep Reinforcement Learning system where homogeneous robots are employed. Keywords: Heterogeneous multi-robot system learning · Priority-based sanitization · Deep reinforcement The work illustrated in this paper is motivated by a request from the Italian railway infrastructure manager Rete Ferroviaria Italiana concerned about the spread of Covid-19 disease in the common areas of railway stations. 
A recent [15] study shows that in train stations there is a high probability of being infected during the pandemic: passengers gathered in the corridors and platforms of stations, eating at restaurants, getting on trains, facilitates the transmission of diseases. The pandemic caused by the SARS-CoV-2 has spawned a crisis that has affected the railway sector in a significant way [31], for example, by inducing people to prefer c The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 A. Dovier et al. (Eds.): AIxIA 2022, LNAI 13796, pp. 387–401, 2023. https://doi.org/10.1007/978-3-031-27181-6_27 R. Caccavale et al. cars instead of trains [4]. It is then strategic for the infrastructure managers (such as the Italian Rete Ferroviaria Italiana) to deploy adequate and modern tools to prevent future contagion inside railway stations [26]. In the last two decades, we have seen the spreading in the world of diseases such as SARS-CoV, MERS-CoV, or COVID-19 with different waves [9,33]. In this regard, disinfectant robot technologies are proven very useful in fighting global pandemics [32] by reducing the number of people involved in the cleaning process and by optimizing sterilization. In this work, we propose a multi-robot sanitizing framework specific for teams of robots capable of cleaning human-populated environments [20] as railway stations. Our main goal is to provide a strategy in which a team of autonomous and heterogeneous indoor cleaning robots executes the sanitizing activities by exploiting both the specific capabilities offered by the different robots and the information about human presence. The latter knowledge is stored in a shared heatmap representing the most populated areas retrieved from the station’ WIFI infrastructure. The proposed work is an extended version of our previous multi-robot sanitizing framework [3] where heterogeneous agents are considered. Specifically, we extended the framework by allowing different robots with different features - such as cleaning range, the shape of the cleaning area, or speed - to cooperate during the execution of the shared cleaning task. Analogously to [3], we propose a framework based on a decentralized deep reinforcement learning, where each robot of the team performs its own learning process. Our claim is that the robot-specific policies produced by this approach permits cooperation between the heterogeneous robots without performance degradation of the overall team. The need to consider heterogeneous robots is relevant for the infrastructure manager. Different typologies of robots with different cleaning strategies can be deployed, for instance, by integrating several small (low-cost) robots, having limited sanitizing capabilities, in alternative to bigger but more performing ones. The possibility to increase the number of robots in a team with different or less expensive models without reducing the cleaning performance of the overall system may be convenient in terms of costs but also in terms of hardware and software maintenance [10], especially over prolonged usage periods [24]. In the literature, frameworks that simulate heterogeneous teams of robots are often considered and deployed in several different contexts [28]. For instance, in the pursuit-evasion class of games, robots with different capabilities cooperate to catch moving targets in a shared environment and their pursuit strategies must be adapted with respect to the behavior of the opponent [30,34]. 
This case is similar to our domain, where clusters of people appear/disappear/move around the station and the robots’ cleaning strategy should be adapted accordingly. The benefit of the coordinated heterogeneous robots is also emphasized in [35] where different robots (aerial and ground vehicles) are deployed. In the cleaning context, several multi-robot frameworks have been proposed based on Coverage Path Planning (CPP) [11–14,17,21–23]. In these works, each robot is assigned to a specific area of the environment executing fixed shaped paths (spirals, rectangles, etc.) to cover them. These methods are effective in providing a continuous cleaning service which maximizes the coverage and minimizes the Heterogeneous Multi-robot Framework for Priority-Based Sanitization idleness of the agents, but priority-based cleaning is hardly considered. Priority issues are instead considered in Persistent CPP [16,19,25,29], where robots’ paths have to be adjusted in order to ensure that static prioritized locations are visited within the pre-specified time period. These approaches often consider static priorities and a graph-based representation of the environments with only a limited number of nodes. Deep Q-Learning (DQN) methods for sanitization are considered in [11,22], but in a single robot framework. In contrast, our approach is to dynamically update the behavior of a team of heterogeneous robots by considering the continuous evolution about the positions of the people and the diffusion of the contaminants in the map. Moreover, we are interested in finding a multi-robot sanitization strategy considering heterogeneous teams and high resolution priorities in very large railway stations. For this reason, we proposed a solution based on Multi-Agent Reinforcement Learning [3] capable of adapting the cleaning strategy to the continuous changes in a very large dynamic environment. In this work, our main contribution is the design of a heterogeneous framework where multiple mobile robots of different characteristics and typologies learn to cooperate during the execution of cleaning tasks. To evaluate the approach, we consider a very large crowded environment from a real railway station, exploiting WiFi information about the distribution of people in order to assess the performance of different heterogeneous teams of robots. We also propose an assessment of a heterogeneous robotic team in a real case study, using a one-day data recording of the people’s movements inside the Roma Termini station, retrieved from the Meraki Cisco System WiFi Network. In this context, the empirical results collected show that the performance of the heterogeneous team is comparable to that of the homogeneous team working under the same conditions. The rest of the paper is structured as follows. In Sect. 2, we describe the architecture of the proposed framework along with the main components and the overall learning process. In Sect. 3, we focus on the experiments about the convergence and performance of the proposed heterogeneous team in comparison with the homogeneous one. Finally, Sect. 4 concludes the paper and discusses future works. The Architecture The multi-robot DQN approach proposed in this work is an evolution of the decentralized client-server architecture presented in [3], where it is now possible to specify different characteristics of each single robot. In particular, the team is composed of k robots with different capabilities, each endowed with a robotspecific policy. 
The robots interact with a central system (server) that maintains/updates a shared representation of the station in the form of a heatmap whose hot spots are areas to be sanitized. Specifically, we represent the environment as a 2-dimensional gridmap whose grids outline 1 m2 areas of the station. Grids are then associated with a priority level (the heatmap), which depends on the distribution of people in the station and indicates how risky the area is and how urgently the robots should sterilize it. The goal of the agents is to R. Caccavale et al. Fig. 1. Graphical representation of the framework including multiple agents (left), each endowed with agent-specific experience replay buffers and networks, along with a single server (right) that exploits WiFi statistics to provide a heatmap of priorities (red to yellow spots) for the agents. (Color figure online) suitably navigate the gridmap cleaning the traversed grids in the process, in so minimizing the risky areas and reducing the level of priority on the overall map. We formalize our domain as a distributed multi-robot Deep Q-Learning problem [1] where a set of agent-specific policies (π1 , . . . , πk ) should be found for the k robots in order to guide each agent toward the cleaning targets. More formally, we define M as the gridmap of the station, X as the set of all free-obstacle grids in the map, S as the set of possible heatmaps (i.e., priority distributions) on the map M and A as the set of actions available for a single agent, where ai ∈ A drives the agent i from the current grid to an adjacent one. The aim is to find, for each robot i, a suitable policy πi : S × X → A associating the agent positions xi ∈ X and the distributions of priority in the map s ∈ S to robotspecific actions ai ∈ A, driving the agent from the current grid to the next grid to be sanitized. A representation of the overall architecture is depicted in Fig. 1. The framework includes a team of different typologies of mobile cleaning robots. Every typology is characterized by a different dimension of the area that agents sanitize during their movements in the environment. Each robot communicates with a single shared WiFi server that is responsible for building a heatmap of the Roma Termini railway station. The server updates the heatmap considering the information about the agents’ cleaning activities and the (anonymized) data on the location of people, which are used to define the risky areas to be sterilized. The role of each agent is to elaborate the heatmap by means of an agent-specific DQN and to update the local strategy πi considering their specific capabilities, the environmental settings and the priorities in the map. Heterogeneous Multi-robot Framework for Priority-Based Sanitization Fig. 2. Planimetry of the Roma Termini shared by Rete Ferroviaria Italiana (a) and the selected occupancy gridmap (b). (Color figure online) Heatmap Definition and Update The gridmap representing the environment is built from the real planimetry of the Roma Termini railway station, which has been provided to us by Italian Infrastructure Manager Rete Ferroviaria Italiana. The area of the station selected for our experiments is depicted in Fig. 2 (yellow box). We defined this area because, on the one hand, it represents the indoor part of the station, where open air and wind cannot attenuate contamination and, on the other hand, it includes the areas of the station where it is more likely to have crowed areas. 
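To make the state and action bookkeeping concrete, the following NumPy sketch, an illustrative assumption rather than the authors' implementation, shows a priority heatmap on the 100 × 172 gridmap, an 8-connected move, and the zeroing of the priority inside a square cleaning area after the move; obstacles and the learning machinery are omitted.

```python
import numpy as np

# Illustrative sketch: heatmap state s, 8-connected action set A, and the
# cleaning effect applied by the server after an agent move.
H, W = 100, 172
heatmap = np.zeros((H, W), dtype=float)        # priority in [0, 1] per 1 m^2 cell
heatmap[40:45, 60:66] = 1.0                     # a hot spot where people were detected

ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1),    # moves to the 8 neighbouring cells
           (-1, -1), (-1, 1), (1, -1), (1, 1)]

def step(pos, action_idx, half_side=2):
    """Move the agent one cell and clean a (2*half_side+1)^2 square around it."""
    r = min(max(pos[0] + ACTIONS[action_idx][0], 0), H - 1)
    c = min(max(pos[1] + ACTIONS[action_idx][1], 0), W - 1)
    r0, r1 = max(r - half_side, 0), min(r + half_side + 1, H)
    c0, c1 = max(c - half_side, 0), min(c + half_side + 1, W)
    cleaned_priority = heatmap[r0:r1, c0:c1].sum()   # cumulative priority sanitized
    heatmap[r0:r1, c0:c1] = 0.0                      # server marks the area clean
    return (r, c), cleaned_priority

pos, cp = step((42, 58), action_idx=3)          # move right toward the hot spot
print(pos, round(cp, 2))
```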
Selected sectors include: access gates for the railway lines, commercial activities like shops, restaurants, ticket offices, waiting rooms, and luggage storage. Starting from this gridmap, we design a heatmap where populated areas are associated with colored spots (from red to yellow) representing the cleaning priority that the heterogeneous team should take into account during the sanitizing process. More specifically, the resulting heatmap has a dimension of 100 × 172 pixels and a resolution of 1 m2 per pixel. During every step of the execution, each robot of the team decides the new position to reach in order to start the cleaning action depending on its own specific capability. After a movement, each robot cleans at a fixed cleaning rate (i.e., 4 pixels per step) a cleaning area of fixed dimensions and shape. Each robot in the team has its assigned dimension and shape of the cleaning area. This area-cleaning process is simulated by holding the robot in the current pose for a certain number of steps which depends by its cleaning rate. In our framework, the WiFi Server constantly communicates with all the members of the team to update of the shared heatmap. Specifically, the server updates the heatmap by removing the cleaning priorities of areas sanitized by the robots, while new priorities are also added as colored spots at the positions of newly detected people. Furthermore, at every step of the execution, the server updates the priorities on the heatmap by simulating the natural spreading and the attenuation of contamination over time. This effect is computed from the position of people (clusters) by modeling the possible spreading of viruses or bacteria using a Gaussian model of dispersion [7]. Specifically, we exploit the R. Caccavale et al. periodic convolution of a Gaussian filter N (μ, σ 2 ) every ψ steps, where μ, σ 2 and ψ are suitable parameters that can be regulated depending on the meters/pixels ratio, the timestep, and the considered typology of spreading (in this work we assume a setting inspired to the aerial diffusion of the Covid-19 [27]). In our case, we set μ and σ according with the spreading parameters proposed in [2,8]. An exemplification of the evolution of a heatmap configuration is provided in Fig. 3. The convolution process acts at every step by incrementally reducing the magnitude of the elements of the heatmap matrix, while distributing the priority on a wider area. Notice that in Fig. 3 there are several black areas (0 priority) that are regions of space associated with the static obstacles of the environment (shops, rooms and walls inside the station). These areas are assumed to be always clean, hence unattractive for the robots. When an agent moves with an action ai ∈ A, it sends the new position to the WiFi Server. The region of the heatmap in the neighborhood of the newly reached position, with the cleaning area assigned to the agent, is cleaned by the server, which then sets to 0 the associated priority level when updating the heatmap. 2.2 Multi-agent Experience Replay and the Learning Process In our framework, we propose a multi-agent variation of the experience replay method proposed in [1,3,18]. In particular, our training scheme exploits a Distributed Training Decentralized Execution (DTDE) approach [6], where each robot is independent during both the execution phase and the training phase, while its own individual policy is updated by considering only its own experience, without explicit information exchange between robots. 
In this framework, our idea is to exploit this DTDE approach to allow robots of different types to cooperate in a heterogeneous team. Robot-specific capabilities are: the travelling speed of the robot in the map (denoted by the movement length in Table 1), the shape and the dimensions of the areas that the robots are able to clean after each movement, and the time that the robot takes to clean the reached area (denoted by the cleaning speed in Table 1). In order to ensure that every robot learns by its own experience, each of the k agents is endowed with a specific replay buffer, along with specific target and main DQNs, which are synchronously updated with respect to the position of the agent and to the shared environment provided by the server (see Fig. 1). The target and the main networks are two identical convolutional neural-network composed of the following layers: the first layer is a 2D convolutional layer with 32 filters 8 × 8, strides (4, 4) and ReLU activation; the second is a 2D convolutional layer with 64 filters 4 × 4, strides (2, 2) and ReLU activation; the third is a 2D convolutional layer with 64 filters 3 × 3, strides (1, 1) and ReLU activation; the fourth is a flatten layer; the fifth layer is a dense layer of 512 neurons still with ReLU activation; finally, the output layer is a dense layer composed of 8 neurons with linear activation. The input of the neural network is an image with 2 channels of dimensions 100 × 172 pixels. In the first channel there is the heatmap, represented as matrix where each element is a real number in the interval [0, 1] where 1 is the maximum priority and 0 means that no cleaning is needed. This matrix can be displayed as a color-coded Heterogeneous Multi-robot Framework for Priority-Based Sanitization Table 1. Parameters of the framework Actor Exp. replay Discount factor γ Maximum Minimum Decay Replay buffer size Target network update Main network update Batch size 0.99 1.0 0.1 9 · 10−7 104 104 steps 4 steps 32 WiFi server Refresh period 60 steps Cluster of people 1 px Long-range robot Cleaning area Cleaning speed Movement length Cleaning shape 25 px 4 px/step 2 px Square Mid-range robot Cleaning area Cleaning speed Movement length Cleaning shape 17 px 4 px/step 2 px Hexagon Short-range robot Cleaning area Cleaning speed Movement length Cleaning shape 9 px 4 px/step 1 px Square Diameter μ σ 5 px 0 0.9 100 × 172 px image (see map in Fig. 3), where black pixels are associated with 0 priority areas, while colors from red to yellow are for increasingly higher priorities. The second channel x is a binary m × n matrix (100 × 172 pixels in our case) representing the position and size of the cleaning area of the robot in the heatmap, which is 1 for the portions of the environment that are currently in the range of the robot cleaning effect, and 0 otherwise. In order to update the networks, we apply the Adam optimizer with learning rate α = 0.00025. A local reward function ri is defined, to permit each agent to evaluate its performance during the cleaning activity in the training process. The local reward function ri is designed to give a benefit to the agents that reach prioritized areas of the environment (hot points), while there is a penalty if a robot meets a fixed obstacle or an already R. Caccavale et al. Fig. 3. Generation of the heatmap from Meraki data. 
visited area (cold point) in the heatmap. In this direction, we firstly introduce a cumulative priority function cpi that summarizes the importance of a cleaned area,

cpi = Σ(j,l) si(j, l) · xi(j, l)   (1)

represented in Eq. 1 as the sum of the element-wise priorities from matrix si in the area sterilized by the agent i (where xi(j, l) = 1). Such value is then exploited to define the reward ri for the agent i as follows:

ri = cpi if cpi > 0; penalty otherwise.   (2)

Specifically, when an agent i sanitizes a priority area, the reward is equal to the cumulative value cpi; otherwise, if no priority is associated with the cleaned area (i.e., cpi = 0) a negative reward penalty < 0 is earned [5] (we empirically set penalty = −2 for our case studies). This way, agents receive a reward that is proportional to the importance of the sanitized area, while routes toward zero-priority areas, such as obstacles or clean regions, are discouraged. Notice that in this framework, when the action of an agent leads to an obstacle (collision), no motion is performed. This behavior penalizes the agent (no further cleaning is performed), thus producing an indirect drive toward collision-free paths. We also define an overall reward function r = Σ(i=1..k) ri to summarize and evaluate the team performance, as illustrated in Fig. 4.

In this section, we show how the proposed heterogeneous multi-robot framework can be deployed in a realistic environment. As illustrated in the previous sections, we consider Roma Termini station (the largest and most populated Italian railway station) as the environment for our experiments. The station is endowed with several access points managed through a Meraki platform of Cisco System WiFi Network that allows remote operators to monitor the information about
In our tests, we consider for each robot the same cleaning speed. The two teams are associated with equal values of the total sum of the cleaning areas. The movement length of each robot, after the conclusion of the sanitization of its cleaning area, is equal to the ray of its own cleaning area. In the first case study we have compare the convergence of the two teams during the training phase by ran
{"url":"https://saynotocaps.org/article/aixia-2022-advances-in-artificial-intelligence-xxist-international-conference-of-the-italian-association-for-artificial-intelligence-aixia-2022-udine-italy-november-28-december-2-2022-proceedings-3031271807-9783031271809-ebin-pub","timestamp":"2024-11-02T05:17:02Z","content_type":"text/html","content_length":"1049411","record_id":"<urn:uuid:9cdae0f6-29f5-40c0-aa12-397313988eef>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00872.warc.gz"}
We had the lesson with Teacher Happy yesterday for the first time. She was fun, energetic and kind. Thank you for the lesson. Thank you so much for your wonderful lesson. He learned a lot and it was so much fun. Thank you so much for your wonderful class today.He really enjoyed and it was a very beneficialtime for him. Thank you for always kindly giving me guidance. He like Happyteacher, so I look forward to seeing you again. I'm sorry for always being playful. Thank you for your kind guidance. Thank you for the fun lesson.I will do my best to review at home, too! I'm glad to hear that you speak slowly and easily. Thank you very much for the great lesson today! His speaking skill is much better ! Thank you for always teaching him. It was clear and easy to understand. Your lessons are always interesting and he enjoys them.Thank you for teaching him so kindly. Your lessons are always interesting and he enjoys them.Thank you so much for your wonderful lesson. It was easy to understand. Thank you for teaching him.He learned a few new word and he had a good time with you. Thank you for your splendid lesson. He had a good time with you and learned a lot of new words. Thank you so much for your wonderful lesson. He likes your fun lesson. Thank you so much for your splendid lesson. He had good time with you and learned a lot today. Thank you very much for today. He likes your wonderful lesson every time. Thank you very much for teaching him. He likes your pleasant lesson. Thank you for your splendid lesson. He had a good time. Thank you for teaching him so kindly. I`m sorry I got late for your message.Thank you for telling me kindly.Aiso thank you. Thank you so much for your splendid class. My daughter saya it was fun.He says that the jump is good. Is it okay with such a condition? Thank you for making slowly when I cannot follow you:) Thank you for your wonderful class. I'm sorry that my PC is out of order. Thank you for fun lesson this year. *<|:-) See you next year! Teacher Happy! Thank you always for the fun times.I'm looking forward to seeing you next time! Thank you very much! I enjoyed your class. Thank you very much. Thank you for two lessons today.I was happy that I could talk about my cats.I enjoyed my second lesson.I would like to talk with teacher more variously.See you next time! 今日は2回のレッスンありがと Thank you for a very fun time.Teacher told me slowly and it was easy to understand.I'm looking forward to the next. とても楽しい時間をありがとうございました。ゆっくり話してくれて、分かりやすかったで Thank you very much for your lesson. Thank you so much for your morning class today! I enjoyed your class very much. Thank you so much. Thank you very much! Thank you very much! Thank you very much! Thank you very much! Thank you very much! Thank you very much! Thank you very much! Thank you very much! Thank you very much! Thank you very much! Thank you very much. I'm getting used to taking SMLE lessons. Thank you for keeping on indicating my small errors with lots of patient. Thank you very much! Thank you very much for your lesson. Whenever Kai has your lesson, he looks so happy (like your name!!) . We are looking forward to seeing next lesson. Thank you very much! Thank you very much! Thank you very much! Thank you very much! Thank you very much! Thank you very much! Thank you very much! Thank you very much! Thank you very much! Thank you very much! Thank you very much! Thank you very much! Thank you very much! Thank you very much! Thank you very much! See you! Thank you very much! Thank you very much! Thank you very much! 
Thank you very much! Thank you very much! Thank you very much! Thank you very much! Thank you very much for your lesson. My son and I enjoyed your class. I hope to see you again. Thank you very much! Thank you very much! Thank you very much! Thank you very much! Thank you very much! Thank you very much! Thank you very much! Thank you very much! Thank you very much! Thank you very much! Thank you very much! Thank you very much! Thank you very much! Thank you very mcuh! Thank you very much! It was a fun time. See you soon. Thank you very much! Thank you very much! Thank you very much! Thank you very much! Thank you very much! Thank you very much! Thank you very much! Very enjoyable lesson! Thanks and see you again. THank you very much for your lesson! I enjoy conversation very much. I am looking forward to seeing you very soon! hi!happy!Thank you for the lesson this morning. I searched on the internet and I found the article that I wanted to say! We can talk about it in the next lesson. See ya! Newborn Defenses We know that a newborn’s immune system is not nearly as effective as an adult’s or even an older child’s, and that it takes many months before a newborn can fight off infection as well as someone whose immune system is fully matured. Nonetheless, you may be pleasantly reassured to know that newborns are much better protected against (or immune to) potential illnesses and diseases than you might otherwise think. This is because during pregnancy, disease-fighting antibodies made in the mother’s immune system are able to make their way across the placenta and into her baby’s body. Fortunately, these antibodies stick around for several months and are able to give newborns an added level of protection from many routine illnesses during this important time when they are not as able to effectively make their own antibodies. However, all good things must come to an end, and infants gradually get less and less benefit from their mothers’ antibodies—that is, unless they are breastfed. Thank you for your lesson.Today's lesson is so short but it's very fun. Thank you for your lesson. It was very interesting. Thank you for teaching me!! Thank you for the lesson, Teacher Happy. I really enjoyed sharing some stories with you. Talk soon. Thank you so much for initiating the conversation, I am sorry I did not prepare the topics that I can initiate because I have been busy, hope to talk to you soon :) I like your lessons very much! I enjoyed a lot! Thank you! See you again soon! Thank you so much for listening to my personal story, hope to talk to you soon :) It was fun to take your lesson. Thank you very much. SMLEを受講しました。 とても聞きやすい声で分かりやすかったです。レッスンは簡単な単語をしようしましたが、会話はなれなくて難しかったです。 レッスン中に初めと終わりで自分の話すスピードが変わるのが感じ Thank you for the nice lesson. See you next time! Thank you so much for cheerful conversation everytime! Thank you Happy teacher. "I will go shopping." "stapmather" etc, I don't know them. I can not speak English a lot. I will try to study English. You are very kind. I was a fun lesson! Thank you very much for your interesting lesson. I could enjoy a lot. I will buy a baby bouncer. Thank you so much for the lesson last night. You always try to teach me lively so I can understand very well. See you next timesoon! Thank you very much! I enjoyed your class.Hope to see you soon~ I improve my English owing to your class very much! Thanks a lot. Thank you for your lesson! Thank you very much for you class today. I hope to talk to you again. Thank you for a lot :) I really enjoyed. I hope see you soon. 
Thank you for talking to me. Thank you for your kind teaching. Your lovely smile has always encouraged me in leaning. Thank you for correct my mistakes~ Thank you for your enjoyable lesson today. My Enlish was sometimes wrong, but you tried to understand what I wanted to say. Many thanks. Thank you very much. I really enjoyed talking to you, Teacher Happy. Your broad smile always encourages me. Thank you. Thank you very much. dispute = to complain because something is not correct. OK Hi Happy, I really enjoyed your lesson today. Thank you. I enjoyed taliking to you,thank you ! Thank you for teaching me! I'm very enjoyed your class. I want to take your class everyday! Do you always send summary? It's very helpful. Thank you so much. I hope that you will be able to enjoy clothes again as soon as possible. Thank you so much(^^♪ Yhank you for your time.I enjoyed your lesson.See you again! Thank you for your lesson! Thank you for teaching me. Until next time. Thank you for your lesson.I enjoyed a lot! Thank you for teaching me.I enjoyed your class. Thank you so much for showing me your cute baby today♪ And I enjoyed our talks again. In my case, I had a high quantity of breast milk. So I had a severe pain and I pumped milk often. But it is said that it's good that mother pump milk at intervals of 2 or 3 days. I wish your success of quitting breast-feeding. (^^♪ Thank you for teaching me.I'm sorry that I couldn't make a sentences. Thank you for your class perseveringly. I repeat a sentence of the class. I had a great time. Thank you,Happy. Thank you for your lesson. I always enjoy talk with you. Thank you for your time.See you next lesson. Thank you,Teacher Happy. I always enjoy very much! I was so happy to see you again. I hope that you will be able to sleep well in the near future and your son stay healthy.Have a good day(^^♪ Thank you for teaching me.See you next lesson! You taught me very kindly. Thank you for nice lesson^^ I like your class. Thank you. Thank you very much! Thank you for teaching me.I learned many things in your class.See you lagain! Thanks for great lesson^^I enjoyed it!! Thank you for your great lesson.I will practice more. Thank you very much for your lesson! Thank you very much for your enjoyable lesson.I iove your lesson! Thank you for your time. Thank you for your lesson. I enjoyed. see you next time. Thank you for your wonderful lesson.I'm looking forward to the next lesson. Have a nice weekend! Thank you very much for your courteous teaching. I was very very "Happy" to talk with you after a long interval. I'm looking forward to talking with you soon. Please say hello to Cedie. By peto Thank you for your great lesson.See you again! Thank you teaching me. thank you very much for teaching. I am look forward to next time practice. Thank you teaching me.See you next lesson. Thank you for your great lesson.I'm sorry,I couldn't make own sentences. See you next time. Thank you so much! Thank you very much! You seemed to have additional materials just in case for earlier completion. Thank you for your time. Your lesson is always informative.See you again! Thank you so much! Thank you very much! Thank you for teaching me. Thank you always, Happy. I like your lessons very much. Hope to see you again soon. Thank you so much! My daughter had a fun lesson time. We hope to see you soon. Thanks a lot Happy! I enjoyed myself at the lesson with you. Please enjoy the rest of the weekend. See you again. I've really enjoyed our talks again! I hope that my husband won't snore tonight. 
Thank you so much(^^♪ Thank you so uch for the lesson last night. As I sent the message after the lesson, I felt very sorry for spending tough time with me... Thanks to your kindness and amazing patience, I've thought I have to practice more. Thank you again! Great instructor!!! I was really satisfied with your lesson!! See you soon.
{"url":"https://www.key-eye.net/tch_info?id=438","timestamp":"2024-11-03T02:31:14Z","content_type":"text/html","content_length":"73353","record_id":"<urn:uuid:bebb07d6-0d95-4a87-bc0e-dd917b891e18>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00090.warc.gz"}
Validating Parentheses in Python: A Step-by-Step Guide

Properly matching and validating parentheses is a common challenge faced by programmers. Parentheses are used in many contexts across Python, from mathematical expressions and nested function calls to indicating tuples and dictionaries. Ensuring parentheses are balanced and valid is an important coding skill that comes up frequently in technical interviews. In this comprehensive guide, we will examine step-by-step techniques for evaluating whether parentheses within a given string are valid using Python.

Parentheses are a form of bracket that enclose or bound text. The opening parenthesis '(' and closing parenthesis ')' must come in a correctly nested pair for them to be considered valid. For example, '()' and '({[]})' are valid, while '(' is invalid as it lacks a closing parenthesis. Properly validating parentheses helps avoid bugs and errors down the line in your code.

Being able to check if a string of parentheses is valid is a common technical interview question and programming exercise. Mastering parentheses validation demonstrates core programming competencies like strings, stacks, iterators, and algorithms.

In Python, there are a few approaches to validate parentheses:

1. Using a Stack Data Structure
2. Counting Opening and Closing Parentheses
3. Regular Expressions

We will explore Python code examples for each method, discussing the key concepts and techniques step-by-step. Follow along to improve your parentheses matching skills!

To understand this guide, you should have:

• Basic Python programming knowledge
• Familiarity with strings, lists, stacks, and other built-in data structures
• Understanding of iterators and generators
• Ability to write functions, loops, and conditional logic

The examples are written for Python 3. Let's get started!

Approach 1: Using a Stack

The stack data structure can be used to validate parentheses by pushing each opening parenthesis onto the stack, then popping for each closing parenthesis encountered. If the parentheses are valid, the stack will end up empty.

Here is an example function implementing the stack approach:

def is_valid_parentheses(text):
    stack = []
    for char in text:
        if char == '(':
            stack.append(char)
        elif char == ')':
            if not stack:
                return False
            stack.pop()
    return not stack

Breaking this down step-by-step:

1. Declare an empty stack to hold opening parentheses. Python lists can implement a stack using append() and pop().
2. Iterate through each character in the input text string.
3. If the character is an opening parenthesis '(', push it onto the stack. This represents seeing an opening symbol.
4. If the character is a closing parenthesis ')', pop the stack if it's not empty. If it is empty, that means we've seen a closing symbol without any prior unmatched opening symbols, indicating invalid parentheses.
5. After fully iterating the string, if the stack is not empty, some parentheses were not closed. Return False.
6. If the stack is empty after checking all parentheses, they must be valid. Return True.

The stack data structure elegantly handles properly nested matching. Each opening symbol pushes to the stack, and closing symbols pop for every corresponding opener, eventually emptying the stack if the parentheses are valid.

Approach 2: Counting Parentheses

An alternative method is counting total opening and closing parentheses, and comparing the totals to determine if they are balanced.
Here is example code using the counting approach:

def is_valid_parentheses(text):
    open_pars = 0
    close_pars = 0
    for char in text:
        if char == '(':
            open_pars += 1
        if char == ')':
            close_pars += 1
    return open_pars == close_pars

The steps are:

1. Initialize counters for opening and closing parentheses, starting at 0.
2. Iterate through the string, incrementing the opening counter on '(' and the closing counter on ')'.
3. After fully iterating, if the open and close totals match, the parentheses are treated as valid.
4. Return True if the totals match, False if they don't.

This turns the problem into simple arithmetic. We expect the opening and closing totals to be equal if the parentheses are well-formed. Counting is cheap for large inputs: each character costs only an O(1) counter increment and no auxiliary stack memory is needed, whereas the stack approach performs a push or pop per parenthesis and can use O(N) extra space.

Comparing Stack and Counting Approaches

The stack and counting solutions have tradeoffs to consider:

• Stack handles matching and nesting, but requires more operations and space.
• Counting is simpler and faster, but does not track matching or nesting.

Counting will accept some invalid cases that stacks handle correctly:

text1 = ')('
text2 = '())('

is_valid_parentheses(text1) # Stack=False, Counting=True
is_valid_parentheses(text2) # Stack=False, Counting=True

If nesting validity is required, the stack approach is preferred. Counting is ideal when a simple balance check is sufficient.

Approach 3: Regular Expressions

Regular expressions can validate parentheses through pattern matching rather than iterating the string directly.

Here is an example regex solution. Note that the recursive pattern (?R) is not supported by Python's built-in re module, so this relies on the third-party regex package:

import regex

def is_valid_parentheses(text):
    pattern = r"\((?:[^\(\)]*|(?R))*\)"
    if regex.fullmatch(pattern, text):
        return True
    return False

Breaking this regular expression down:

• \( matches an opening parenthesis.
• (?: opens a non-capturing group.
• [^\(\)]* matches any characters that are not parentheses.
• | means "or".
• (?R) recursively matches the entire pattern again to support nested parentheses.
• )* allows repeating the group 0 or more times.
• \) matches a closing parenthesis.

The regex matches valid parentheses combinations through recursion and negation. fullmatch() succeeds only if the entire string matches the regex, validating the parentheses.

Regular expressions are a powerful and flexible way to validate structured text like parentheses. The regex solution handles nested parentheses and accounts for characters between them.

Benchmarking Performance

To compare performance, let's benchmark each solution on longer input strings:

import time

# Stack solution (same implementation as shown above)
def is_valid_parentheses_stack(text):
    ...

# Counting solution (same implementation as shown above)
def is_valid_parentheses_count(text):
    ...

# Regex solution (same implementation as shown above)
def is_valid_parentheses_regex(text):
    ...

input_text = ")(" * 1000

t1 = time.time()
is_valid_parentheses_stack(input_text)
t2 = time.time()
print("Stack:", t2 - t1)

t1 = time.time()
is_valid_parentheses_count(input_text)
t2 = time.time()
print("Count:", t2 - t1)

t1 = time.time()
is_valid_parentheses_regex(input_text)
t2 = time.time()
print("Regex:", t2 - t1)

Stack: 0.0012021
Count: 0.000144
Regex: 0.0023651

Counting is the fastest, with regex being slowest due to the overhead of pattern compilation and recursive matching. Stack provides a balance of correctness and performance for general use. Counting is ideal for cases where only well-formedness is needed and nesting validity is not required.

Validating parentheses is an essential programming skill and common interview topic. This guide examined several Python techniques using stacks, counters, and regular expressions. Key takeaways:

• Stacks naturally model opening/closing symbol matching.
• Counting scales well though does not handle nesting. • Regular expressions match valid parentheses through patterns. • Counting is fastest while regex is slowest. Practice parentheses validation across diverse test cases. Pay attention to edge cases and nesting behavior. Mastering parentheses validation will boost your confidence in technical interviews and programming in Python. Look for opportunities to practice these algorithms and data structures.
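To make the edge-case advice concrete, here is a small, self-contained sketch of how one might exercise a validator against tricky inputs. It reuses the stack-based function from this guide (repeated here so the snippet runs on its own); the particular test strings are illustrative choices of mine, not an exhaustive suite.

```python
# Stack-based validator from the guide, repeated so this snippet is self-contained.
def is_valid_parentheses(text):
    stack = []
    for char in text:
        if char == '(':
            stack.append(char)
        elif char == ')':
            if not stack:
                return False
            stack.pop()
    return not stack

# Illustrative edge cases: empty input, simple pairs, deep nesting,
# unmatched openers/closers, and wrong ordering.
cases = {
    "": True,               # nothing to match is conventionally treated as valid
    "()": True,
    "(())()": True,
    "(a + b) * (c)": True,  # non-parenthesis characters are ignored
    "(": False,             # unmatched opener
    ")": False,             # unmatched closer
    ")(": False,            # balanced counts but wrong order
    "(()": False,
}

for text, expected in cases.items():
    result = is_valid_parentheses(text)
    status = "ok" if result == expected else "MISMATCH"
    print(f"{text!r:>16} -> {result} (expected {expected}) {status}")
```

Running through cases like these also makes the earlier comparison tangible: a counting-only check would misclassify ')(' as valid, while the stack version rejects it.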
{"url":"https://llego.dev/posts/validating-parentheses-python-step-by-step-guide/","timestamp":"2024-11-15T03:11:25Z","content_type":"text/html","content_length":"58528","record_id":"<urn:uuid:a4e3b731-490b-4b9a-8c6c-89e98b1a0418>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00784.warc.gz"}
Oxidation, reduction and semi-classical limit for quantum matrix geometries

Matrix configurations define noncommutative spaces endowed with extra structure including a generalized Laplace operator, and hence a metric structure. Made dynamical via matrix models, they describe rich physical systems including noncommutative gauge theory and emergent gravity. Refining the construction in [25], we construct a semi-classical limit through an immersed submanifold of complex projective space based on quasi-coherent states. We observe the phenomenon of oxidation, where the resulting semi-classical space acquires spurious extra dimensions. We propose to remove this artifact by passing to a leaf of a carefully chosen foliation, which allows us to extract the geometrical content of the noncommutative spaces. This is demonstrated numerically via multiple examples.

Austrian Fields of Science 2012
• 103012 High energy physics
• 103028 Theory of relativity
• 103019 Mathematical physics
• Fuzzy branes
• Matrix models
• Oxidation and reduction
• Quantization
• Quantum geometry
• 15/04/19 → 14/04/23 Project: Research funding
{"url":"https://ucrisportal.univie.ac.at/en/publications/oxidation-reduction-and-semi-classical-limit-for-quantum-matrix-g","timestamp":"2024-11-05T22:00:36Z","content_type":"text/html","content_length":"51393","record_id":"<urn:uuid:a09dbbe1-634f-4e47-abd7-c7209c328e6e>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00004.warc.gz"}
Observational features of reflection asymmetric black holes The Kerr spacetime is symmetric with respect to a well-defined equatorial plane. When testing the equatorial reflection symmetry of an isolated black hole, one is at the same time testing the Kerr hypothesis in General Relativity. In this work, we investigate the possible observational features when a Keplerian disk is surrounding a rotating black hole without reflection symmetry. When such symmetry is broken, generically, the photon trajectories around the black hole and the Keplerian orbits on the accretion disk are distorted vertically away from the equatorial plane by an amount that depends on their distance to the black hole. In the reflection asymmetric spacetime we are considering, these two kinds of orbits are distorted in opposite directions. Interestingly, while the size and shape of black hole shadows closely resemble those of Kerr black holes, distinct observational characteristics can emerge in the disk image and emission line profiles. When observing the disk edge-on, a pronounced concave shape may appear along its innermost edge on the incoming side. Furthermore, distinctive horn-like features might be observed on the spectral line profile at the blue-shifted side. These special features can serve as compelling indicators of the reflection asymmetry present in rotating black holes. Journal of Cosmology and Astroparticle Physics Pub Date: September 2024 □ gravity; □ modified gravity; □ astrophysical fluid dynamics; □ General Relativity and Quantum Cosmology; □ Astrophysics - High Energy Astrophysical Phenomena; □ High Energy Physics - Theory 22 pages, 8 figures. Matching published version
{"url":"https://ui.adsabs.harvard.edu/abs/2024JCAP...09..043C","timestamp":"2024-11-13T20:03:23Z","content_type":"text/html","content_length":"40602","record_id":"<urn:uuid:220097d9-9abf-4553-a396-7c1966d945fe>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00259.warc.gz"}
PPT - Plant Productivity PowerPoint Presentation, free download - ID:43713 1. Plant Productivity Crystal, Barney, Nate, Rachael, Cameron, and Puja Atlantic Forest, Brazil SEE-U 2000 2. Introduction • Plants allocate their energy and resources in a manner that is conducive for efficient growth • Different species therefore may put more energy into the formation of roots or in the formation of shoots • By determining the root/shoot ratio we can study these growth patterns 3. Hypotheses • Null Hypothesis: There will be no difference among species in root/shoot ratio • Alternative Hypothesis #1: Native species (Acacia) will show a greater root/shoot ratio • Alternative Hypothesis #2: Non - native species (Eucalyptus) will show a greater root/shoot ratio 4. Methodology • Three species were studied: Eucalyptus camal, Eucalyptus citrio, and Acacia • 16 individuals of each species were randomly selected from the IPE Nursery • Soil was separated from the roots • Root length was measured from the first root to the root apical meristem • Shoot length was measured from the first root to the apical meristem 6. Results • Root/shoot ratios are as follows: • E. camal :3.8/1 • E. citrio: 3.3/1 • Acacia sp.:2.7/1 • The Null Hypothesis was accepted. 10. Discussion/Conclusion • A statistical analysis showed that there was no significant difference between species with respect to root/shoot ratio • Within species there was a wide range of root/ shoot ratio affecting the statistical analysis 11. Discussion/Conclusion (2) • This can be attributed to small sample size, cold weather (frost), and age of seedlings • There may be a greater variation of root/shoot ratios among the three species at a later stage of development
{"url":"https://fr.slideserve.com/betty_james/plant-productivity-powerpoint-ppt-presentation","timestamp":"2024-11-11T10:32:50Z","content_type":"text/html","content_length":"87387","record_id":"<urn:uuid:df1326e6-73a4-48f1-a5ca-fcb350b1324a>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00852.warc.gz"}
G. H. Hardy On December 1, 1947, English mathematician G. H. Hardy passed away. Hardy is known for his achievements in number theory and mathematical analysis, but also for his 1940 essay on the aesthetics of mathematics, A Mathematician’s Apology, and for mentoring the brilliant Indian mathematician Srinivasa Ramanujan. “A mathematician … has no material to work with but ideas, and so his patterns are likely to last longer, since ideas wear less with time… Read more Norbert Wiener and the Science of Cybernetics On November 26, 1894, American mathematician Norbert Wiener was born. Wiener established the science of cybernetics, a term he coined, which is concerned with the common factors of control and communication in living organisms, automatic machines, and organizations. He attained international renown by formulating some of the most important contributions to mathematics in the 20th century. “Scientific discovery consists in the interpretation for our own convenience of a system of existence which has… Read more Wilhelm Weinberg and the Genetic Equilibrium On January 13, 1908, German physician and obstetrician-gynecologist Wilhelm Weinberg delivered an exposition of his ideas on the principle of genetic equilibrium in a lecture before the Verein für vaterländische Naturkunde in Württemberg. He developed the idea of genetic equilibrium independently of British mathematician G. H. Hardy.[4] Wilhelm Weinberg – Early Years Wilhelm Weinberg was born in Stuttgart, Kingdom of Württemberg (today Germany). His father Julius Weinberg, a merchant, had Jewish roots, but he… Read more What’s your Erdös Number? – The bustling Life of Mathematician Paul Erdös On September 20, 1996, Hungarian mathematician Paul Erdös passed away. He published more scientific papers than any other mathematician in history, with hundreds of collaborators. Thus, he even created a ‘small world’ of its own, the famous club of people that posess an ‘Erdös Number‘. BTW, my Erdös number is 3, i.e. I have published a paper together with a co-author whose Erdös number is 2. In this little game of numbers,… Read more Although I Cannot Prove it… – The Famous Goldbach Conjecture On the 7th of June 1742, Prussian mathematician Christian Goldbach wrote a letter to his famous colleague Leonard Euler, which should make history. Well, at least in the mathematical world. In this letter Christian Goldbach refined an already previously stated conjecture from number theory concerning primes to his friend Euler, which by today is known as the famous Goldbach conjecture. It states: Every even integer greater than 2 can be expressed as the… Read more The Short Life of Srinivasa Ramanujan On December 22, 1887, Indian mathematician and autodidact Srinivasa Ramanujan was born. Though he had almost no formal training in pure mathematics, he made major contributions to mathematical analysis, number theory, infinite series, and continued fractions. Supported by English mathematician G. H. Hardy from Cambridge, Ramanujan independently compiled nearly 3,900 results during his short life, which all have been proven correct. “Sir, an equation has no meaning for me unless it expresses… Read more Sewall Wright and the Importance of Population Genetics On December 21, 1889, American geneticist Sewall Green Wright was born. Wright is known for his influential work on evolutionary theory and also for his work on path analysis. He was a founder of population genetics alongside Ronald Fisher and J.B.S. 
Haldane,[4] which was a major step in the development of the modern evolutionary synthesis combining genetics with evolution. Early Years and Academic Career Sewall Wright‘s father Philip Green Wright was a… Read more
{"url":"http://scihi.org/tag/g-h-hardy/","timestamp":"2024-11-07T19:38:49Z","content_type":"text/html","content_length":"589108","record_id":"<urn:uuid:4997be95-74fb-4dd5-b949-45404cd0829c>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00525.warc.gz"}
On the Convergence of Inexact Predictor-Corrector Methods for Linear Programming Interior point methods (IPMs) are a common approach for solving linear programs (LPs) with strong theoretical guarantees and solid empirical performance. The time complexity of these methods is dominated by the cost of solving a linear system of equations at each iteration. In common applications of linear programming, particularly in machine learning and scientific computing, the size of this linear system can become prohibitively large, requiring the use of iterative solvers, which provide an approximate solution to the linear system. However, approximately solving the linear system at each iteration of an IPM invalidates the theoretical guarantees of common IPM analyses. To remedy this, we theoretically and empirically analyze (slightly modified) predictor-corrector IPMs when using approximate linear solvers: our approach guarantees that, when certain conditions are satisfied, the number of IPM iterations does not increase and that the final solution remains feasible. We also provide practical instantiations of approximate linear solvers that satisfy these conditions for special classes of constraint matrices using randomized linear algebra. All Science Journal Classification (ASJC) codes • Artificial Intelligence • Software • Control and Systems Engineering • Statistics and Probability Dive into the research topics of 'On the Convergence of Inexact Predictor-Corrector Methods for Linear Programming'. Together they form a unique fingerprint.
{"url":"https://cris.iucc.ac.il/en/publications/on-the-convergence-of-inexact-predictor-corrector-methods-for-lin","timestamp":"2024-11-03T04:04:08Z","content_type":"text/html","content_length":"48999","record_id":"<urn:uuid:2593d041-2128-42be-bcec-03e8c31788fb>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00107.warc.gz"}
Explainability in Neural Networks: Path Methods for Attribution Explainability in Neural Networks, Part 4: Path Methods for Feature Attribution This post will delve deeper into Path Integrated Gradient Methods for Feature Attribution in Neural Networks. Training neural networks involves gradients and differentiation, and it turns out that one good way of explaining a neural network's behavior (in terms of its inputs) is by integrating gradients In the last post we formalized several desirable properties of feature-attribution methods as Axioms of Attribution, and the post concluded by saying Path Integrated Gradient Methods (or Path Methods in short) satisfy many of these axioms. This post will delve deeper into these methods. We continue to use the formulation from the ICML 2017 paper by Sundararajan et. al.^[1] The detailed derivation is our own, and assumes the reader has some familiarity with differential and integral calculus. Attribution along extremal paths is easy To understand the math behind Path Methods, it helps to recall our 2-dimensional example where the function computed by the neural network is $F(x) = \min(1, x_1 + x_2)$ (we will use this example in the rest of this post). Suppose we want to find the feature-attributions for the function value at point $P_3 = (1.2, 0.6)$. In the contour plot below we see two specific paths from the origin to $P_3$. Notice that these paths sequentially change one variable from 0 (the baseline value) to its final value at $P_3$: the red path changes $x_1$ first, then $x_2$. The blue path changes $x_2$ first, then $x_1$. There are of course an infinite number of possible paths from the origin to $P_3$, and paths such as these that sequentially change one variable at a time are called extremal paths (we informally called them "axis-parallel" paths in the last post). If the input $x$ is $d$-dimensional, there are $d!$ possible orderings of the input dimensions, and there would be $d!$ different extremal paths. 2D Contour Plot of $F(x) = \min(1, x_1 + x_2)$. Two possible paths (red and blue) are shown from the origin to $P_3$, and each segment is annotated by the change in $F(x)$ when moving toward $P_3$. What is nice about extremal paths is that we can easily compute the contribution of each variable along a specific path. In the above example, when moving from the origin toward $P_3$ along the blue path, on the first segment only $x_2$ changes (from 0 to its final value 0.6), and causes $F$ to increase by $0.6$, and on the second segment only $x_1$ changes, and causes $F$ to increase by $0.4$, which results in attributions $(0.4, 0.6)$. The reason it is simple to compute the contributions of each feature on an extremal path is that on each straight-line segment, only one feature changes. So this idea extends naturally to compute the feature-contributions along any step-like portion of a path, and we will see below how this is useful. Parametric representation of general paths Before moving on to discuss attribution along arbitrary paths, we should first consider how a general path can be represented. It is convenient to use a parametric representation of a path from the origin to $x$. Imagine we are moving along this path from the origin to $x$, and we introduce a parameter $\alpha$, which we will call the path position parameter, that represents the fraction of the path covered so far: this $\alpha$ uniquely identifies a point along the path. 
Then the path can be represented by a function $g$ that maps $\alpha$ to the $d$-dimensional coordinates of the unique point on the path that corresponds to $\alpha$. Thus $g(\alpha)$ represents the $d$-dimensional coordinates of the point on the path at position $\alpha$. In particular $g(0)$ is the origin (all zero coordinates), and $g(1) = x$. We write $g_i(\alpha)$ to denote the $i$'th coordinate of $g(\alpha)$. We will only consider smooth functions $g$, i.e. those that are differentiable.

Attribution on general paths by step-approximation

So how can we do attribution along an arbitrary path $\Pi$ from the origin to $P_3$? Here's a simple idea: since we already know how to do attribution along any "step-like" path, we approximate the path $\Pi$ with a sequence of infinitesimally small steps, as shown in the figure below.

An example of a general path from the origin to $P_3$, shown in black. The grey path is a step-approximation of this path.

Let's zoom in and look at a specific step $ABC$ in this step-approximation, as shown in the figure below:

A zoomed-in view of a single step $ABC$ in the step-approximation of the path in the above figure. Point $A$ corresponds to position-parameter $\alpha$ along the path, and has coordinates $g(\alpha)$, and $C$ corresponds to $\alpha + d\alpha$. The contribution of dimension 1 to the function value change along segment $AB$ is shown.

Path Integrated Gradients

In the above figure, suppose the point $A$ on the path corresponds to path-position $\alpha$ in the parametric path-representation $g$, and $C$ corresponds to $\alpha + d\alpha$ (remember these are infinitesimally-small steps). So the coordinates at $A$ are $g(\alpha)$ and the corresponding function value is $F(g(\alpha))$. As we move along the path from $A$ to $C$, i.e., as $\alpha$ increases by an infinitesimal amount $d\alpha$, the value of the function $F$ changes by a certain amount, and we want to calculate how much the features $x_1$ and $x_2$ contribute to this change.

First let's compute the change in $x_1$ when $\alpha$ increases by $d\alpha$. This is given by

$$dg_1(\alpha) = \frac{\partial g_1(\alpha)}{\partial \alpha} d\alpha,$$

and the portion of the function change attributable to this $x_1$ change is $dg_1(\alpha)$ times the gradient of $F$ with respect to dimension 1 at path-position $\alpha$, which is denoted by $\partial_1 F(g(\alpha))$ (we are using the shorthand $\partial_1$ for the partial derivative with respect to dimension 1). Thus the contribution of $x_1$ along this infinitesimal step $d\alpha$ is

$$\partial_1 F(g(\alpha))\, dg_1(\alpha) \;=\; \partial_1 F(g(\alpha)) \frac{\partial g_1(\alpha)}{\partial \alpha} d\alpha.$$

Summing these contributions over all path-positions $\alpha$ yields an integral for the total contribution of feature $x_1$ along the path $\Pi$, and generalizing to any dimension $i$, we can write this definition:

Path Integrated Gradient attribution of function $F$ at $x$, along path $\Pi$, for dimension $i$:

$$A^{F,\Pi}_i(x) = \int_0^1 \partial_i F(g(\alpha)) \frac{\partial g_i(\alpha)}{\partial \alpha} d\alpha.$$

The reason for the name "Integrated Gradient" should be clear -- we are integrating an expression involving the gradient of the final output of the neural network with respect to the input features.
It's important to realize that these gradients are not the same as the gradients used when training the network: the latter are gradients of the loss (which is, roughly, a measure of the "error" between the network output and the desired output) with respect to the network parameters. As a special case of the above definition, note that the straight line path from the origin to $x$ is given by the parametric representation $g(\alpha) = \alpha x$: as $\alpha$ changes from 0 to 1, we are uniformly scaling all coordinates from the 0-vector until they reach $x$. For this case, note that $g_i(\alpha) = \alpha x_i$ and $\partial g_i(\alpha)/\partial \alpha = x_i$, which implies the following simplified attribution expression, referred to by Sundararajan et. al.^[1:1] as the Integrated Gradient: Integrated Gradient (IG) Attribution (with baseline = all-zeros vector) A^{F}_i(x) = x_i \int_0^1 \partial_i F(\alpha x) d\alpha Let's apply the IG method to compute the feature attributions of the function value at $P_3=(1.2, 0.6)$, using the straight line from the origin to $P_3$ in our example where $F(x) = \min(1, x_1 + x_2)$, as shown in the figure below: 2D contour map of the function $F(x) = \min(1, x_1 + x_2)$. Feature attribution for $F$ at point $P_3$ is computed using the IG method along the straight line path from the origin to $P_3$ (where $F= 1$). This path intersects the grey line $x_1 + x_2=1$ at the point $A = (\frac{2}{3}, \frac{1}{3})$. At any point on the path from the origin until $A = (2/3, 1/3)$, the sum of the coordinates is at most 1, so the value of $F$ is essentially $x_1 + x_2$, and therefore the gradients $\partial_i F(\ alpha x)$ are 1 for both dimensions $i = 1,2$. Between point $A$ and $P_3$, the coordinates add up to at least 1 so $F$ is saturated at 1, and both gradients vanish. The value of $\alpha$ corresponding to point $A$ is $1/(3 \times 0.6) = 1/1.8$. Thus from the above expression, the IG-based contribution of dimension 1 to the function value at $P_3$ is x_1 \int_0^1 \partial_1 F(\alpha x) d\alpha \;=\; 1.2 \int_0^{\frac{1}{1.8}} 1 \, d\alpha \;=\; \frac{1.2}{1.8} \;=\; \frac{2}{3}, and similarly the attribution to dimension 2 is $0.6/1.8 = 1/3$. Notice how this attribution $(2/3, 1/3)$ is more reasonable than both the attributions we saw earlier using the two extremal paths (see the first figure above): • the red path gives $(1,0)$; dimension 2 receiving zero attribution is non-sensical, and • the blue path gives $(0.4, 0.6)$; dimension 2 receiving a larger attribution does not seem "fair" given that $x_1=1.2$ is twice as large as $x_2 = 0.6$. As mentioned in the previous post, the IG method is the only one among path methods that satisfies all the axioms stated in that post. We should also point out that the integrals in the definitions above are instances of path integrals which have a rich history in Mathematics^[2] and Physics^[3], and in particular Richard Feynman introduced a version of path integrals in his formulation of Quantum Mechanics^[4]. In a future post we will look at other properties of Path Methods and their variants, how the IG attribution can be computed in general, and some simple models where the IG has a closed form analytic
{"url":"https://www.altacognita.com/path-methods/","timestamp":"2024-11-08T15:08:38Z","content_type":"text/html","content_length":"38239","record_id":"<urn:uuid:79701a6c-70ea-45b9-9249-e6463a39df82>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00345.warc.gz"}
Homework 0 COMS 4771 solved

Problem 1 (Values). Name one or two of your own personal, academic, or career values, and explain how you hope machine learning can be of service to those values.

Problem 2 (Stuff you must know). The course website http://www.cs.columbia.edu/~djhsu/coms4771-f16/ has information about the course prerequisites, course requirements, academic rules of conduct, and other information. You are required to understand this information and abide by the rules of conduct, regardless of whether or not you can solve the following problems.

(a) True or false: I may share my homework write-up or code with another student as long as (1) the write-up only contains solutions for at most half of the problems, (2) the code is at most five lines, and (3) we list each other as discussion partners on the submitted write-up.

(b) True or false: I may use any outside reference material to help me solve the homework problems as long as I appropriately acknowledge these materials in the submitted write-up.

Problem 3 (More stuff you should know). We'll use the notation f : X → Y to declare a function f whose domain is the set X, and whose range is the set Y. For example, f : R → R declares a real-valued function over the real line. For a positive integer d, the d-dimensional vector space called Euclidean space is denoted by R^d. For positive integers m and n, the space of m×n matrices over the real field R is denoted by R^(m×n). Every matrix in R^(m×n) can be regarded as a linear map from R^n to R^m.

Let A, B ∈ R^(2×2) be given by A := 2 4# , B := 0 4# (":=" is the notation used for "equals by definition".) Also let u := (2, 1) and v := (1, 2), which are vectors in R^2. Note that when we refer to vectors from Euclidean spaces in the context of matrix-vector products, we always regard vectors (like u) as column vectors, and their transposes (like u^T) as row vectors: u = [2; 1], u^T = [2 1].

(a) What is the rank of A?
(b) What is Au + Bv?
(c) What is u
(d) The (Euclidean) norm (or length) of a vector x = (x1, x2, . . . , xd) ∈ R^d is denoted by ||x||_2, and is equal to sqrt(x1^2 + x2^2 + · · · + xd^2). What is ||u||_2?
(e) Let f : R^2 → R be the function defined by f(x) := x^T (A + B) x. The gradient of a real-valued function g : R^d → R at a point z ∈ R^d, denoted by ∇g(z), is the vector λ = (λ1, λ2, . . . , λd) where λi = ∂g(z)/∂zi for all i = 1, 2, . . . , d. What is ∇f(v)?
(f) The unit circle in R^2 is the set of vectors in R^2 with unit length, i.e., {x ∈ R^2 : ||x||_2 = 1}. Which vector in the unit circle minimizes f (defined above), and what is the value of f evaluated at this vector? (Hint: think about eigenvectors.)

Problem 4 (Random stuff you should know). A (discrete) probability space is a pair (Ω, P), where Ω is a (discrete) set called the sample space, and P : Ω → R is a real-valued function on Ω called the probability distribution, which must satisfy P(ω) ≥ 0 for all ω ∈ Ω, and ∑_{ω∈Ω} P(ω) = 1. An event A is a subset of Ω, and the probability of A, denoted by P(A) (somewhat abusing notation), is equal to ∑_{ω∈A} P(ω).

(a) A fair coin is tossed three times. Consider the following events:
• A: the outcome of the first toss is heads.
• B: the outcome of the second toss is tails.
• C: the outcomes of all three tosses are the same.
• D: exactly one of the outcomes is heads.
Which of the following pairs of events are independent?
• A and B.
• A and C.
• A and D.
• C and D.

(b) A student applies to two schools: Trump University and Columbia University.
The student has a probability of 0.5 of being accepted to Trump, and a probability of 0.3 of being accepted to Columbia. The probability of being accepted by both is 0.2. What is the probability that the student is accepted to Columbia, given that the student is accepted at Trump?

A random variable (r.v.) on (Ω, P) is a real-valued function X : Ω → R. The notation X ∼ P declares the r.v. X and associates it with the probability distribution P. (We'll often leave the probability space implicit.) The expected value (a.k.a. expectation or mean) of X, written E(X), is the average value of X under the distribution P: E(X) := ∑_{ω∈Ω} X(ω) · P(ω). An equivalent definition of E(X) is E(X) := ∑_x x · P(X = x), where the summation is taken over all x in the range of X, and P(X = x) is shorthand for P({ω ∈ Ω : X(ω) = x}).

(c) Consider the sample space Ω = {1, 2, . . . , 6} × {1, 2, . . . , 6}, and let P be the uniform distribution over Ω, i.e., P(a, b) = 1/36 for each (a, b) ∈ Ω. Let X be the random variable defined by X(a, b) = min{a, b} for each (a, b) ∈ Ω. For each x ∈ {1, 2, . . . , 6}, what is P(X = x)?
(d) Continuing from (c), what is the expected value of X?
(e) A biased coin with P(heads) = 1/5 is tossed repeatedly until heads comes up. What is the expected number of tosses?
(f) You create a random sentence of length n by repeatedly picking words at random from the vocabulary {a, is, not, rose}, with each word being equally likely to be picked. What is the expected number of times that the phrase "a rose is a rose" will appear in the sentence?

Problem 5 (More random stuff you should know). We often encounter probability spaces (Ω, P) where Ω is not a discrete set. In this class, the only random variables we'll consider on such spaces will either have a discrete image (i.e., {X(ω) : ω ∈ Ω} is a discrete set) or have a probability density function p : R → R, which is a non-negative real-valued function on R such that, for any open interval (a, b) = {x ∈ R : a < x < b} ⊆ R,

P(X ∈ (a, b)) = P({ω ∈ Ω : X(ω) ∈ (a, b)}) = ∫_(a,b) p(x) dx.

Random variables with probability density functions will be called continuous random variables.

(a) Let X be a continuous random variable with probability density function p given by p(x) := 0 if x < 0, and p(x) := λe^(−λx) if x ≥ 0. Here, λ is a positive number (typically called the rate parameter). If P(X ≤ 1000000) = 0.5, then what is the value of λ?

(b) Let X be a standard normal random variable, i.e., a continuous random variable whose density is the standard normal density p(x) := e^(−x²/2) / √(2π) for all x ∈ R. Define the random variable Y on the same probability space as X by Y := X², i.e., Y(ω) := X(ω)² for all ω ∈ Ω. What are E(X) and E(Y)?

A collection of continuous random variables X1, X2, . . . , Xd, all defined on the same probability space, has a (joint) probability density function p : R^d → R if, for any A ⊆ R^d,

P((X1, X2, . . . , Xd) ∈ A) = ∫_A p(x1, x2, . . . , xd) dx1 dx2 · · · dxd.

We'll often collect several random variables, such as X1, X2, . . . , Xd, into a random vector X = (X1, X2, . . . , Xd). So the equation above can be written as P(X ∈ A) = ∫_A p(x) dx.

(c) Suppose the pair of random variables (X1, X2) has probability density function p given by p(x1, x2) := c if 0 ≤ x1 ≤ 0.5 and 0 ≤ x2 ≤ 1, and p(x1, x2) := 0 otherwise. Here, c is a constant (that does not depend on x1 or x2). What should be the value of c so that p is a valid probability density function?

(d) Continuing from (c), what is the probability that X2 ≥ X1?
(e) Continuing from (c), define another random variable Y on the same probability space as X1 and X2 by Y := 1 if X1 > 2X2, and Y := −1 otherwise. Are X1 and Y independent? What is the expected value of Y?

(f) Continuing from (c), define yet another random variable Z on the same probability space as X1 and X2 by Z := 1 if X2 > 1/2, and Z := −1 otherwise. Are X1 and Z independent? What is the expected value of X1·Z?

Problem 6 (Google Cloud; optional but recommended). Set up a virtual machine on Google Cloud. Figure out how to install some useful Python packages like numpy, scipy, scikit-learn, etc. Download the OCR image data set ocr.mat from Courseworks, and load it into memory:

from scipy.io import loadmat
ocr = loadmat('ocr.mat')

This file contains four different matrices called data, labels, testdata, and testlabels. For example, data represents a 60000×784 matrix, which you can verify using the following command:

ocr['data'].shape

Using the numpy and scipy libraries, write some code to compute the average squared Euclidean norm of the rows of data. The following functions may be useful:

• numpy.apply_along_axis
• numpy.linalg.norm
• numpy.mean

The result should be around 127.642. You don't need to submit anything for this problem.
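For readers who want to sanity-check their environment, here is one way the Problem 6 computation might look, using the functions hinted at above. This is only an illustrative sketch of my own: it assumes ocr.mat has been downloaded into the working directory as described, and it casts the data to a wider numeric type before squaring so that 8-bit pixel values cannot overflow.

```python
import numpy as np
from scipy.io import loadmat

ocr = loadmat('ocr.mat')                      # assumes the file is in the working directory
data = ocr['data'].astype(np.float64)         # cast so that squaring small integer types cannot overflow

# Squared Euclidean norm of each row, then the mean over all rows.
row_norms = np.apply_along_axis(np.linalg.norm, 1, data)   # one norm per row (axis=1)
avg_sq_norm = np.mean(row_norms ** 2)

# A vectorized equivalent, usually much faster than apply_along_axis:
avg_sq_norm_fast = np.mean(np.sum(data ** 2, axis=1))

print(avg_sq_norm, avg_sq_norm_fast)
```

If the data set matches the description in the problem, the two printed values should agree with each other and with the figure quoted above.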
{"url":"https://codeshive.com/questions-and-answers/homework-0-coms-4771-solved/","timestamp":"2024-11-08T18:08:19Z","content_type":"text/html","content_length":"113252","record_id":"<urn:uuid:77e5e03c-0b10-472b-94f3-4a512e2bda09>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00533.warc.gz"}
Another five minute challenge^1, this time from /r/dailyprogrammer: given any positive integer, create and render a factor tree.

The basic idea is straightforward enough. Each positive integer of note^2 is in one of two classes: either it is prime or it is composite. For the composite numbers, there are at least two numbers m and n such that neither m nor n is 1 and mn equals that number. For example, 6 is composite because 2 * 3 = 6, yet 5 is not, since the only numbers that divide it are 1 and itself. Since 5 is not composite, it only makes sense that it is prime.

But then, what if you have a bigger number, such as 24. You can break that into 4 * 6. But neither of those is prime, so you can further break it into (2 * 2) * (2 * 3). Finally, each of those is prime. All together, that makes up what is called a factor tree:

That's the challenge this week. Generate that tree.

Well, that's more than enough description. Let's get to it.

Basically, there's a quick (albeit not perfectly efficient) way to find factors: trial division. Basically, you loop through all of the numbers from 2 to the square root of the number (any larger and you'll find factors you've already found), trying to divide by each in turn. That though, generates this image rather than the previous:

Not quite as nice and balanced. Easily fixed though. Rather than looping from 2 up, loop from the square root down. You'll find the same factors, but you'll find the largest (and thus the most likely split) first.

; Return a tree of the factors of n
(define (factor-tree n)
  (or
   ; Try to find the first pair of factors
   ; Start from sqrt(n) and work down to get the largest factors first
   (for/first ([i (in-range (integer-sqrt n) 1 -1)]
               #:when (zero? (remainder n i)))
     ; Factor, create a tree with that node and its further factors
     (list n (factor-tree i) (factor-tree (quotient n i))))
   ; If for/first returns #f there are no other factors, n is prime
   n))

The comments should be straightforward enough to explain the rest of the structure. for/first will return the first factor that we've found (if any) or #f if not (which then falls through to the next case, returning n itself). That gives us this structure:

> (factor-tree 24)
'(24 (4 2 2) (6 2 3))

It's perhaps a bit odd to read, but look at the first of each triple. 24 has factors 4 and 6. 4 has factors 2 and 2, 6 has 2 and 3. A bit larger example (formatted to make it a bit easier to read):

> (factor-tree 1767150)
'(1767150 (1309 17 (77 7 11)) (1350 (30 5 (6 2 3)) (45 5 (9 3 3))))

Speaking of which, how am I getting those nice images? Well, to some extent, I'm cheating. I took the code that I'd written a while ago for the c211-lib/tree library, designed to render trees.
All I needed to do was rewrite the match to match against list instead of tree:

; Render a tree structure
; Tree : (U (List Integer Tree Tree) Integer)
(define (render-factor-tree tr)
  (match tr
    ; Recursive tree, unpack the value and render subtrees
    [(list factor left right)
     (define v (text (~a factor)))
     (define l (render-factor-tree left))
     (define r (render-factor-tree right))
     ; Pin-line connects the nodes, append sets the trees side by side
     ; cb/ct-find tells the pins how to connect to the nodes (center bottom/top)
     (pin-line
      (pin-line
       (vc-append 10 v (ht-append 10 l r))
       v cb-find
       l ct-find)
      v cb-find
      r ct-find)]
    ; Values are directly rendered
    [prime (text (~a prime))]))

The interesting parts are the functions text, which turns text into an image, pin-line, which draws lines between two images, and vc-append / ht-append, which combine images vertically centered or horizontally aligned to the top. All together, it lets us render all sorts of nice trees:

> (render-factor-tree (factor-tree 828441))

> (render-factor-tree (factor-tree 863029))

> (render-factor-tree (factor-tree 1048576))

And that's about it. Quick enough (even if the rendering probably took a bit more than five minutes when I first wrote it). As always, you can see the entire code for this (and most of my other small projects) on GitHub: factor-tree.rkt

1. I have a few longer posts in the works, I promise ↩︎
2. I don't quite recall how 1 is treated ↩︎
{"url":"https://blog.jverkamp.com/2014/06/17/factor-trees/","timestamp":"2024-11-07T14:06:02Z","content_type":"text/html","content_length":"21783","record_id":"<urn:uuid:6f6371f2-573e-4a5c-9967-3307d3279c84>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00052.warc.gz"}
Weighted Average Cost Calculator: Understanding Costs

When it comes to business and finance, understanding costs is crucial for making informed decisions. One method of calculating costs that stands out is the Weighted Average Cost (WAC). This approach provides a more nuanced view of costs by taking into account the varying values of items in inventory. In this blog post, we will delve into what the Weighted Average Cost is, how to calculate it, and its significance in inventory management and financial analysis. Let's explore the intricacies of WAC, step by step! 📊

What is Weighted Average Cost?

Weighted Average Cost refers to a valuation method used to determine the average cost of goods available for sale during a period. Unlike simple averages, WAC gives different weights to items based on their quantities and costs. This method is particularly useful for businesses that maintain inventory of similar items with different purchase prices.

Why is WAC Important?

Understanding the Weighted Average Cost is vital for several reasons:

• Accurate Inventory Valuation: WAC provides a more accurate reflection of inventory costs than the First-In, First-Out (FIFO) or Last-In, First-Out (LIFO) methods.
• Profit Margin Calculation: It aids businesses in calculating gross margins accurately, helping in pricing decisions and cost control. 💰
• Financial Reporting: WAC is widely accepted in financial reporting and tax calculations, which ensures compliance and accuracy in financial statements.

How to Calculate Weighted Average Cost

Calculating the Weighted Average Cost involves a straightforward formula. Here's a step-by-step guide to help you through the process.

Formula for Weighted Average Cost

The formula for WAC is:

[ \text{WAC} = \frac{\text{Total Cost of Goods Available for Sale}}{\text{Total Units Available for Sale}} ]

Step-by-Step Calculation

1. Determine the Total Cost of Goods Available for Sale: This includes the cost of all items in inventory.
2. Calculate the Total Units Available for Sale: This is simply the sum of all units in inventory.
3. Plug the values into the formula: Substitute the values from steps 1 and 2 into the WAC formula.

Example Calculation

Let's say you have the following inventory purchases:

Date | Quantity | Cost per Unit | Total Cost
Jan 5 | 100 | $10 | $1,000
Feb 10 | 150 | $12 | $1,800
Mar 15 | 200 | $14 | $2,800

Step 1: Calculate the Total Cost:
• $1,000 + $1,800 + $2,800 = $5,600

Step 2: Calculate the Total Units:
• 100 + 150 + 200 = 450 units

Step 3: Calculate WAC:

[ \text{WAC} = \frac{5600}{450} \approx 12.44 ]

So, the Weighted Average Cost per unit is approximately $12.44.

Important Note: The WAC method can change based on market conditions, as item costs fluctuate with time. Businesses should regularly update their calculations to maintain accuracy.

Applications of Weighted Average Cost

Inventory Management

WAC is crucial for managing inventory effectively. By understanding the average cost of goods, businesses can:

• Set competitive prices without undercutting profitability.
• Make informed purchasing decisions based on cost predictions.
• Plan for budget allocations and financial forecasting.

Financial Analysis

In addition to inventory management, the WAC method plays a significant role in:

• Cost Control: By knowing the average cost of items, companies can implement cost-saving measures effectively.
• Investment Decisions: Investors can analyze the cost-effectiveness of inventory management strategies by using WAC.

Advantages and Disadvantages of Weighted Average Cost

Advantages:
1. Simplicity: The calculation is straightforward and easy to implement. 📈
2. Stability: WAC smooths out price fluctuations over time, giving a consistent cost figure.
3. Good for Homogeneous Products: It is particularly useful for businesses with large inventories of similar items.

Disadvantages:

1. Less Reflective of Current Costs: In times of rapidly fluctuating prices, WAC might not reflect current market values accurately.
2. Not Suitable for All Inventory Types: Businesses with unique or non-homogeneous products might find other methods like FIFO or LIFO more applicable.

Understanding the Weighted Average Cost (WAC) is integral for any business involved in inventory management and financial planning. By calculating WAC correctly, companies can ensure accurate inventory valuation, make better financial decisions, and ultimately improve profitability. As we've explored, WAC provides a holistic view of costs, making it a preferred method for many businesses. Embracing the Weighted Average Cost approach can lead to more informed business decisions, ultimately driving success in today's competitive marketplace. Remember, consistent reevaluation of costs can aid in maintaining financial health, allowing businesses to thrive! 🌟
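As a quick cross-check of the arithmetic in the example above, here is a short sketch of my own (not part of the original article) that computes the weighted average cost from a list of purchases; the numbers are taken from the example table and reproduce the $12.44 result.

```python
# Each purchase is (quantity, cost_per_unit), matching the example table above.
purchases = [
    (100, 10.00),   # Jan 5
    (150, 12.00),   # Feb 10
    (200, 14.00),   # Mar 15
]

total_cost = sum(qty * unit_cost for qty, unit_cost in purchases)   # 5,600
total_units = sum(qty for qty, _ in purchases)                      # 450

wac = total_cost / total_units
print(f"Total cost: ${total_cost:,.2f}, total units: {total_units}")
print(f"Weighted average cost per unit: ${wac:.2f}")                # about $12.44
```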
{"url":"https://tek-lin-pop.tekniq.com/projects/weighted-average-cost-calculator-understanding-costs","timestamp":"2024-11-08T09:12:59Z","content_type":"text/html","content_length":"85862","record_id":"<urn:uuid:d4209de2-79dc-4cc6-956f-5a58ed7818a3>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00709.warc.gz"}
Council Tax in AB24 5JE The council tax and business rates billing authority for AB24 5JE is Aberdeen City Council. Council Tax in AB24 5JE Address Band Amount ^* First Floor Whole, 47 Summerfield Terrace, Aberdeen, AB24 5JE C £1,094 35b Summerfield Terrace, Aberdeen, AB24 5JE A £820 35c Summerfield Terrace, Aberdeen, AB24 5JE A £820 35d Summerfield Terrace, Aberdeen, AB24 5JE A £820 35e Summerfield Terrace, Aberdeen, AB24 5JE A £820 35f Summerfield Terrace, Aberdeen, AB24 5JE A £820 37a Summerfield Terrace, Aberdeen, AB24 5JE A £820 37b Summerfield Terrace, Aberdeen, AB24 5JE A £820 37c Summerfield Terrace, Aberdeen, AB24 5JE A £820 37d Summerfield Terrace, Aberdeen, AB24 5JE A £820 37e Summerfield Terrace, Aberdeen, AB24 5JE A £820 37f Summerfield Terrace, Aberdeen, AB24 5JE A £820 Ground Floor Left, 39 Summerfield Terrace, Aberdeen, AB24 5JE A £820 First Floor Left, 39 Summerfield Terrace, Aberdeen, AB24 5JE A £820 First Floor Right, 39 Summerfield Terrace, Aberdeen, AB24 5JE A £820 Second Floor Right, 39 Summerfield Terrace, Aberdeen, AB24 5JE A £820 Ground Floor Right, 39 Summerfield Terrace, Aberdeen, AB24 5JE A £820 Second Floor Left, 39 Summerfield Terrace, Aberdeen, AB24 5JE A £820 Ground Floor Left, 41 Summerfield Terrace, Aberdeen, AB24 5JE A £820 Ground Floor Right, 41 Summerfield Terrace, Aberdeen, AB24 5JE A £820 First Floor Right, 41 Summerfield Terrace, Aberdeen, AB24 5JE A £820 Attic Floor Right, 41 Summerfield Terrace, Aberdeen, AB24 5JE A £820 First Floor Left, 41 Summerfield Terrace, Aberdeen, AB24 5JE A £820 Second Floor Right, 41 Summerfield Terrace, Aberdeen, AB24 5JE A £820 Second Floor Left, 41 Summerfield Terrace, Aberdeen, AB24 5JE A £820 Attic Floor Left, 41 Summerfield Terrace, Aberdeen, AB24 5JE A £820 Ground Floor Left, 43 Summerfield Terrace, Aberdeen, AB24 5JE A £820 Ground Floor Right, 43 Summerfield Terrace, Aberdeen, AB24 5JE A £820 First Floor Left, 43 Summerfield Terrace, Aberdeen, AB24 5JE A £820 First Floor Right, 43 Summerfield Terrace, Aberdeen, AB24 5JE A £820 Second Floor Left, 43 Summerfield Terrace, Aberdeen, AB24 5JE A £820 Second Floor Right, 43 Summerfield Terrace, Aberdeen, AB24 5JE B £957 Attic Floor Left, 43 Summerfield Terrace, Aberdeen, AB24 5JE A £820 Attic Floor Right, 43 Summerfield Terrace, Aberdeen, AB24 5JE A £820 Ground Floor Left, 45 Summerfield Terrace, Aberdeen, AB24 5JE A £820 Ground Floor Right, 45 Summerfield Terrace, Aberdeen, AB24 5JE A £820 First Floor Left, 45 Summerfield Terrace, Aberdeen, AB24 5JE A £820 First Floor Right, 45 Summerfield Terrace, Aberdeen, AB24 5JE B £957 Second Floor Left, 45 Summerfield Terrace, Aberdeen, AB24 5JE A £820 Attic Floor Right, 45 Summerfield Terrace, Aberdeen, AB24 5JE B £957 Second Floor Right, 45 Summerfield Terrace, Aberdeen, AB24 5JE A £820 Attic Floor Left, 45 Summerfield Terrace, Aberdeen, AB24 5JE A £820 Second Floor Right, 47 Summerfield Terrace, Aberdeen, AB24 5JE A £820 Attic Floor Left, 47 Summerfield Terrace, Aberdeen, AB24 5JE A £820 Attic Floor Right, 47 Summerfield Terrace, Aberdeen, AB24 5JE A £820 Second Floor Left, 47 Summerfield Terrace, Aberdeen, AB24 5JE A £820 Ground Floor Left, 47 Summerfield Terrace, Aberdeen, AB24 5JE A £820 Ground Floor Right, 47 Summerfield Terrace, Aberdeen, AB24 5JE A £820 We have no records of any properties subject to Business Rates in AB24 5JE. 
Some premises are exempt from Business Rates, including agricultural land and buildings (including fish farms), buildings registered for public worship or church halls, and buildings used for training or welfare of disabled people. If any are present in AB24 5JE they will not be listed here.
{"url":"https://counciltaxrates.info/ab245je","timestamp":"2024-11-11T03:35:07Z","content_type":"text/html","content_length":"19249","record_id":"<urn:uuid:2d799713-15a4-4404-abde-0950128e8c9b>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00813.warc.gz"}
How to Calculate Average in Excel?

How to calculate average? Let's imagine you're looking for the average number of days it takes different employees to accomplish a task. Alternatively, you may wish to determine the average temperature on a specific day during a 10-year period. The average of a bunch of numbers can be calculated in a variety of ways. The AVERAGE function calculates central tendency, or the position of a group of numbers in a statistical distribution. The following are the three most prevalent measures of central tendency.

Average

This is the arithmetic mean, which is derived by adding a group of numbers and then dividing by the count of those numbers. For example, 30 divided by 6, which is 5, is the average of 2, 3, 3, 5, 7, and 10.

Median

The number in the middle of a set of numbers. Half of the numbers have values higher than the median, while the other half have values lower than the median. The median of 2, 3, 3, 5, 7, and 10 is 4, for example.

Mode

The number that appears most frequently in a group of numbers. The mode of 2, 3, 3, 5, 7, and 10 is 3, for example.

These three measures of central tendency are exactly the same for a symmetrical distribution of a collection of numbers. They can be different in a skewed distribution.

Calculate the average of numbers in an adjacent row or column

Perform the following actions: To determine the average, click a cell below or to the right of the numbers you wish to average. Click the arrow next to AutoSum in the Editing group on the Home tab, then click Average, and then hit Enter.

Calculate the average of two or more numbers that are not in the same row or column

Use the AVERAGE function to complete this task. Fill in the blanks in the table below.

Formula | Explanation (Result)
Averages all of the numbers in the list above (9.5)
=AVERAGE(A2:A4,A7) | Averages the first three and the last number in the list (7.5)
=AVERAGEIF(A2:A7, ">0") | Averages the numbers in the list, except those that contain zero, such as cell A6 (11.4)

Make a weighted average calculation

Use the SUMPRODUCT and SUM functions to complete this operation. In this example, the average price paid for a unit is calculated based on three separate purchases, each of which is for a different number of units at a different price per unit.

=SUMPRODUCT(A2:A4,B2:B4)/SUM(B2:B4) | Divides the total cost of all three orders by the total number of units ordered (24.66)

Calculate the average of a set of numbers while omitting zeros

Use the AVERAGE and IF functions to complete this task. Copy the table below, keeping in mind that copying it to a blank worksheet may make it easier to grasp.

=AVERAGEIF(A2:A7, ">0") | Averages the numbers in the list, except those that contain zero, such as cell A6 (11.4)

Q1: How do you figure out the average?

Average: This is the arithmetic mean, which is derived by adding a group of numbers and then dividing by the count of those numbers.
For example, the average of 2, 3, 3, 5, 7, and 10 is 30 divided by 6, which is 5. Q2: What are the three methods for calculating the average? Averages are divided into three categories, known as the mean, median, and mode. Q3: How do you calculate the average of two percentages? First convert each percentage into a count (the percentage multiplied by its sample size), add the two counts, and divide that sum by the sum of the two sample sizes. For example, if the counts add up to 95 and the sample sizes add up to 350, then 95 divided by 350 yields 0.27. The average percentage is then obtained by multiplying this decimal by 100: 0.27 multiplied by 100 = 27 percent. Q4: In Excel, how do you write an average formula? To determine the average, click a cell below the column or to the right of the row of numbers you wish to average. On the HOME tab, click the arrow next to AutoSum > Average, then hit Enter.
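The same calculations are easy to check outside Excel. Below is a minimal Python sketch that reproduces the mean, median, mode, and the weighted average described above; the price and unit figures are assumptions chosen so that the weighted average comes out to the 24.66 quoted in the article, not values taken from it.

from statistics import mean, median, mode

values = [2, 3, 3, 5, 7, 10]            # the example list from the article
print(mean(values))                      # arithmetic mean -> 5
print(median(values))                    # middle value -> 4
print(mode(values))                      # most frequent value -> 3

# Weighted average: total cost divided by total number of units,
# mirroring =SUMPRODUCT(prices, units)/SUM(units) in Excel.
prices = [20.0, 25.0, 35.0]              # assumed price per unit
units = [500, 750, 200]                  # assumed number of units per order
weighted_avg = sum(p * u for p, u in zip(prices, units)) / sum(units)
print(round(weighted_avg, 2))            # 24.66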
{"url":"https://trendblog.net/how-to-calculate-average/","timestamp":"2024-11-13T14:05:58Z","content_type":"text/html","content_length":"121983","record_id":"<urn:uuid:34a6bdb6-df07-431c-a42e-aa6bae3efe4d>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00621.warc.gz"}
Hi, I'm Xiao Jie. This is an R package I wrote based on my own data analysis needs. I'm glad you found it. I will update some useful functions here on the public account "bioinfoplanet" and also do some other sharing.

if(!require(tinyarray))devtools::install_github("xjsun1221/tinyarray",upgrade = FALSE,dependencies = TRUE)

Alternatively, click the green button "Code" on this page, then click "Download ZIP" to download it to your working directory, and install it with devtools::install_local("tinyarray-master.zip",upgrade = F,dependencies = T).

ggheat() is a heatmap function built on ggplot2. It is still relatively immature and is mainly useful for aligning plots and collecting legends. Something about ggheat(): https://mp.weixin.qq.com/s/WhsBf6QAhVXeXeScM59cSA

2. Downstream Analysis of Gene Expression Array Data from the GEO Database
geo_download(): Provide a GEO number and get back the expression matrix, clinical information table, and platform number used.
find_anno(): Look up the annotation of the array platform.
get_deg(): Provide the array expression matrix, grouping information, and probe annotation, and get back the differential analysis results.
multi_deg(): Differential analysis for multiple groups (up to 5).
If you want to do differential analysis and get the common figures in one step, you can use get_deg_all() and multi_deg_all(). This part mainly integrates and simplifies the differential analysis workflow of GEOquery, AnnoProbe, and limma.
quick_enrich(): Simple and intuitive enrichment analysis.
double_enrich(): Separate enrichment of up- and down-regulated genes, combined with plotting.

3. Exploring Expression Matrices
make_tcga_group(): Quickly get the grouping according to the TCGA sample naming rules.
sam_filter(): Remove duplicate tumor samples in TCGA.
match_exp_cl(): Match a TCGA expression matrix with clinical information.
trans_array(): Replace the row names of a matrix, such as replacing the probe names of an expression matrix with gene names.
trans_exp(): Convert TCGA or TCGA+GTEx data to gene IDs (old versions, gencode v22 or v23).
trans_exp_new(): Convert TCGA or TCGA+GTEx data to gene IDs (new versions).
t_choose(): Do t-tests for individual genes in batches.
cor.full() and cor.one(): Calculate correlations between genes in batches.

4. Survival Analysis and Visualization
point_cut(): Calculate the best cutoff point for survival analysis in batches.
surv_KM(): Do KM survival analysis in batches, supporting grouping with the best cutoff point.
surv_cox(): Do single-factor Cox regression in batches, supporting grouping with the best cutoff point.
risk_plot(): Risk factor plot (three linked panels).
exp_boxplot(): Draw tumor-vs-normal boxplots for the genes of interest.
exp_surv(): Draw KM plots for the genes of interest.
box_surv(): Draw boxplots and KM plots for the genes of interest.

5. Something about network graphs
hypertest(): Do hypergeometric distribution tests for mRNA and lncRNA in batches.
plcortest(): Do correlation tests for mRNA and lncRNA in batches.
interaction_to_edges(): Generate the edge (connection) table for the network graph based on the relationship table.
edges_to_nodes(): Generate the node table based on the edge table.
dumd(): Count how many distinct values each column of a data frame has.
intersect_all(): Take the intersection of any number of vectors.
union_all(): Take the union of any number of vectors.
{"url":"https://cran.case.edu/web/packages/tinyarray/readme/README.html","timestamp":"2024-11-03T03:59:03Z","content_type":"application/xhtml+xml","content_length":"5681","record_id":"<urn:uuid:d28f837c-74d0-4b19-8b01-2dafc2d932f8>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00622.warc.gz"}
Simultaneous nonlinear equations simultaneous nonlinear equations Related topics: steps to factoring trinomials maths bearing worksheets algebra homework help equation editor in microsoft office course design for secondary math ii radical fractions online algebra equation calculators Free Online Balancing Equation Calculator polar to rectangular converter ti 89 commutative property of multiplication worksheet calculating fractions Author Message Sreamere Posted: Tuesday 22nd of Dec 11:38 I'm having great difficulty knowing the logic behind the problem regarding simultaneous nonlinear equations. Can anybody please assist me to understand how to come up with a comprehensive answer and explanation about simultaneous nonlinear equations specifically in topic of radical expressions? I was taught how to solve this before but now I forgot and confused how to answer it. I find it difficult to understand it alone so I think I need help since I think I can’t do this on my own . If anyone knows about simultaneous nonlinear equations can you please help me? Thanks! Back to top Jahm Xjardx Posted: Wednesday 23rd of Dec 13:53 The best way to get this done is using Algebrator software. This software offers a very fast and easy to learn means of doing math problems. You will definitely start liking math once you use and see how effortless it is. I remember how I used to have a difficult time with my Basic Math class and now with the help of Algebrator, learning is so much fun. I am sure you will get help with simultaneous nonlinear equations problems here. Denmark, EU Back to top DVH Posted: Friday 25th of Dec 13:04 Algebrator is used by almost every student in our class. Most of the students in my class work in the evening . Our teacher introduced this tool to us and we all have been using it since Back to top plotodecx Posted: Saturday 26th of Dec 08:31 Hello again. Thanks a ton for the beneficial advice. I usually never trust math programs; however, this piece of software seems worth trying. Can I get a URL to it? From: So. Back to top Noddzj99 Posted: Sunday 27th of Dec 13:29 This is where I found it : https://softmath.com/ordering-algebra.html. There’s nothing to lose because Algebrator has a unlimited money back offer , Hope it helps you too! From: the Back to top
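For readers who want a concrete, tool-independent way to attack a small system of simultaneous nonlinear equations, here is a minimal SymPy sketch; the particular system (x^2 + y^2 = 25 and x*y = 12) is an invented illustration, not one taken from the thread.

from sympy import symbols, Eq, solve

x, y = symbols('x y', real=True)

# An illustrative pair of simultaneous nonlinear equations.
eq1 = Eq(x**2 + y**2, 25)
eq2 = Eq(x * y, 12)

solutions = solve((eq1, eq2), (x, y))
print(solutions)   # four real solutions: (3, 4), (4, 3), (-3, -4), (-4, -3)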
{"url":"https://softmath.com/algebra-software/long-division/simultaneous-nonlinear.html","timestamp":"2024-11-11T17:38:35Z","content_type":"text/html","content_length":"41067","record_id":"<urn:uuid:60e50273-3a2b-408c-861a-028a20445c33>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00783.warc.gz"}
GMAT Word Problem : Quadratic Equations | Wizako GMAT Prep Blog This question is a word problem. A problem solving question that tests your ability to frame an equation and solve it to get the answer. 3 women and a few men participated in a chess tournament. Each player played two matches with each of the other players. If the number of matches that men played among themselves is 78 more than those they played with the women, how many more men than women participated in the tournament? A. 11 B. 13 C. 14 D. 10 E. 8 Correct Answer : 10. Choice D Explanatory Answer Let the number of men who participated in the tournament be ‘n’ Each player played two matches against all the other players. So, the n men would have played 2 * 3 * n = 6n matches against the women. Each of the n men would have played two matches among themselves. Each man plays with each of the other (n – 1) players. So, the number of matches among the men = n(n -1) We know that the men have played 78 more matches among themselves than against the women. i.e., n(n -1) = 78 + 6n or n^2 – n – 6n – 78 = 0 or n^2 – 7n – 78 = 0 Factorizing, we get (n – 13)(n + 6) = 0 or n = 13 or n = -6. n cannot be negative. Hence, n = 13. Number of men = 13. Number of women = 3. So, there are 10 more men than the number of women in the tournament.
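The algebra above is easy to verify mechanically. This short SymPy sketch (an illustrative check, not part of the original solution) re-derives the number of men from the same equation and confirms the difference of 10:

from sympy import symbols, solve

n = symbols('n', positive=True, integer=True)

# Matches among the n men: n*(n-1); matches against the 3 women: 2*3*n = 6n.
men = solve(n*(n - 1) - 6*n - 78, n)
print(men)          # [13] men in the tournament
print(men[0] - 3)   # 10 more men than women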
{"url":"https://gmat-prep-blog.wizako.com/gmat-quant-practice/algebra/gmat-word-problem-quadratic-equations/","timestamp":"2024-11-06T14:37:04Z","content_type":"text/html","content_length":"66984","record_id":"<urn:uuid:09fabfd4-466c-47ed-8433-011de7093b78>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00358.warc.gz"}
Program Costs

The total cost of the Automation Technician Certificate Program is $1850. There are two payment options.
Option 1 - Full Registration: $1850. Students register and pay for the complete program at one time.
Option 2 - Pay-As-You-Go Registration. Initial registration is $590 (includes all learning materials, laboratory simulation software, user guides and Module 1), and registration for each of the remaining 17 modules can be purchased singly or in groups at a later date.

The total cost of the Electronics Technician Certificate Program is $1800. There are two payment options.
Option 1 - Full Registration: $1800. Students register and pay for the complete program at one time.
Option 2 - Pay-As-You-Go Registration. Initial registration is $430 (includes the Learning Package and Module 1) and registration for each of the remaining 23 modules is $60/module. Students may register for one or more modules at any time.

The total cost of the Electromechanical Technician Certificate Program is $1800. There are two payment options.
Option 1 - Full Registration: $1800. Students register and pay for the complete program at one time.
Option 2 - Pay-As-You-Go Registration. Initial registration is $430 (includes the Learning Package and Module 1) and registration for each of the remaining 23 modules is $60/module. Students may register for one or more modules at any time.

The total cost of the Electric Vehicle Technician Certificate Program is $1750. There are two payment options.
Option 1 - Full Registration: $1750. Students register and pay for the complete program at one time.
Option 2 - Pay-As-You-Learn Registration. Initial registration is $580 (includes all learning materials, laboratory simulation software and Module 1) and registration for each of the remaining 13 modules is $90/module. Students may register for one or more modules at any time.

The total cost of the PLC or PLC Technician II Program is $1800. There are two payment options.
Option 1 - Full Registration: $1800. Students register and pay for the complete program at one time.
Option 2 - Pay-As-You-Go Registration. Initial registration is $450 (includes the Learning Package and Module 1) and registration for each of the remaining 18 modules is $75/module. Students may register for one or more modules at any time.

The total cost of the Robotics Technician Certificate Program is $1740. There are two payment options.
Option 1 - Full Registration: $1740. Students register and pay for the complete program at one time.
Option 2 - Pay-As-You-Learn Registration. Initial registration is $570 (includes all learning materials, laboratory simulation software, user guides and Module 1) and registration for each of the remaining 13 modules is $90/module. Students may register for one or more modules at any time.
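Where the per-module price is listed, the difference between the two payment options is straightforward arithmetic. The short Python sketch below (using the Electronics Technician figures quoted above) shows one way to compare them:

def pay_as_you_go_total(initial, remaining_modules, price_per_module):
    # Total cost if every remaining module is purchased individually.
    return initial + remaining_modules * price_per_module

full_registration = 1800   # Electronics Technician, Option 1
option2 = pay_as_you_go_total(initial=430, remaining_modules=23, price_per_module=60)

print(option2)                        # 1810
print(option2 - full_registration)    # 10 dollars more than full registration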
{"url":"https://mwcc-gbc.com/highlights/program-costs","timestamp":"2024-11-13T04:23:18Z","content_type":"text/html","content_length":"22963","record_id":"<urn:uuid:926898fe-46eb-4547-b595-070f0cf04c6e>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00148.warc.gz"}
An example LFSR Some time ago, we examined Linear Feedback Shift Registers (LFSR)s and particularly how to create the logic necessary to implement two different forms of an LFSR: a Fibonacci and a Galois form. Today, let’s go back to the Fibonacci form of a shift register and examine one particular set of coefficients, called TAPS in the code, to see what sort of sequence it produces. Fig 1: An example 5-stage LFSR In particular, let’s look at a 5-stage LFSR with the TAPS register given by 00101. You can see a picture of the logic required to implement this shift register in Fig 1. In this figure, you can see how the output, together with the value of the register two stages earlier, both get added (XOR‘d) together to produce the new MSB of the shift register. Even better, I picked this particular set of coefficients in order to guarantee that this shift register has a maximum length. For a register with five internal bits within it, bits that can never all be equal to zero, this maximum length is 2^5-1 or 31. Hence, this register has an output sequence of 31 pseudorandom bits. Finally, before we start working through the numbers, I’d like to note that Fig 1 looks very similar to the figure we presented earlier when we described how to build a generic shift register. That figure is shown below in Fig 2. Fig 2: The Generic Form of a Fibonacci LFSR Implementation The biggest difference you may notice between these two figures is that the multiplies have been removed. Those taps that were multiplied with zero in this example have been removed. Those taps that were multiplied by one have been replaced by a simple wire. That’s how multiplication is defined, and how it actually takes place within GF(2). Even better, all of this multiplication logic takes place as the LFSR logic is being synthesized–so that what is actually implemented ends up being identical to Fig 1 above. My hope today is that, by specifically stating what the coefficients of an example LFSR are, we might be able to examine and understand how an LFSR works. Further, as an aside, I’ve seen a lot of examples of how a 3-stage LFSR works in text books (TAPS=3'b011). I wanted this presentation to be different enough to generate something barely non-trivial, and so this example produces a longer sequence. Feel free to let me know if you found this easier to understand. Working through the states Fig 3: Example LFSR States Let’s assume that our example starts with an INITIAL_FILL of one–just like the implementation we presented earlier. At each step, the LFSR works by shifting every bit to the right by one, and then calculating the top bit. In our case, that top bit is set by the sum (XOR) of bits 0 and 2. You can see the set of states that this produces in Fig 3 on the left. If you follow this formula, you’ll see that the 00001 state is followed by the 10000 state: the new top bit is set by the sum (XOR) of 0 and 1–resulting in 1. Since there are no ones in bit positions 0 or 2 for a couple of clock periods, the shift register just shifts to the right uneventfully until it gets to 00100–the next time there’s a bit in position 2. The state after 00100 is 10010, since the sum of 1 (position 2) and 0 (position 0) is one and that goes into the top bit while the other bits shift down. One state later, the register equals 01001 and now there’s a bit in position 0, so the state following has a 1 in the MSB. We can follow this logic down to 01101. At this state, instead of adding 0+1 or 1+0 and getting a one as the result, we now have 1+1. 
As you may recall, this addition is done over GF(2). It is equivalent to an exclusive or, and so the new MSB is now 0. By this point in time, you should just about have the hang of it. If not, feel free to work through the states shown on the left and see if you can generate each of them. You may also notice that, after 31 states, the state returns to our initial state–hence our sequence is 31 bits long. As you transition through all of these states, remember that the LSB is the output of this pseudorandom number generator. Hence, you should be able to read down the column on the far right of Fig 3 on our left and read out the pseudorandom numbers that are being produced. Even better, should you wish to adjust where in this sequence you wish to start, all you need to do is change the INITIAL_FILL. For that matter, if you sort the INITIAL_FILL values, you'll find that every value but zero gets used–so the register state just determines where you are within the sequence. Now let's turn our attention to look at some randomness properties of this output sequence. First, let's count how many 1's this sequence produced: 16. That means that the sequence has (almost) a probability of 1/2 for producing a one. How about runs? How many runs of 11 or 00 does this sequence produce? 8 and 7 respectively. This is close to the probability of 1/4 that you'd want. What about runs of three? How many times do you find three 1's in a row, or three 0's in a row? 4 and 3 times respectively. This follows the pattern, and nearly matches the probability of 1/8th that we would expect. Judging from these observations, the sequence certainly looks random. Indeed, it is random enough for most signal processing purposes. It's just not random enough for cryptography–but I'm really not enough of a cryptographic expert to comment further on what it takes to create true cryptographically random numbers. Hopefully running through this example has helped to demystify LFSRs for you. Because they are so easy to implement, their logic maps so nicely to just a couple of transistors, and because their results look random, they have a very important part to play in Digital Signal Processing (DSP). My intent, however, is to create a module that can output these pseudorandom values at 950Mbps–or whatever I can get from my FPGA to handle at its fastest speed. To get there, I'm still going to need to create an LFSR implementation that can produce multiple output bits per clock. We'll have to come back to this topic again, therefore, in order to examine and explain how to do this. Even as Sodom and Gomorrha, and the cities about them in like manner, giving themselves over to fornication, and going after strange flesh, are set forth for an example, suffering the vengeance of eternal fire. (Jude 1:7)
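If you would like to check the table of states and the run statistics for yourself, the whole 5-stage Fibonacci LFSR fits in a few lines of Python. This is only a bit-level software model of the register in Fig 1 (taps at bit positions 0 and 2, INITIAL_FILL of 1), not the Verilog implementation the article is building toward:

def lfsr5(state=0b00001, steps=31):
    # Software model of the 5-stage Fibonacci LFSR with TAPS = 5'b00101.
    out = []
    for _ in range(steps):
        out.append(state & 1)                    # the LSB is the output bit
        fb = ((state >> 0) ^ (state >> 2)) & 1   # feedback = bit 0 XOR bit 2
        state = (state >> 1) | (fb << 4)         # shift right, insert new MSB
    return out, state

bits, final_state = lfsr5()
print(bits)                      # the 31 pseudorandom output bits
print(final_state == 0b00001)    # True: the register returns to its initial fill
print(sum(bits))                 # 16 ones, matching the count above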
{"url":"http://zipcpu.com/dsp/2017/11/11/lfsr-example.html","timestamp":"2024-11-04T04:36:25Z","content_type":"text/html","content_length":"19926","record_id":"<urn:uuid:7ccf5718-7b70-430d-8165-3260503f4595>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00830.warc.gz"}
Achievement: 100k Bineroo grids completed And what does the data say? Bineroo Logo For those who already know what Bineroo is, you can skip the introduction and directly dive into the exciting part. Bineroo is a mobile game I have been developing for the past couple of years. It’s built using the Flutter framework. It is available on both Google Playstore and Apple Appstore PlayStore: https://play.google.com/store/apps/details?id=com.dargil.bineroo AppStore: https://apps.apple.com/us/app/id1457498721 NB: I am working on web and desktop versions. That’s the cool part with Flutter. So stay tuned. I won’t lie: it is an adaptation of an existing game known as Binero, Takuzu, or Binairo depending on the country. I never claimed the paternity of the game concept. I tried to revisit it and give it a new perspective by adding nice new features. Basically, it is a binary puzzle where you need to fill grids with black and white circles by respecting 3 rules: Rule 1: Equal number of blacks and whites in each column and row Rule 2: No more than 2 adjacent circles of the same color Rule 3: Each row is unique. Each column is unique Here is the trailer for the v1.7 release. This gives a pretty good overview of the gameplay. 🔢 Custom metrics I have been collecting data about game usage nearly since the beginning. It allows me to monitor the habits of the players and the adoption of newly released features. In addition to all the basic metrics provided by the analytic platform (DAU, MAU …), there are a few other ones that I watch closely: • The number of played grids per day is a good indicator of the global game activity • The percentage of completed grids per day is interesting to monitor to make sure we didn’t introduce a new feature that makes the game neither too difficult (=frustrating) nor too easy (=boring). • The average numbers of played and completed grids per active user are indicators of the players’ engagement 🎉 The 100k milestone I am proud to announce today that the game hit a major milestone with 100k grids successfully completed by players. This is a pretty big achievement that gives me the motivation to keep going in this direction. Thanks a lot to all players ❤️ 🚀 Next: the 1M milestone Now the big question is when will the game hit the million grids completed milestone? I hope not in ten years 😆 Let’s start by looking at the evolution over time. The graph shows a real acceleration over time which is very good news as it means more and more engagement (more players and/or more grids completed per player). It also means that it will take less time to complete the next 100k grids. But how much time exactly? This is where it really gets funny and geeky. Google spreadsheet provides the possibility to show trendlines for a data series plotted on a graph. The default configuration is a linear trendline but there is a bunch of other possibilities (exponential, polynomial, logarithmic …). In our case, the best type is polynomial. You can even choose the polynomial degree (up to 10) to stick as close as possible to the raw data. Linear Trendline Polynomial degree 3 trendline The equation can be displayed as the trendline label on the graph. So in our case, if we use a 3-degree polynomial type (which is good enough), here is the equation: Once we have the equation, we can use it to calculate future milestones. 
You can do it by hand if you have the time and energy to do so but I am a bit lazy so I delegated this tedious task to an online calculator 😎 I just googled “solve polynomials equation online” and picked the first result https://www.symbolab.com/solver/polynomial-equation-calculator Now we can have the answers to our questions. While it took a bit more than 2 years to get to the first 100k completed grids, the next 100k completed grids should be reached in roughly 251 days. So let’s set a reminder for Sunday, August 6th, 2023 to see where we will be with this milestone 📆 And now the answer to the big question “when will the game hit the million completed grids milestone?” Drum rolls … It will take 3 more years 🎉 The exact date is Wednesday, 17 December 2025 Ok, this is not 10 years but still it is too far. I want to make it in a year. So I need to come up with a plan. What about a community challenge? And you, can you help me? do you have any ideas to propose? I am all ears.
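The same projection can be reproduced without a spreadsheet or an online solver. The sketch below shows the general recipe with NumPy; the day counts and cumulative grid totals are invented placeholders, since the post's actual trendline coefficients only appear in the omitted chart label:

import numpy as np

# Hypothetical history: day index vs. cumulative completed grids.
days = np.array([0, 100, 200, 300, 400, 500, 600, 700, 800])
grids = np.array([500, 2000, 5000, 9500, 16000, 25000, 38000, 60000, 100000])

# Fit a degree-3 polynomial trendline, as the spreadsheet does.
trend = np.poly1d(np.polyfit(days, grids, deg=3))

# Solve trend(x) = target to estimate when a milestone will be reached.
target = 200_000
roots = (trend - target).roots
future_days = sorted(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > days[-1])
print(trend)         # the fitted trendline coefficients
print(future_days)   # projected day index (or indices) for the milestone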
{"url":"https://dargil.medium.com/achievement-100k-bineroo-grids-completed-86e56c881a86?responsesOpen=true&sortBy=REVERSE_CHRON&source=user_profile---------5----------------------------","timestamp":"2024-11-10T08:48:12Z","content_type":"text/html","content_length":"129332","record_id":"<urn:uuid:b52a0811-b2e1-4c97-8db7-1845c7735c9d>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00707.warc.gz"}
Dear Readers I want ya'll to check this out. somalian mens are so lazy and dumb ass hell.The reason am saying this is because they dont do what they suppose to do.Only thing they know is how to chew qaad and talk about qabiils, They cant even cook their own food.When you talk to somalian dude and ask why You didn't learn how to cook your own food he gonna said that is ladies jobs and not mens jobs.My day only thing he know how to cook is shaah and the reason he learn shaah is because he can chew his qaad with it.hey readers I know this is joke room,but this topic is not funny to me its sad shawty I hope we change. check this out all the singal ladies I hope yall find the right guy on future who at least can cook his own food.that will be something.us me am still young but,when my time come I will gonna do better then my dad.I and love my dad this-is just example. am out for good peace and 1 love yall.project is bounce. Hey dawg jamaal am not disrespecting am just telling the true so if u dont like the true move on shawty or if u got problem holla and explain ur comment aight. 1love all peace. Atl Project If you think all somalimen are lazzy, are u now or will be one of them. OR is this another way of explaining to us the living style of your male relatives.? or are you shemale confused about his/her identity ?? Hey lefty I didn say all the somalian are lazy but,some of them are.and the reason I write this topic is because I want my generation to be a perfect aight.so dont give me wrong brother if yall dont like I will not write topic like this again and I like the way u explain ur comment too .peace shawty. I agree with u ATL-PROJECT-BOY with every comment u made. I was just wondering u're age because u got me curious. Jamaal-11 u need u're priorities checked because a boy has better knowledge about the time we live in, its nt like it was back in the day where men hunted for food and the woman stayed at home and took care of the babies. If u are intending on getting an intelligent woman u need to get u'te facts straight. U're the one that should be called a (BOY) because u sound narrow minded. By the way this is from a 17 year old female, that should be letting u know that our generation is informed about our society better. Atl fella is this a way of advertising ya lame a**(u can cook n sh*it), nigga i aint intimidating ya but u know wha, u dont need 2 change no generation. The most pathetic thing u need 2 change is ur grammer coz mos def it needs some boost. Otherwise this is jokez side, so all u gotta do is move it 2 its right place. Atl fella is this a way of advertising ya lame a** DAMN..........Ill keep my comments to myself Jamaal I agree with u Bro. Since when do gron ass man take advice from Lil boys Like ur Self. You can be saying all somali men Chew Qaat and what ever. think carefully before u post Such Msg's LIL Bro. Kol hadaan La tababarin Aya Baraya Aya baraya ATL i'm glad to see a young brotha noticing and distinguishing that men (not somalian only) need to take care of themselve in order to benefit his wife/kids. But dont generalize all the somalian brotha's cuz believe it or not i've seen my share goods of great guys who cook,actractive,respect and honor their wife's wish.And no one is perfect so get that out of you're head..but least you can do is just give it a try and try to reason your thoughts cuz not everyone will get the meaning you're sayin. 
Just like that song said..."you wanna wear the pant guess what.gotta be a men sur'e lil boys all ways jump to conclutions, so plz dont be affended by what he said but build him so he woudnt say sh@# like that anymore, cause in the real world he may get heart by some ppl, but anyways everybody has balwad so if it is qat so be it, it our culture and i luv it.. Walahi to be frank i agree with ATL-Project-Boy. Nowaday you dont see somalian men cooking food or doing house chores. He depends on his wife to do everything while all they do is eat qaad and sip tea. It would be better if somalian men changed and atleast helped their wives around the house and learn a thing or two. It wouldn't hurt if somalian men changed. May Allah help all those somalian brothaz out there.
{"url":"https://www.somaliaonline.com/community/topic/6824-somalian-mens-need-to-change-this/?tab=comments#comment-110920","timestamp":"2024-11-07T08:56:45Z","content_type":"text/html","content_length":"261937","record_id":"<urn:uuid:8a84d373-bdb4-4693-b20a-a6423ea20903>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00284.warc.gz"}
Consecutive numbers - math word problem (81634)
Consecutive numbers: How many ways are there to arrange the numbers 3, 2, 15, 8, and 6 so that the even numbers are arranged in ascending order (not necessarily consecutively)?
Correct answer:
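A brute-force check of this kind of counting problem takes only a few lines of Python; the sketch below enumerates all arrangements of the five numbers and keeps those in which the even numbers appear in ascending order:

from itertools import permutations

numbers = [3, 2, 15, 8, 6]
evens_ascending = sorted(n for n in numbers if n % 2 == 0)   # [2, 6, 8]

count = sum(
    1
    for p in permutations(numbers)
    if [n for n in p if n % 2 == 0] == evens_ascending
)
print(count)   # 5!/3! = 20 arrangements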
{"url":"https://www.hackmath.net/en/math-problem/81634","timestamp":"2024-11-04T05:43:29Z","content_type":"text/html","content_length":"55976","record_id":"<urn:uuid:43e340bf-a3c8-44df-88d7-609fc6b73019>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00015.warc.gz"}
© 2005 Anna Bigatti GNU Free Documentation License, Version 1.2 CoCoALib Documentation Index User documentation for ModuleTermOrdering An object of the class ModuleTermOrdering represents an ordering on the module monoid of module terms, i.e. such that the ordering respects the operation .... In CoCoALib orderings and gradings are intimately linked (for gradings see also degree and PPOrdering). Currently, the most typical use for a ModuleTermOrdering object is as a constructor argument to a concrete FreeModule. At the moment there are ? functions which create new ModuleTermOrderings: Pseudo-constructors: (where PPO is a PPOrdering, shifts is a vector<degree>, perm is std::vector<long>, NumComponents is a long) NewWDegTOPos(PPO, NumComponents); NewPosWDegTO(PPO, NumComponents); NewWDegPosTO(PPO, NumComponents); NewWDegTOPos(PPO, shifts); NewWDegPosTO(PPO, shifts); NewPosWDegTO(PPO, shifts); NewWDegTOPos(PPO, perm); NewWDegPosTO(PPO, perm); NewWDegTOPos(PPO, shifts, perm); NewWDegPosTO(PPO, shifts, perm); WDeg is the degree (incl. the shifts) TO is the PPOrdering (incl. the degree, i.e. the first GrDim rows) Pos is the position (according to the "score" given by perm [NYI]) P = Q[x,y] with StdDegLex (==> GradingDim = 1) P(-2) (+) P(-1) i.e. P^2 with shifts = [(2), (1)], and WDegTOPos v1 = [x,0], v2 = [0,y^2]: WDeg(v1) = WDeg(x)+2 = 3, WDeg(v2) = WDeg(y^2)+1 = 3 x < y^2 according to StdDegLex (NB: not "Lex"!) so v1 < v2 The operations on a ModuleTermOrdering object are: out << MTO; // output the MTO object to channel out const std::vector<degree>& shifts(const ModuleTermOrdering& O); long NumComponents(const ModuleTermOrdering& MTO); long GradingDim(const ModuleTermOrdering& MTO); const PPOrdering& ModPPOrdering(const ModuleTermOrdering& MTO); bool IsWDegTOPos(const ModuleTermOrdering& MTO);// true iff MTO is implemented as WDegTOPos bool IsPosWDegTO(const ModuleTermOrdering& MTO); bool IsWDegPosTO(const ModuleTermOrdering& MTO); output and OpenMath output is still questionable. Maintainer documentation for ModuleTermOrdering The general ideas behind the implementations of ModuleTermOrdering and ModuleTermOrderingBase are analogous to those used for ring and RingBase. ModuleTermOrdering is a simple reference counting smart-pointer class, while ModuleTermOrderingBase hosts the intrusive reference count (so that every concrete derived class will inherit it). See The only remaining observation to make about the simple class ModuleTermOrdering is that I have chosen to disable assignment -- I find it hard to imagine when it could be useful to be able to assign ModuleTermOrderings, and suspect that allowing assignment is more likely to lead to confusion and poor programming style. There are ? concrete ModuleTermOrderings in the namespace CoCoA::MTO. The implementations are all simple and straightforward except for the matrix ordering which is a little longer and messier but still easy enough to follow. See also the CoCoAReport "Free Modules". Bugs, shortcomings and other ideas do we need a class "shifts"?
{"url":"http://cocoa.altervista.org/cocoalib/doc/html/ModuleTermOrdering.html","timestamp":"2024-11-02T15:37:29Z","content_type":"text/html","content_length":"5351","record_id":"<urn:uuid:933d4acf-be2b-4388-8cec-079ec6b550e0>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00103.warc.gz"}
32nd Annual European Symposium on Algorithms (ESA 2024) Cite as Sebastian Angrick, Ben Bals, Tobias Friedrich, Hans Gawendowicz, Niko Hastrich, Nicolas Klodt, Pascal Lenzner, Jonas Schmidt, George Skretas, and Armin Wells. How to Reduce Temporal Cliques to Find Sparse Spanners. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 11:1-11:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024) Copy BibTex To Clipboard author = {Angrick, Sebastian and Bals, Ben and Friedrich, Tobias and Gawendowicz, Hans and Hastrich, Niko and Klodt, Nicolas and Lenzner, Pascal and Schmidt, Jonas and Skretas, George and Wells, Armin}, title = {{How to Reduce Temporal Cliques to Find Sparse Spanners}}, booktitle = {32nd Annual European Symposium on Algorithms (ESA 2024)}, pages = {11:1--11:15}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-338-6}, ISSN = {1868-8969}, year = {2024}, volume = {308}, editor = {Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.11}, URN = {urn:nbn:de:0030-drops-210822}, doi = {10.4230/LIPIcs.ESA.2024.11}, annote = {Keywords: Temporal Graphs, temporal Clique, temporal Spanner, Reachability, Graph Connectivity, Graph Sparsification}
{"url":"https://drops.dagstuhl.de/entities/volume/LIPIcs-volume-308","timestamp":"2024-11-12T17:33:35Z","content_type":"text/html","content_length":"1049820","record_id":"<urn:uuid:1579ba9f-9f4d-483f-b54d-0061882d34d3>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00712.warc.gz"}
ECCC - Reports tagged with Worst case to average case reductions Reports tagged with Worst case to average case reductions: TR18-056 | 20th March 2018 Zvika Brakerski, Vadim Lyubashevsky, Vinod Vaikuntanathan, Daniel Wichs Worst-Case Hardness for LPN and Cryptographic Hashing via Code Smoothing We present a worst case decoding problem whose hardness reduces to that of solving the Learning Parity with Noise (LPN) problem, in some parameter regime. Prior to this work, no worst case hardness result was known for LPN (as opposed to syntactically similar problems such as Learning with Errors). The ... more >>> TR20-095 | 24th June 2020 Mikito Nanashima On Basing Auxiliary-Input Cryptography on NP-hardness via Nonadaptive Black-Box Reductions Revisions: 1 A black-box (BB) reduction is a central proof technique in theoretical computer science. However, the limitations on BB reductions have been revealed for several decades, and the series of previous work gives strong evidence that we should avoid a nonadaptive BB reduction to base cryptography on NP-hardness (e.g., Akavia et ... more >>> TR21-010 | 11th February 2021 Eric Allender, John Gouwar, Shuichi Hirahara, Caleb Robelle Cryptographic Hardness under Projections for Time-Bounded Kolmogorov Complexity Revisions: 2 A version of time-bounded Kolmogorov complexity, denoted KT, has received attention in the past several years, due to its close connection to circuit complexity and to the Minimum Circuit Size Problem MCSP. Essentially all results about the complexity of MCSP hold also for MKTP (the problem of computing the KT ... more >>> TR22-007 | 14th January 2022 Halley Goldberg, Valentine Kabanets A Simpler Proof of the Worst-Case to Average-Case Reduction for Polynomial Hierarchy via Symmetry of Information We give a simplified proof of Hirahara's STOC'21 result showing that $DistPH \subseteq AvgP$ would imply $PH \subseteq DTIME[2^{O(n/\log n)}]$. The argument relies on a proof of the new result: Symmetry of Information for time-bounded Kolmogorov complexity under the assumption that $NP$ is easy on average, which is interesting in ... more >>> TR22-020 | 18th February 2022 Vahid Reza Asadi, Alexander Golovnev, Tom Gur, Igor Shinkar Worst-Case to Average-Case Reductions via Additive Combinatorics We present a new framework for designing worst-case to average-case reductions. For a large class of problems, it provides an explicit transformation of algorithms running in time $T$ that are only correct on a small (subconstant) fraction of their inputs into algorithms running in time $\widetilde{O}(T)$ that are correct on ... more >>>
{"url":"https://eccc.weizmann.ac.il/keyword/19598/","timestamp":"2024-11-12T14:06:18Z","content_type":"application/xhtml+xml","content_length":"22213","record_id":"<urn:uuid:dc94cdce-455f-4b46-9dd1-6d954d603956>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00344.warc.gz"}
Which Graph for Which Type of Data?

Data plays a crucial role in statistics. In layman's terms, it's all the information you gather to increase your knowledge, reach a conclusion, and consider a hypothesis. Variables come in different types, each conveying certain information. A variable's type indicates what you can learn from it, and also what you cannot derive from it. Therefore, it's important to understand the types of data.

Qualitative and Quantitative Data
Before moving ahead into the types of data, you should know the difference between qualitative and quantitative data. Qualitative data refers to information indicating properties that aren't measurable through numbers. This information is mostly subjective. For instance, marital status, architectural style, eye color, and taste are all qualitative data. On the other hand, quantitative data refers to information documented in the form of numbers. It is used to represent a count or an objective measurement. For instance, body mass index (BMI) and temperature are examples of quantitative data. This type of data is also known as numerical data.

Types of Quantitative Data
When you want to represent your data with numbers, you are collecting quantitative data. This form of data is further split into two categories.

Continuous Data
Continuous variables take numerical values that can be contextually split into smaller increments, including decimal and fractional values. There are countless possible numeric values between any two values. Generally, a scale is used to measure continuous variables. For instance, temperature and height can be measured through continuous data. Continuous variables also let you evaluate properties like standard deviation, range, distribution, median, and mean. A histogram represents the distribution of the values, which is why histograms are an effective method to graph continuous variables. With histograms, you can see whether the distribution is skewed or symmetric, find the most common values, and understand the range of values. You can use a scatterplot to graph two continuous variables. Regression analysis can help with calculating a line's equation. Similarly, you can utilize correlation to determine a relationship's strength. If you have continuous variables that split into groups, use a boxplot to represent the spread and central tendency of each group.

Discrete Data
Discrete quantitative data are a count of the occurrences of an activity, item, result, or property. You cannot split them into smaller increments. For instance, you can have one or two smartphones, but you cannot have 1.5 smartphones. Discrete data represents a finite count of possible values that can be recorded in an observation. Discrete variables can help you calculate and summarize a count, such as the standard deviation, sum, and mean. Bar charts are an effective method of graphing discrete variables. Each bar shows a distinct value, while its height shows that value's proportion of the complete sample.

Qualitative Data: Ordinal, Binary, and Categorical
When you note information with the purpose of categorizing your observations, you collect qualitative data. There are three forms of qualitative variables: ordinal, binary, and categorical. Pie charts and bar charts are traditional tools for graphing qualitative variables, as they are effective for showing the relative percentage of each group within the complete sample. If there are scenarios where you have the option to record a property as either a qualitative or a continuous variable, the most effective method is to record the continuous data, as you can learn more from it.

Ordinal Data
An ordinal variable is one where you have three or more categories, each marked by a specific order. For instance, you can use it for product reviews rated from 'bad' to 'excellent'. Some experts consider ordinal variables a combination of quantitative and qualitative characteristics. For example, the Likert scale is often used to measure satisfaction on a 1-5 scale. You can represent ordinal data through bar graphs.

Binary Data
As the name suggests, binary data is made of two values. This means that if your observations can be analyzed into two categories, then you are working with binary variables. Experts also call these variables indicator or dichotomous variables. For instance, the result of a course is binary data, as it can be either pass or fail. Binary variables are great for calculating a percentage or proportion. With a pie chart, you can represent binary yes/no data as proportions of the whole.

Categorical Data
Categorical data consists of values that can be placed into a countable number of unique groups according to a property. Categorical variables are used to set categories in cases where they don't have any natural order. Analysts call categorical variables both nominal and attribute variables. For instance, you can select an M.S. program in any of these fields: Computer Science, Software Engineering, Statistics, and Data Science. Pie charts can be used to represent categorical data.

Final Thoughts
Now that you have an understanding of the various types of data, what can be learned from them, and how you can graph them, let's go ahead and apply this knowledge to your current processes. To improve the quality of your work, you can consider using Image-Charts for creating a wide range of charts.
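As a concrete illustration of the pairings described above, here is a minimal matplotlib sketch that draws a histogram for a continuous variable and a bar chart for a categorical one; the temperature readings and category counts are fabricated sample data, not figures from the article:

import random
import matplotlib.pyplot as plt

# Continuous data (e.g. temperature readings) -> histogram.
random.seed(0)
temperatures = [random.gauss(21.0, 3.0) for _ in range(200)]

# Categorical data (e.g. chosen M.S. program) -> bar chart.
programs = ["Computer Science", "Software Eng.", "Statistics", "Data Science"]
counts = [42, 31, 18, 25]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.hist(temperatures, bins=20)
ax1.set_title("Continuous variable: histogram")
ax1.set_xlabel("Temperature")

ax2.bar(programs, counts)
ax2.set_title("Categorical variable: bar chart")
ax2.tick_params(axis="x", rotation=20)

plt.tight_layout()
plt.show()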
{"url":"https://www.image-charts.com/blog/which-graph-for-which-type-of-data","timestamp":"2024-11-15T04:44:49Z","content_type":"text/html","content_length":"43419","record_id":"<urn:uuid:18ca2428-7bf2-4d3c-8c47-16375700e05d>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00111.warc.gz"}
Mandelbrot Set Python

First let me get out of the way the glaring problem of Python being a terrible language to make a Mandelbrot Set generator in. Mandelbrot generators need speed to make the best possible images at the highest resolutions. Python, compared to a language such as C – which would be a much better language for this project – is incredibly, incredibly slow to run. This is due to multiple things, mainly Python being an interpreted language, meaning each line of code is processed line by line as the program is executed, while a compiled language like C is converted into a binary file before runtime. The reasons I used Python are: 1. I am a million times more competent with Python than with any other programming language, and 2. Images of sufficient quality can be produced in Python in a relatively reasonable amount of time; good enough for this little experiment.

The Mandelbrot Set

The Mandelbrot Set is a set of numbers: a list of numbers which satisfy one rule. First, we must take the entire set of numbers, which in this case is a small area of the complex plane, with real numbers going along the x axis and imaginary numbers going up the y axis. If you throw a dart onto the complex plane, the position where that dart landed can be described by taking its position on the x dimension and y dimension and combining them together; this is a complex number, a number combining a real part and an imaginary part. A Mandelbrot Set generator takes every position in this plane (every complex number), within certain bounds and at a set level of fidelity, and applies the iteration z -> z^2 + c to it. What this does is take the number 0, square it, and add the current complex number c to it. Then it does this again, but instead of z being 0, z is now the number which the last calculation created. This is repeated many times, and the iteration will do one of two things. First, and this is true for basically all complex numbers except for a small region within roughly 1 to 2 units of the origin, the iteration will blow up as it is squared over and over again, so it will go off to infinity. However, due to the nature of mathematics, there are some numbers which do not spiral off into infinity, instead doing something far more remarkable. These numbers produce a series of calculations which turn into a loop: they stabilise into a repeated pattern of numbers that never expands to infinity. Complex numbers which do this are in the Mandelbrot Set. So, if you repeat this process over and over again for many complex numbers, you create a group of complex numbers which are in the set. But a list of numbers is not very interesting. The magic happens when you give the numbers a colour, most commonly black, and place them back onto the complex plane:

The Mandelbrot Set. A fractal: a beautiful shape with infinite complexity buried in the reality of the universe.

In addition to simply colouring the numbers that are in the set black, pixels can also be coloured by taking the number of iterations it took before the complex number began to increase towards infinity, essentially a measurement of how "stable" that position is. By subtracting the iteration count, multiplied by some arbitrary constant to give it more of an effect, from the red and green values of the pixel, a blue colouration can be given to the space around the set. Going even further than this, a spectrum of colours can be mapped to the number of iterations, creating some brilliantly colourful sets.
I have not added this feature to my code yet The Mandelbrot set is just one of many fractals which can be made this way, playing around with the formula can yield some incredible results. When you increase the power, the number of “bulbs” increases.
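Since the script itself is not shown in the post, here is a short, deliberately simple Python sketch of the approach described above: iterate z -> z^2 + c for each point of a grid over the complex plane, record the escape time, and use it to shade the surroundings of the set. It uses the Pillow imaging library, and the resolution, bounds, and colour scaling are arbitrary choices rather than the author's actual settings:

from PIL import Image

WIDTH, HEIGHT, MAX_ITER = 800, 600, 100
RE_MIN, RE_MAX, IM_MIN, IM_MAX = -2.5, 1.0, -1.25, 1.25

img = Image.new("RGB", (WIDTH, HEIGHT))
pixels = img.load()

for px in range(WIDTH):
    for py in range(HEIGHT):
        # Map the pixel to a point c on the complex plane.
        c = complex(RE_MIN + px / WIDTH * (RE_MAX - RE_MIN),
                    IM_MIN + py / HEIGHT * (IM_MAX - IM_MIN))
        z, n = 0, 0
        while abs(z) <= 2 and n < MAX_ITER:
            z = z * z + c              # the Mandelbrot iteration
            n += 1
        if n == MAX_ITER:
            pixels[px, py] = (0, 0, 0)             # never escaped: in the set
        else:
            shade = max(0, 255 - 8 * n)            # escape-time shading
            pixels[px, py] = (shade, shade, 255)   # bluish halo around the set

img.save("mandelbrot.png")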
{"url":"https://henrytechblog.com/project/programming/mandelbrot-set-python/","timestamp":"2024-11-11T10:41:38Z","content_type":"text/html","content_length":"59453","record_id":"<urn:uuid:58676359-8cf9-4bba-b2bd-2e09bb06aca3>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00533.warc.gz"}
A self-paced course that explores the fundamentals of algebra, including exponents and radicals, ratio and proportion, and linear equations using. Because algebra is such a broad subject, there are endless classes on the subject available at various levels. When it comes to algebra 1 online courses. Learn algebra through Sophia's scenario-based activities designed around real world situations that will help you apply this knowledge in both personal and. Are there any ready made basic algebra courses available online? Or any singular text books I could do this with easily enough? Excel High School offers a two-semester, accredited high school Algebra 1 class completely online. Learn at your own pace and graduate online! In this Live Online Course students will gain an important foundation for all mathematics to follow. They will learn factoring; rational expressions;. Free learning on Udemy · Algebra Boot Camp - Master The Fundamentals of Algebra! If you hate Math, or think you can't do it, take this course! · Math for Middle. Module 1: Algebra Basics. Algebraic Expressions · Module 2: Linear Functions · Module 3: Exponential Functions · Module 4: Systems of Equations and Inequalities. 10 Algebra Courses. Advanced Matrix Theory and Linear Algebra. Indian Institute of Science. Bangalore. Course Outline: Introduction, Vector Spaces, Solutions. These free online Algebra courses will teach you everything you need to know about Algebra. Learn Algebra, earn certificates with paid and free online courses from Harvard, MIT, University of Michigan and other top universities around the world. Enroll in college algebra online and earn credit. This online college algebra course is offered as an on-demand or week session-based class. This course presents traditional concepts in college algebra. Topics include linear, polynomial, rational, radical, exponential and logarithmic functions. Online math courses in geometry, algebra, basic math, calculus and statistics for adult learners, highschool and college students. This course is intended for students looking to create a solid algebraic foundation of fundamental mathematical concepts from which to take more advanced. Master the fundamentals of Algebra 1 with engaging online classes designed for kids and teens. Explore a variety of courses tailored to various skill. Algebra 1 is the second math course in high school and will guide you through among other things expressions, systems of equations, functions, real numbers. Unit 1Unit 1: Algebra foundations · Unit 2Unit 2: Solving equations & inequalities · Unit 3Unit 3: Working with units · Unit 4Unit 4: Linear equations & graphs. Thinkwell Algebra 1 online math course includes detailed minute instructional videos (no textbook required), automatically graded math exercises. Part 1 covers all of the concepts crucial to success in subsequent classes, including the foundations of Algebra, equations, inequalities, linear and non-linear. Learn about algebra from top-rated math teachers. Whether you're interested in learning basic pre-algebra skills, or algebra I and II, including logic gates. UND's college algebra online course covers polynomial and rational, functions, inverse functions, simple conics, systems of equations and determinants. This free course is an introduction to algebra which builds on the idea of using letters to represent numbers. Section 1 looks at finding, simplifying and. Math planet is a free, accessible platform for learning mathematics. 
We offer high school math courses in Pre-algebra, Algebra 1, Algebra 2 and Geometry. This online math course integrates mathematics, specifically algebra with many other areas of study, including history, biology, and geography. You will. Algebra courses are available for free online by top universities, including the University of California at Irvine, the Massachusetts Institute of. Through introductory and advanced online algebra courses delivered by edX, you can learn basic algebra and abstract algebra. An introductory course in algebra. StraighterLine's online College Algebra course can help you earn credits to fulfill your college math prerequisites. Enroll today. Earn college credit by taking the Free Online Algebra Course at Christian Leaders Institute. No obligation to enroll at CLI. Start Course challenge · Community questions. Our mission is to provide a free, world-class education to anyone, anywhere. Khan Academy is a (c)(3. This self-paced course offers you a simple way to get ahead on your degree, and you'll learn from expert math instructors who break down algebra topics in a. Algebra 1 is a course offered by iLEAD Online Charter School and provides the tools to lead learners into the study of Geometry and Algebra 2. Free Mathematics Courses · Case Studies in Functional Genomics · Introduction to Bioconductor · Advanced Bioconductor · High-Dimensional Data Analysis · High-. Pre-Algebra Full Course How To Get Rich With 500 Dollars | Bmn Stock
{"url":"https://ilishmayak.ru/news/online-algebra-course-for-adults.php","timestamp":"2024-11-13T11:31:38Z","content_type":"text/html","content_length":"11718","record_id":"<urn:uuid:35678ede-7a7a-4789-b527-3fd3283680a1>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00148.warc.gz"}
elgamal cryptography python The security of the ElGamal encryption scheme is based on the computational Diffie-Hellman problem ().Given a cyclic group, a generator g, and two integers a and b, it is difficult to find the element \(g^{ab}\) when only \(g^a\) and \(g^b\) are known, and not a and b.. As before, the group is the largest multiplicative sub-group of the integers modulo p, with p prime. In cryptography, the ElGamal encryption system is an asymmetric key encryption algorithm for public-key cryptography which is based on the Diffie–Hellman key exchange. Elgamal Encryption Algorithm has three parts. It can be considered as the asymmetric algorithm where the encryption and decryption happen by the use of public and private keys. Documentation. Intended Use: This program was created as an exercise in cryptography in one of my classes at the University of Kentucky. Algorithm collections (RSA,DES,Elgamal) apply encryption and hash Algorithm of RSA, DES and MD5 etc. $14.99. ElGamal encryption is an public-key cryptosystem. Source Code can be found at github here. We explore Elgamal encryption using Elliptic curves and … I later turned it into a module. This cryptosystem is based on the difficulty of finding discrete logarithm in a cyclic group that is even if we know g a and g k, it is extremely difficult to compute g ak. ElGamal encryption is used in the free GNU Privacy Guard software, recent versions of PGP, and other cryptosystems. Python Cryptography. Fully homomorphic encryption (over addition and multiplication) library in python 0 Is there a way to confirm that a homomorphic division (multiplication with inverse) using ElGamal … So here’s an overview of ElGamal … sends $t$, and $z$,alongside $c = E_k(m)$. May the curve be with you Curve configuration. Learn more. ElGamal encryption in Python. Here, I will include the introduction, uses, algorithm, and code in Python for Elgamal Encryption Algorithm. $24.99. Work fast with our official CLI. Encryption algorithm¶. pip install PyCryptoDomex. It uses asymmetric key encryption for communicating between two parties and encrypting the message. 3.6. We will create a python implementation of this concept. m = b'Text'. Unlike symmetric key cryptography, we do not find historical use of public-key cryptography. I do not recommend you use it to protect any sensitive information. The Digital Signature Algorithm is a variant of the ElGamal signature scheme, which should not be confused with ElGamal encryption. This asymmetric-key encryption cryptography is on the basis of the difficulty of finding discrete logarithm in a cyclic group that means we know g^a and g^k, computes g^ak. C# (CSharp) Security.Cryptography.ElGamal.ElGamalManaged - 2 examples found. With the spread of more unsecure computer networks in last few decades, a genuine need was felt to use cryptography at larger scale. Namely, during encryption, there are these two exponentiations in the group G. Exponentiation, remember is a cubic time algorithm using the repeated squaring algorithm. 3.7. These are the top rated real world C# (CSharp) examples of Security.Cryptography.ElGamal.ElGamalManaged extracted from open source projects. Elgamal Elgamal 目录概述基本原理密钥生成加密解密难点 2015 MMA CTF Alicegame 2018 Code Blue lagalem 参考 ECC Lattice-based Cryptography Lattice-based Cryptography Lattice Overview Introduction to Lattices Lattice-based Algorithm CVP Elgamal Encryption is a type of asymmetric key algorithm used for encryption. 
Public Parameter: A trusted third party publishes a large prime number p and a generator g. In this algorithm, someone can know your message only when he/she knows the value of a. It was described by Taher Elgamal in 1985. ElGamal cryptosystem can be defined as the cryptography algorithm that uses the public and private key concept to secure the communication occurring between two systems. GitHub Gist: instantly share code, notes, and snippets. In this post, I would like to share the details of my implementation of a Feistel cipher using a 64 bit block size and a 64 bit key using Python 3 . Conversion from Python objects to SymPy objects; Optional implicit multiplication and function application parsing; Limited Mathematica and Maxima parsing: example on SymPy Live; Custom parsing transformations; Printing. ElGamal encryption is an public-key cryptosystem. It uses asymmetric key encryption for communicating between two parties and encrypting the message. It is used for public-key cryptography and is based on the Diffie-Hellman key exchange. It was proposed in 1984 and is also a double-key cryptosystem, which can be used for both encryption and digital signature. Python Cryptography Toolkit (pycrypto) This is a collection of both secure hash functions (such as SHA256 and RIPEMD160), and various encryption algorithms (AES, DES, RSA, ElGamal, etc.). Elgamal encryption using ECC can be described as analog of the Elgamal cryptosystem and uses Elliptic Curve arithmetic over a finite field. In this project, we visualize some very important aspects of ECC for its use in Cryptography. Completed on 2018-10-26. This idea is mainly based on ElGamal encryption schema and elliptic curves. It is a relatively new concept. Infact, the ElGamal encryption scheme can be viewed as simply comprising a D. Diffie-Hellman key exchange to determine a Python Cryptography Toolkit (pycrypto) This is a collection of both secure hash functions (such as SHA256 and RIPEMD160), and various encryption algorithms (AES, DES, RSA, ElGamal, etc.). This book is 100% complete. 'S PyCrypto module unsecure computer networks in last few decades, a and b specify the characteristic feature of ElGamal..., we visualize some very important aspects of ECC for its use in cryptography z. Encryption for communicating between two parties and encrypting the message at the University of Kentucky, distribution of key... Implemented in Python for ElGamal encryption is a Python cryptography was created as exercise. Encrypt and decrypt text using the web URL computer networks in last few decades, a and b the! Were involved in the classified communication Privacy Guard software, recent versions of PGP, and other cryptosystems for Studio! Problem in related to computing discrete logarithms ( see below ) of more unsecure computer networks in last few,! My classes at the performance of ElGamal cryptosystem and uses elliptic curve arithmetic over a finite field be as! For encryption proposed in 1984 and is based on the basis of the time intensive steps of ElGamal cryptosystem first. Computing discrete logarithms ( see below ) and decryption Tool https: //www.amazon.com/ Understanding-Cryptography-Textbook-Students-Practitioners/dp/3642041000/ now let look. Rated real world C # ( CSharp ) Security.Cryptography.ElGamal.ElGamalManaged - 2 examples found a double-key cryptosystem, which should be... The Diffie-Hellman key exchange to determine a Python module that provides cryptographic services created an. 
Historically, symmetric-key cryptography was well suited for organizations such as governments, the military, and large financial corporations involved in classified communication, and there was little use of public-key cryptography. With the spread of insecure computer networks in the last few decades, a genuine need was felt to use cryptography at a much larger scale, including the distribution of keys between parties who have never met; the Diffie-Hellman key exchange addresses this, although a naive exchange is vulnerable to a man-in-the-middle attack. ElGamal builds directly on that idea: Bob encrypts a message under Alice's published public key and sends her the ciphertext pair (c1, c2). The plaintext can be a short piece of text or an integer below a fixed size bound.

Because ElGamal can be defined over any cyclic group, the same construction carries over to elliptic curves. Elliptic-curve ElGamal is the analog of the classical cryptosystem using elliptic-curve arithmetic over a finite field, where a curve y^2 = x^3 + ax + b is fixed by the coefficients a and b; it is a comparatively recent approach, and visualizing the curve arithmetic is a good way to build intuition for its use in cryptography.

On the implementation side, the Python ecosystem offers several options besides PyCryptodome: the ezPyCrypto module wraps Python's PyCrypto behind a simpler interface, educational repositories publish complete ElGamal encryption and decryption programs (for example under a BSD-3-Clause license on GitHub), and there are implementations of batch screening for verifying many ElGamal signatures at once. ElGamal ciphertexts are also multiplicatively homomorphic, which is why the scheme appears in homomorphic-encryption experiments, including questions about homomorphic division (multiplication by an inverse). None of these classroom implementations should be used to protect sensitive information.
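The multiplicative homomorphism mentioned above is easy to check directly: multiplying two ciphertexts component-wise yields an encryption of the product of the plaintexts. The snippet below repeats the minimal primitives so it runs on its own; as before, the parameters are toy assumptions for illustration only.

```python
import secrets

P, G = 2**127 - 1, 3                      # toy parameters, for illustration only

x = secrets.randbelow(P - 2) + 1          # private key
h = pow(G, x, P)                          # public key

def enc(m):
    k = secrets.randbelow(P - 2) + 1
    return pow(G, k, P), (m * pow(h, k, P)) % P

def dec(c1, c2):
    return (c2 * pow(pow(c1, x, P), -1, P)) % P

m1, m2 = 21, 2
a1, a2 = enc(m1)
b1, b2 = enc(m2)

# Component-wise product of ciphertexts encrypts the product of messages:
prod_ct = ((a1 * b1) % P, (a2 * b2) % P)
assert dec(*prod_ct) == (m1 * m2) % P
print("Enc(21) * Enc(2) decrypts to", dec(*prod_ct))   # 42
```

Homomorphic "division" works the same way: multiplying a ciphertext by the encryption of a modular inverse yields an encryption of the quotient modulo p.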
Science textbook solutions for Class 7 science Chapter 7 Motion, Force and Work - YB Study

Important points to remember
1. The minimum distance traversed in a particular direction along a straight line is called displacement.
2. The length of the route actually traversed by a moving body, irrespective of the direction, is called distance. Distance is a scalar quantity.
3. Velocity is the distance traversed by a body in a specific direction in unit time.
4. The unit of speed or velocity is written as metres/second (m/s).
5. The velocity at a particular moment of time is called instantaneous velocity.
6. The instantaneous velocity can be different at different times.
7. The interaction that brings about the acceleration is called force.
8. Force acts on a body. The scientist Sir Isaac Newton was the first to study force and the resulting acceleration.

Question 1: Fill in the blanks with the proper words from the brackets. (stationary, zero, changing, constant, displacement, velocity, speed, acceleration, stationary but not zero, increases)
a) If a body traverses a distance in direct proportion to the time, the speed of the body is constant.
b) If a body is moving with a constant velocity its acceleration is zero.
c) Speed is a scalar quantity.
d) Velocity is the distance traversed by a body in a particular direction in unit time.

Question 2: Observe the figure and answer the questions. Sachin and Sameer started on a motorbike from place A, took the turn at B, did a task at C, travelled by the route CD to D and then went on to E. Altogether, they took one hour for this journey. Find out the actual distance traversed by them and the displacement from A to E. From this, deduce their speed. What was their velocity from A to E in the direction AE? Can this velocity be called the average velocity?
Answer: Actual distance travelled by Sachin and Sameer from A to E = AB + BC + CD + DE = 3 + 4 + 5 + 3 = 15 km
Displacement from A to E = AB + BD + DE = 3 + 3 + 3 = 9 km
Speed = distance / time = 15 km / 1 h = 15 km/h
Velocity from A to E in the direction AE = displacement / time = 9 km / 1 h = 9 km/h
Since this is the total displacement divided by the total time taken, it can be called the average velocity.

Question 3: From the groups B and C, choose the proper words, for each of the words in group A.
Answer:
1. Work – joule – erg
2. Force – newton – dyne
3. Displacement – metre – cm

Question 4: A bird sitting on a wire flies, circles around and comes back to its perch. Explain the total distance it traversed during its flight and its eventual displacement.
Answer: The total distance travelled by the bird during its flight = 2 × (distance between the point where the bird was sitting and the point from where it takes a turn).
The eventual displacement of the bird is zero, as it returns to its initial point, i.e. where it was sitting.

Question 5: Explain the following concepts in your own words with everyday examples: force, work, displacement, velocity, acceleration, distance.
Answer:
1. Distance: The length of the route actually traversed by a moving body, irrespective of the direction, is called distance. Distance is a scalar quantity.
2. Displacement: The minimum distance traversed by a moving body in one direction from the original point to reach the final point is called displacement. In displacement, both distance and direction are taken into account. Therefore, displacement is a vector quantity. The unit of measurement of distance and displacement is the metre, in the SI as well as in the MKS system of units.
3. Velocity: Velocity is the distance traversed by a body in a specific direction in unit time. It can be calculated as velocity = displacement / time.
4. Acceleration: Acceleration (a) is the change in velocity (Δv) over the change in time (Δt), represented by the equation a = Δv/Δt. This allows you to measure how fast velocity changes, in metres per second squared (m/s²). Acceleration is also a vector quantity, so it includes both magnitude and direction.
5. Work: In physics, work is defined as a force causing the movement, or displacement, of an object. In the case of a constant force, work is the scalar product of the force acting on an object and the displacement caused by that force.
6. Force: Force is the interaction that brings about an acceleration of a body. A force is said to do work if, when acting, there is a displacement of the point of application in the direction of the force.

Question 6: A ball is rolling from A to D on a flat and smooth surface. Its speed is 2 cm/s. On reaching B, it was pushed continuously up to C. On reaching D from C, its speed had become 4 cm/s. It took 2 seconds for it to go from B to C. What is the acceleration of the ball as it goes from B to C?
Answer: The acceleration of the ball between A and B is zero, as the speed and direction of the ball are constant. After point B, a force is applied, so the ball gets accelerated.
Acceleration of the ball from B to C = change in velocity from B to C / time taken for this change = (4 − 2) / 2 = 1 cm/s²

Question 7: Solve the following problems.
a) A force of 1000 N was applied to stop a car that was moving with a constant velocity. The car stopped after moving through 10 m. How much is the work done?
Answer: Work done by the force to stop the car = F × S = 1000 × 10 = 10000 J
b) A cart with mass 20 kg went 50 m in a straight line on a plain and smooth road when a force of 2 N was applied to it. How much work was done by the force?
Answer: Work done by the force = F × S = 2 × 50 = 100 J

# Find out
1. What is meant by speed?
Answer: Speed is the distance traveled per unit of time. It is how fast an object is moving. Speed is the scalar quantity that is the magnitude of the velocity vector. It doesn't have a direction. Higher speed means an object is moving faster.
2. What is the formula for calculating speed?
Answer: To solve for speed or rate use the formula for speed, s = d/t, which means speed equals distance divided by time. To solve for time use the formula for time, t = d/s, which means time equals distance divided by speed.
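As a quick cross-check of the arithmetic in Questions 2, 6 and 7, here is a small Python sketch that recomputes the textbook answers from the given values; the variable names are invented for the illustration.

```python
# Question 2: trip from A to E, total time 1 hour
segments = {"AB": 3, "BC": 4, "CD": 5, "DE": 3}           # km, as given
distance = sum(segments.values())                          # 15 km
displacement = 9                                           # km, A to E as given
time_h = 1
print("speed        =", distance / time_h, "km/h")         # 15.0
print("avg velocity =", displacement / time_h, "km/h")     # 9.0, along AE

# Question 6: acceleration from B to C (speeds in cm/s, time in s)
v_initial, v_final, t = 2, 4, 2
print("acceleration =", (v_final - v_initial) / t, "cm/s^2")   # 1.0

# Question 7: work done by a constant force, W = F * s
print("work (a)     =", 1000 * 10, "J")    # 10000 J
print("work (b)     =", 2 * 50, "J")       # 100 J
```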
Introduction to Sample size determination In an experiment, experimenter is interested in the effect of certain process, intervention or change (treatment) on targeted objects (experimental units). Sample size determination is to decide an appropriate sample size to achieve a desired probability that the clinical trial give statistically significant result, which is known as the power of test. Go to: (Terms in hypothesis test) (2 types of Experimental design) (3 types of test hypothesis) Procedures to identify the appropriate program to use 1. Identify the aspects you want There are 6 aspects: Means, Proportions, Survival Analysis, Phase II Clinical Trial, Confidential Interval and Others The test for correlation coefficient and standard normal calculator is provided in the “Others” category. 2. For Means, Proportions, i. Select the design you want: One sample design, Two samples parallel design or Two samples crossover design. ii. Select the test you want: Equality, Non-Inferiority / Superiority, or Equivalence. iii. Please kindly follow the detailed procedures on the calculator to work out the sample size. Examples are also provided as reference. iv. For two proportions, besides choosing from designs, You can also choose from Confidence Interval – Bristol and Compare Two Proportions – Casagrande, Pike & Smith Please kindly find the detailed guidance and explanation of terms via the links. 3. For Survival Analysis, i. Select the comparison you want: One survival curve or two survival curves ii. There are four choices: 1. Comparison of Survival Curves Using Historical Controls 2. Comparison of Two Survival Curves Allowing for Stratification 3. Comparison of Two Survival Curves – Rubinstein 4. Comparison of Two Survival Curves – Lachin iii. Please kindly follow the detailed procedures on the page to work out the sample size. Examples are also provided as reference. The detailed calculation and formulae are shown in the Formula section on the calculator page. For some terms of survival analysis appear on the calculator, please kindly find detailed explanation via the links. To know more about survival analysis and the common terms, please click: Survival Analysis 4. For Phase II Clinical Trials, i. Select the technique you want: 1. Fleming's Phase II Procedure 3. Simon’s Randomized Phase II Design ii. Please kindly follow the detailed procedures on the page to work out the sample size. Examples are also provided as reference. Detailed explanation to the principle, choice of sample size, hypotheses, decision boundaries and stopping criteria of the Phase II trial is shown in the ‘theory’ section on the calculator page. For some terms of Phase II Clinical Trials on the calculator, please kindly find detailed explanation via the links. To know more about Phase II Clinical Trials, please click: Phase II Clinical Trials For the difference between Fleming’s method and Bayesian method, please click: Difference between Fleming’s and Bayesian method 5. For Confidential Interval i. Select the technique you want: 3. Correlation 5. Relative Risk and Attributable Risk 6. Odds Ratios, ARR, RRR, NNT, PEER ii Please kindly follow the detailed procedures on the page to work out the sample size. Examples are also provided as reference. Formulae and the definition of terms are shown on the page also. 6. For Others, i. Select the technique you want: 1. Correlation Coefficient using z-transformation 2. Standard Normal Calculator ii. 
Please kindly follow the detailed procedures on the page to work out the sample size. Examples are also provided as reference. A statistical hypothesis test is a method of making decisions using data from a scientific study. We have an original belief and now we have some evidence to suspect that the original belief is wrong and need to be updated. Then we carry out hypothesis test to see whether the evidence is significant or not to disprove the original belief. In statistics, a result is called statistically significant if it is unlikely to occur by chance alone according to a pre-determined threshold probability called the significance level. Definition of terms for hypothesis test and scientific experiment ^[4] Top 1. Sample size (N) The number of patients or experimental units required for the trial 2. Treatment The effect of certain process, intervention or change on objects 3. Null hypothesis (H[0]) A general or default position, i.e. no relationship between two treatments or a proposed medical treatment has no effect. Experiment aims at rejecting the null hypothesis in a scientifically and statistically significant sense, i.e. to prove the original belief to be false and update our conception. We reject the null hypothesis if it is not likely to occur. 4. Alternative hypothesis (H[1]/H[a]) It is the alternative to the null hypothesis, suggesting there is relationship between two treatments in an unknown direction (two-sided) or a specific direction (positive or negative, one side). It is the position that our new evidence is suggested or is the position that we want to update our original belief. An example of null hypothesis and two-sided alternative hypothesis is testing equality of means in one sample design: 5. Test statistic It is a numerical summary or function of the observation, e.g. the mean of sample In hypothesis test, we consider whether the value of observed test statistic is extreme by its distribution under the null hypothesis. 6. Statistically significance Statistical significance is a statistical assessment of whether observations reflect a meaningful pattern rather than a pattern by chance. In statistics, a test statistic is believed to be statistically significant if it is more extreme than critical value, i.e. in the rejection region. Test Statistic 7. Significance level (α) It is a desired parameter of a cutoff probability in experimental design to determine whether an observed test statistic is extreme or not. α is usually set to be 0.05, 0.025 or 0.01. We reject the null hypothesis if the probability of the observed test statistic to appear is smaller than α. 8. Critical value It is the marginal value corresponding to a given significance level α. This cutoff value determines the boundary that leads to the decision of rejecting or not the null hypothesis. 9. p-value It is the probability to obtain a new test statistic which is equal or more extreme than the original observed test statistic. A small p-value indicates that it is unlikely to get the value of the observed test statistic. We reject the null hypothesis if p-value is smaller than α. 10. Type I error It is rejecting the null hypothesis when it is true, i.e. false positive. α is the probability of type I error. It equals to the significance level in a simple null hypothesis. 11. Type II error It is not rejecting the null hypothesis when it is false, i.e. false negative. Or say, not accepting the alternative hypothesis when it is true. β is the probability of type II error. 
The power of test equals to 1-β 12. Power of test The probability that a clinical trial will have a significant result, i.e. have a p-value less than the specified significance level α. This probability is computed under the assumption that the treatment difference or strength of association equals the minimal detectable difference. The above figure shows the distribution of a test statistic X under null and alternative hypothesis. As one increases sample size, the spread of the distributions in the above figure decreases, i.e.βdecreases (power increases). Thus if the statistical test fails to reach significance, the power of the test becomes a critical factor in reaching an inference. It is not widely appreciated that the failure to achieve statistical significance may often be related more to the low power of the trial than to an actual lack of difference between the competing therapies. Clinical trials with inadequate sample size are thus doomed to failure before they begin. Thereforeone should take steps to ensure that the power of the clinical trial is sufficient to justify the effort involved.^[21] 13. Minimal detectable difference The smallest difference between treatments you desire to be able to detect. It is the smallest difference to be clinically important and biologically plausible in clinical trial. 14. One-sided test It is a test for particular direction, stated in the alternative hypothesis. For example, it can be, choosing one of the directions in alternative hypothesis. 15. Two-sided test It is a test for both directions, stated in the alternative hypothesis. For example, E.g. One-sided test and two-sided test with same significance level = 0.05: Two types of experimental design Top Parallel design ^[9] It is a design for a clinical trial in which a patient is assigned to receive only one of the study treatments. It compares the results of a treatment on two separate groups of patients. The experimental units (patients) are put into 2 groups randomly and each group receives one and only one treatment. Then the results of treatment in two groups are compared. Conducted properly, it provides assurance that any difference between treatments is in fact due to treatment effects (or random chance), rather than some systematic differences between the groups of subjects. For example, let and be the mean of the response of the study endpoint of interest. Also let and be the inter-subject variance and intra-subject variance, respectively. Assuming the equivalence limit is , , where and (by Chow and Wang,2001) Crossover design ^[10] It is a design for a clinical trial in which a patient is assigned to receive more than one of the study treatments. It is a repeated measurements design such that each patient receives different treatments during the different time periods. It compares the results of a set of treatments on the same group of experimental units (patients). So in the design each patient serves as his/her own matched control. The sequence of treatment received in each experimental unit is random. For example, subject 1 first receives treatment A, then treatment B, then treatment C. Subject 2 might receive treatment B, then treatment A, then treatment C. It has the advantage of eliminating individual subject differences from the overall treatment effect, thus enhancing statistical power. On the other hand, it is important in a crossover study that the underlying condition not changes over time, and that the effects of one treatment disappear before the next is applied. 
Therefore, it is usually used to study chronic diseases, and there is a wash-out period between treatments to prevent carryover effects. For the crossover design, with an equivalence limit δ, the corresponding sample size formula is given by Chow and Wang (2001).

Various types of test hypothesis ^[2] ^[11]

Recall that the null hypothesis is a general or default position and is the position that we want to disprove or reject. The alternative hypothesis is the position opposite to the null hypothesis and is the position suggested by the new evidence. A hypothesis test checks whether the evidence is significant enough to reject the null hypothesis (the original belief) and establish a new belief (the alternative hypothesis).

1. Testing equality
It tests for the equality of a sample value with a targeted constant value, or tests for the equality between treatment and active control/placebo. Assume a larger value indicates better performance. The null hypothesis states that the sample value equals the targeted value. The alternative hypothesis is that the sample value is not equal to the targeted value, in either direction. In two-sample cases, testing equality is testing whether the values from the two samples are equal or not, i.e. H[0]: the two values are equal, against H[1]: they differ in either direction.

2. Testing non-inferiority / superiority ^[14]
Here δ > 0 is the non-inferiority margin (also called the superiority margin), C represents the standard approved treatment/product (control), and T represents the new treatment/product. The hypotheses can be written as:

│Test            │Null hypothesis     │Alternative hypothesis│
│Non-inferiority │H[0]: T − C ≤ −δ    │H[1]: T − C > −δ      │
│Superiority     │H[0]: T − C ≤ δ     │H[1]: T − C > δ       │
│Equivalence     │H[0]: |T − C| ≥ δ   │H[1]: |T − C| < δ     │

T: Treatment, C: Control.

Assuming that the values to the right of zero correspond to a better response with the new drug, so that values to the left indicate that the control is better: non-inferiority means a treatment is at least not appreciably worse than an active control/placebo by the non-inferiority margin δ. That means the new treatment does not perform appreciably more poorly than the active control/placebo. This corresponds to the inequality stated in the alternative hypothesis. Conversely, inferiority means that a treatment is poorer than an active control/placebo by the non-inferiority margin δ. Superiority means a treatment is more effective than the active control by the superiority margin δ, as stated in the alternative hypothesis. Conversely, non-superiority means that a treatment is not better than an active control/placebo by the superiority margin δ. There are two types of superiority hypotheses: the above hypotheses are known as hypotheses for testing clinical superiority; when δ = 0, they are referred to as hypotheses for testing statistical superiority.

It may be confusing the first time you see this title: when something, say "A", is not inferior to "B", it means that "A" is not much worse than "B", but not necessarily superior to (better than) "B", and vice versa. Then why are non-inferiority and superiority tested together? Testing non-inferiority and testing superiority are two separate tests using the same setting of H[0] and H[a], but with different signs of the margin. Assuming that a larger value of T represents better performance, if the margin is −δ, then H[0] means that the test drug is inferior to the control and H[1] is the non-inferiority of the test drug. If the margin is δ, then H[0] means the test drug is not superior to the control and H[1] is the superiority of the test drug.

There is also a possible confusion with Testing Equality above. For Testing Equality, the equation corresponding to equality is stated in the null hypothesis, and it is what we want to reject.
Actually we expect to have difference between two treatments, as stated in the alternative hypothesis. But by convention, we call this as testing equality. Compared to Testing equality, in testing non-inferiority/ superiority, non-inferiority /superiority is stated in the alternative hypothesis. The opposite, which is inferiority/non superiority of treatment and control, is stated in the null hypothesis. That is we expect to the test drug to be superior/ not inferior to the control. We put what we expect in the alternative hypothesis. E.g. In a test of superiority, to examine the effect of a test drug, H[0 ]is the response of test drug is less than that of placebo by δ. H[a ]is the response of test drug is greater than that of placebo. The test helps us to see whether the test drug is superior to the placebo by an amount of δ. In two sample cases, testing non-inferiority/ superiority compares the values from two samples, i.e. Sample Size Determination ^[11] For a superiority trial (S), the necessary sample size (N) depends on δs, the clinically important difference. For a non-inferiority trial (NI), the necessary sample size depends on δ[NI], the upper bound for non-inferiority. When δ[NI] =δ[S], the necessary sample size for the non-inferiority trial is the same as superiority trial under the assumption of T-T[0] = 0 On the other hand, δ[S] is typically larger than δ[NI], which causes the sample size for a non-inferiority trial often to be much larger than that of a superiority trial. where δ > 0 is the margin of clinically accepted difference, called equivalence margin. Here equality and equivalence are two different concepts. Equality only focuses on whether the values are equal or not. Equivalence means the difference of treatment and active control is within specific amount (δ) in either direction (positive or negative) Note that the statement of equivalence is stated in the alternative hypothesis. The inequality in the null hypothesis means that the treatment and control are not equivalence. That means this test aims at proving the treatment and control are equivalence, therefore this new belief is put in the alternative hypothesis. Null hypothesis states that the difference is at least δ. Alternative hypothesis states that the difference is less than δ, i.e. equivalence. In two sample cases, testing equivalence compares the values from two samples, i.e. Proportions Top It is used to determine the required sample size for a desired power of test and control the length of confidence intervals of the difference of proportions not exceeding certain value, compared with two samples parallel design test that only tests for the difference of proportions with a desired power. The value of length is a bound to the length of confidential interval. It is chosen relative to the expected length of the confidence interval, which is calculated by the formula on the webpage. E.g. the expected length calculated is 0.141. Then you can find the sample size with the bound of length to be, say 0.2. For binomial success probabilities, let π[1] andπ[2 ]denote the success probabilities of interest and let Here are the large-sample normal approximations, as the exact results are very complicated and the approximate results usually suffice for sample size determination. For reference: ^[17] First consider the problem of testing H[0]:Δ=0 against H[1] :Δ0. 
Based on a sample of size n from each distribution, let p[1] and p[2] denote the observed proportions of successes, , With the hypothesis testing problem, Fleiss uses approximations based on the asymptotic normality of the estimates to construct a confidence interval for A. The approximate (1 -α) 100 percent confidence interval forΔis and thus the associated hypothesis testing procedure is: Rule: Reject Ho in favour of H1, if 0 is not an element of I, where I is the interval given in (1). The length of the confidence interval given in (1) is Of course, this is a random variable and thus cannot be controlled. If the variance of the two normal distributions described in the previous section had been unknown, then the length of the resulting confidence interval for the difference of the means is based on the Student’s t-distribution and is also a random variable. One approach to this problem is the determination of the expected length. The exact result is difficult to obtain and is unnecessary for the problem of sample size determination. An approximation to this expected length is: Let n[L] denote the sample size required to have L*=L[0], a specified positive value. It is straightforward to show that where . Note that, for fixedπ[1, ]n[L] is maximized at, and symmetric about, π[2] =0.5. Further, n[L][ ]is symmetric in π[1] and π[2] and is maximized at Casagrande, Pike & Smith method ^[18] It is a simple but accurate sample size approximation method for comparing two binomial probabilities. It is shown that over fairly wide ranges of parameter values and ratios of sample sizes, the percentage error which results from using the approximation is no greater than 1%. You can choose one-sided or two-sided test in this method. To find the minimum n to achieve a power of 100βpercent an iterative procedure is required. This involves very extensive calculations and numerous approximations have thus been suggested. The two most commonly employed are: 1. The "arcsin formula" as given, for example, in Cochran and Cox (1957) 2. The "uncorrected x2 formula" as given, for example, in Fleiss (1973) Casagrande, Pike & Smith method is a Derivation of Corrected χ^2^ and is tested to be of good approximation over fairly wide ranges of values. For details in calculation, please read “Casagrande, Pike and Smith (1978) Biometrics 34: 483-486” Survival Analysis^ [5] Top Survival analysis is the study of time between the entry into observation and the substantial events. That means we observe the time needed until an event occur. The events include death, relapse from remission, onset of a new disease and recovery. It usually involves following the patients for a long time. Some common terms in survival analysis: ^[12] 1. Survival function [S(t)] where T is the time of failure or death It is the chance that the subject survives longer than some specified time t. 2. Hazard rate /Hazard function [λ(t)] It is the instantaneous risk of occurrence of an event at time t, given the subject survives until time t or later. Or it is the probability of failure in an infinitesimally small time period (t, t+Δt) conditional on survival until time t or later (that is, T ≥ t) It is a risk measure. The higher the hazard function, the higher the chance of failure in the particular period. Hazard function is a non-negative function. It can be increasing or decreasing. 3. Hazard ratio (δ) It is the ratio of the hazard rate in control group and hazard rate in experimental group. 
It gives an instantaneous comparison between risk of failure in experimental group and control group. 4. Censoring This refers to the value of an observation is only partly known. In survival analysis, that is we have some information about the survival time of a subject, but do not know exactly when it fails. It happen when a person does not encounter the event before the study ends, which is called “administratively censored” or a person is lost to follow-up during the study. Prospective study is a study that follows over time a group of similar individuals who differ in certain factors under study so as to determine the effect of these factors on the rate of outcome. For survival analysis, the usual observation strategy is prospective study. You start to observe in certain well-defined time point and the follow the patients for some substantial period and finding out the time needed for an event to occur. Note that the 4 methods below are not exact. They describe the power function of the log-rank test only under the assumptions of a population model, but not distribution-free. Thus, these methods have been proposed as approximations under the restrictive assumptions of proportional constant hazards, i.e., under the exponential model. In reality, the hazards will not be constant, nor exactly proportional, over time. The log-rank test will still be applicable but may not be maximally efficient. Thus, it is our view that these methods should be cautiously applied using "worst-case" assumptions, such as using the lowest plausible hazard rate for the control group, the smallest clinically relevant reduction in mortality, and appropriate adjustments for departures from the usual assumptions of uniform patient recruitment, complete follow-up, full compliance, and homogeneity over prognostic strata. But each model has certain generalization to release the assumption. ^[19] Reference: Lachin and Foulkes (1986) Biometrics 42: 508 Prospective study and retrospective study ^[1] Top A prospective study watches for outcomes, such as the development of a disease, during the study period and relates this to other factors such as suspected risk or protection factor(s). The study usually involves taking a cohort of subjects and watching them over a long period. The outcome of interest should be common; otherwise, the number of outcomes observed will be too small to be statistically meaningful (indistinguishable from those that may have arisen by chance). All efforts should be made to avoid sources of bias such as the loss of individuals to follow up during the study. Prospective studies usually have fewer potential sources of bias and confounding than retrospective studies. A retrospective study looks backwards and examines exposures to suspected risk or protection factors in relation to an outcome that is established at the start of the study. Most sources of error due to confounding and bias are more common in retrospective studies than in prospective studies. For this reason, retrospective investigation is often criticized although it tends to be cheaper and faster than prospective study. In addition, retrospective cohort studies are limited to outcomes and prognostic factors that have already been collected, and may not be the factors that are important to answer the clinical question. 
If the outcome of interest is uncommon and the required size of prospective study to estimate relative risk is often too large to be feasible, the odds ratio in retrospective study provides an estimate of relative risk. You should take special care to avoid sources of bias and confounding in retrospective studies. Comparison of Survival Curves Using Historical Controls Top It determines the number of patients needed in a prospective comparison of survival curves, when the control group patients have already been followed for some period. Explanation to variables: α is the significance level for the test, usually 0.05 δ is the minimum hazard ratio, which is calculated by dividing the estimated hazard rate of control group by that of experimental group M[S][ ]is the median survival time in month in the control group, which can be estimated from existing control data. r is the accrual rate. It is the rate of arrival of patients per month. It is estimated for future accrual. n[C] and y[C] are the number of deaths observed and the number of patients still at risk in the historical control respectively. Both are obtained from existing control data. τ is the length of the planned continuation period for the study in months T is the length of accrual period for the new study. It is the time needed to recruit patients into the trial. Base on the accrual rate and the required number of accrual target and power of test, you can find an appropriate accrual period to achieve desired power of test and required sample size. So you adjust the variable T (accrual period) until desire power is obtained (e.g. 80%). Several assumptions are made in this model. Firstly, It assumes time to survival is exponential distributed with hazard rate λ. Secondly, It assumes prospective studies are used. It also assumes no withdrawal or losses to follow-up throughout the study. The detailed calculation and formulae are shown in the Formula section on the calculator page. Large randomized trials require longer time and higher cost, therefore the pilot investigations should be carefully designed and analyzed. There are also diseases in which outcome is very predictable based on known prognostic features and historically controlled studies may be viewed as an alternative to randomized clinical trials. For some rare diseases, historically controlled study is suitable.^ The accrual requirement declines as (i) the accrual rate declines,(ii) δ increases, (iii) median survival in the controls decreases, (iv) the number of historical controls increases, and (v) the number of failures already observed in the control group increases.^ [20] Reference: Dixon & Simon (1988) J Clin Epidemiol 41:1209-1213 Comparison of Two Survival Curves Allowing for Stratification Top Stratification means patients are divided into homogeneous sub-groups called strata by a prognostic factor such as severity of disease. Other properties can also be used such as Age > 50 or not, or male and female. In the calculator 2 strata can be set. Explanation to variables: α is the significance level for the test β is the probability of type II error, or (1-power) of the test K is the weight assigned to each stratum, identifying which one is more significant to the result. It is usually proportional to sample size in each stratum. δ is the minimum hazard ratio. 
It is calculated by dividing the estimated hazard rate of control group by that of experimental group M[S][ ]is the median survival time in month in the control group, which can be estimated from existing control data. The sample fractions of control group (Q[C])[ ]and experimental group (Q[E]) can be difference in each stratum and across two strata: T[0] is the accrual period in month. It is the length of time to recruit patients for study in each stratum. T-T[0] is the follow-up period in month. It is the continuation period of all recruited patients to the end of study T in each stratum. For detailed formula and theory, please check the Formula section on the calculator page. Comparison of Two Survival Curves – Rubinstein Top It is the determination of the number of patients needed in a prospective comparison of survival curves with losses to follow-up, when the control group patients have already been followed for some period. The explanation to variables is the same as above “Allowing for Stratification” one. Unlike the other model that only assume time to survival is exponential distributed with hazard rate λ, several assumptions are made under this model. First, the arrival of patients is modeled by a Poisson process with rate n per year. Then the patient is randomly assigned to the experimental group or control group, with equal probability each. Second, the survival time for a patient is assumed to follow exponential distribution and is independent to each other. Third, the time until loss to follow-up is assumed to follow exponential distribution and is also independent to each other. The explanation to variables is the same as above “Allowing for Stratification” one. For detailed formula and theory, please check the Theory section on the calculator page. Comparison of Two Survival Curves – Lachin Top It determines the number of patients needed in a prospective comparison of survival curves, when the control group patients have already been followed for some period. It only assumes the survival time is exponential distributed with hazard rate λ. In determination of sample size, it specifies the minimal relevant difference The explanation to variables is the same as above “Allowing for Stratification” one. For detailed formula and theory, please check the Formula section on the calculator page. Phase II clinical trials ^[3] Top Phase II clinical trial typically investigates preliminary evidence of efficacy and continues to monitor safety. There are three main objectives in treating patients in Phase II clinical trials. The primary objective is to test whether the therapeutic intervention benefits the patient. The second objective is to screen the experimental treatment for the response activity in a given type of cancer. The final objective is to extend our knowledge of the toxicology and pharmacology of the treatment. It involves usually fewer than 50 patients. Patients accrue in several stages in a multiple testing procedure, testing being performed at each stage after appropriate patient accrual has been completed. The number of patients accumulates. This feature is particularly appealing in a clinical setting where there are compelling ethical reasons to terminate a Phase II trial early if the initial proportion of patients experiencing a tumor regression is too low or too high. Phase II trials decide whether the new treatment is promising and warrants further investigation in a large-scale randomized Phase III clinical trial. 
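Before looking at specific designs, it may help to see how the operating characteristics of even the simplest single-stage rule are computed. The sketch below is only an illustration of the underlying binomial calculation, not Fleming's multi-stage procedure or Simon's randomized design discussed here; the response rates and the decision rule are assumed values chosen for the example.

```python
from math import comb

def prob_declare_promising(n, r, p):
    """P(at least r responses out of n patients) when the true response rate is p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(r, n + 1))

n, r = 25, 5                  # treat 25 patients, call the drug promising if >= 5 respond
p0, p1 = 0.10, 0.30           # uninteresting vs clinically interesting response rates

alpha = prob_declare_promising(n, r, p0)   # type I error: "promising" although p = p0
power = prob_declare_promising(n, r, p1)   # power: "promising" when p = p1

print(f"type I error = {alpha:.3f}, power = {power:.3f}")
```

Searching over n and r for the smallest n with acceptable error rates is, in essence, what single-stage sample size calculations do; the multi-stage designs described in this section add interim looks with early-stopping boundaries.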
Phase II clinical trials are generally single-arm studies, but may take the form of multiple-arm trials. Multiple-arm trials can be randomized or non-randomized with or without control arms. It aims at estimating the activity of a new treatment. These “pilot” studies are commonly applied to anticancer drugs to assess the therapeutic efficacy and toxicity of new treatment regimens. Phase II clinical trials are only able to detect a large treatment improvement, e.g. greater than 10%. To detect a small difference in treatment, e.g. less than 5%, one would require a much larger sample size, which is not possible in Phase II studies due to the limited number of subjects eligible for the study and the large number of treatments awaiting study. Phase II studies are prominent in cancer therapeutics as new treatments frequently arise from combinations of existing therapies or by varying dose or radiation schedules. An important characteristic of some Phase II trial designs is the use of early stopping rules. If there is sufficient evidence that one of the treatments under study has a positive treatment effect, then patient accrual is terminated and this treatment is declared promising. Also, if a treatment is sufficiently shown not to have a desirable effect, then patient accrual is terminated and this treatment is declared not promising. Difference between Fleming’s procedure and Bayesian design of Phase II clinical trials^ Top This section describes both the hypotheses and design for Fleming’s and the Bayesian approach to single-arm Phase II clinical trials. Both designs are used for Phase II clinical trials with binary outcomes and continuous monitoring. The fundamental difference between the two designs is the frequentist basis for Fleming’s procedure only depends on the observed results whereas the Bayesian approach uses prior information (Information from previous studies). The testing procedure for Fleming’s procedure is based on the normal approximation to the binomial distribution of the observed number of treatment responses. The resulting decision boundaries, r[g] and a[g], are solved analytically. The Bayesian design incorporates prior information about the treatment being investigated with the observed results to yield revised beliefs about the treatment. The testing procedure is based on the posterior probability of the experimental treatment given the observed data. The posterior probability is a conditional probability computed from a beta distribution which results in the upper and lower decision boundaries, U[n] and L[n], which are evaluated using numerical integration, namely “Simpson’s Composite Algorithm”. Another difference between the two designs is that Fleming’s procedure has only two outcomes at the final recruitment stage, i.e. reject or accept H[0], while the Bayesian design traditionally allows for an inconclusive trial at the final stage (After attaining the maximum sample size set). Simon’s Randomized Phase II Design ^[8] Top In phase II clinical trial, randomized design is proposed to establish the sample size for the study to obtain the treatment with greatest response rate for further / phase III clinical trial. There are some advantages for randomized design: 1. Randomization helps ensure that patients are centrally registered before treatment starts. Establishment of a reliable mechanism to ensure patient registration prior to treatment is of fundamental importance for all clinical trials. 2. 
Comparing to independent phase II studies, the differences in results obtained for the two agents will more likely represent real differences in toxicity or antitumor effects rather than differences in patient selection, response evaluation, or other factors. 3. In randomized phase II clinical trials, one is merely making a rational choice of one arm and is free of any burden to prove statistically that the selected arm is superior. Although it is desirable to select the best treatment, selecting an arm that is equivalent to another or even slightly worse is not considered too grave a mistake. Hence, the error rate to control is the probability of erroneously selecting an arm whose response rate is lower than that of the best arm by an amount with medical importance (for example, 10%). Similarly, the relevant power is the probability of correctly selecting an arm whose response rate is larger than that of the second-best arm by an amount with medical importance (for example, 15%).^[13] The formulae for the probability are shown on the calculator page. Explanation to variables: p is the Lowest response rate among all k treatments k is the number of treatment arms D is the difference in true response rates of the best and the next best treatment Confidence Interval Top Confidence interval (C.I.) is a range providing an interval estimate to true but unknown population parameter and is used to indicate the reliability of an estimate. It is an observed interval calculated from the particular sample, in principle different from sample to sample. The confidence level (1-α) is the proportion of confidence intervals that cover the true parameter, i.e. a 95% C.I. is the interval that you are 95% certain contains the unknown population true value. Its relation with hypothesis test is that the 100(1-α)% confidence interval of the test statistic is the acceptance region of a 2-sided hypothesis test. If the test statistic is more extreme than the upper or lower bound of the confidence interval, the null hypothesis is rejected. The significance level of the test is the complement of the confidence level. One sample proportion Top Proportion is the number of success divided by the sample size. The calculator gives a confidential interval for the estimate. Two sample proportions Top It compares two proportions from independent samples and provides a confidential interval. Confidence intervals of difference not containing 0 imply that there is a statistically significant difference between the population proportions. Correlation Top Correlation indicates whether two variables are associated. It is a value from -1 to 1 with -1 representing perfectly negative correlation and 1 representing perfectly positive correlation. The two variables should come from random samples and have a Normal distribution (or after transformation). The confidence interval is a range which contains the true correlation with 100(1-α)% confidence. Single incidence rate Top Incidence rate is the rate at which new clinical events occur in a population. It is the number of new events divided by the population at risk of an event in a specific time period, sometimes it is the person-time at risk. Incidence is different from prevalence, which measures the total number of cases of disease in a population. Thus, incidence carries information about the risk of having the disease, while prevalence indicates how widespread the disease is. 
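For the two-sample proportion comparison described above, a minimal sketch of the usual large-sample (Wald) confidence interval and the matching z statistic looks like the following; it uses only the Python standard library, and the input counts are made-up numbers for illustration.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_ci(x1, n1, x2, n2, conf=0.95):
    """Wald confidence interval and z statistic for p1 - p2 (large samples, unpooled SE)."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)      # e.g. about 1.96 for 95%
    return diff, (diff - z * se, diff + z * se), diff / se

# Hypothetical counts: 45/120 events in group 1, 30/115 in group 2.
diff, (lo, hi), z_stat = two_proportion_ci(45, 120, 30, 115)
print(f"difference = {diff:.3f}, 95% CI = ({lo:.3f}, {hi:.3f}), z = {z_stat:.2f}")
print("0 outside the CI -> statistically significant difference" if lo > 0 or hi < 0
      else "0 inside the CI -> no significant difference at the 5% level")
```

This mirrors the rule stated above: a confidence interval for the difference that does not contain 0 corresponds to a statistically significant difference between the population proportions.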
Relative Risk and Attributable Risk

│            │Disease      │No disease   │Totals          │
│Exposed     │a            │b            │n[1] = a + b    │
│Non-exposed │c            │d            │n[2] = c + d    │
│Totals      │m[1] = a + c │m[2] = b + d │N = n[1] + n[2] │

Relative Risk is the ratio of the incidence of disease in the Exposed group to that in the Non-exposed group from a cohort/prospective study. If the Relative Risk is larger than 1, it is a positive association. If it is smaller than 1, it is a negative association. Attributable Risk is the amount of disease incidence which can be attributed to an exposure in a prospective study. Population Attributable Risk is the reduction in incidence if the whole population were unexposed, compared with the actual exposure pattern. Relative Risk compares the risk of having a disease for people not receiving a medical treatment against people receiving the treatment. It can also compare the risk of having a side effect of a drug treatment against people not receiving the treatment. Attributable Risk and Population Attributable Risk tell the amount of risk prevented if we do not have a certain exposure. The Exposed group is the group of patients exposed to certain factors of interest, such as a new treatment, age 45 or above, or smoking for 10 years or above.

Odds Ratios, ARR, RRR, NNT, PEER ^[6]

│                │Outcome Positive │Outcome Negative │Totals          │
│Feature Present │a                │b                │n[1] = a + b    │
│Feature Absent  │c                │d                │n[2] = c + d    │
│Totals          │m[1] = a + c     │m[2] = b + d     │N = n[1] + n[2] │

The Odds Ratio (OR) refers to the ratio of the odds of the outcome in two groups in a retrospective study. It is an estimate for the relative risk in a prospective study. Absolute Risk Reduction (ARR) is the change in risk between the 2 groups, and its inverse is the Number Needed to Treat (NNT). Patient expected event rate (PEER) is the expected rate of events in a patient receiving no treatment or conventional treatment. The Z-test for the Odds Ratio shows whether the exposure affects the odds of the outcome. OR = 1 means exposure has no effect on the odds of the outcome. OR > 1 means exposure leads to higher odds of the outcome, and vice versa. The Z-test for 2 Proportions shows whether there is a difference between the proportions of events in the 2 groups. The Chi-square test for Association tests the association between the groups of feature and test result.

Diagnostic Statistics ^[7]

│                       │Disease             │No disease          │Totals          │
│Test Outcome Positive  │a (True Positive)   │b (False Positive)  │n[1] = a + b    │
│Test Outcome Negative  │c (False Negative)  │d (True Negative)   │n[2] = c + d    │
│Totals                 │m[1] = a + c        │m[2] = b + d        │N = n[1] + n[2] │

Sensitivity is the ability of the test to pick up what it is testing for, and Specificity is the ability to reject what it is not testing for. Likelihood ratios determine how the test result changes the probability of certain outcomes and events. Pre-test and Post-test probabilities are the subjective probabilities of the presence of a clinical event or status before and after the diagnostic test. For a positive test, we find the positive post-test probability, and for a negative test, we find the negative post-test probability.

McNemar's Test

│                │Test 2 Positive │Test 2 Negative │Totals          │
│Test 1 Positive │a               │b               │n[1] = a + b    │
│Test 1 Negative │c               │d               │n[2] = c + d    │
│Totals          │m[1] = a + c    │m[2] = b + d    │N = n[1] + n[2] │

McNemar's Test is a test on a 2x2 contingency table. It checks the marginal homogeneity of two dichotomous variables. It is used for data of the two groups coming from the same participants, i.e. paired data. For example, it is used to analyze tests performed before and after treatment in a population.
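The quantities defined in the last few sections are all simple functions of a 2×2 table, so they are easy to compute directly. The sketch below derives relative risk, attributable risk, odds ratio and NNT from one hypothetical table; the counts are invented for the example and do not come from any study.

```python
def two_by_two_stats(a, b, c, d):
    """Rows of the 2x2 table: (a, b) = group 1 with/without the outcome,
    (c, d) = group 2 with/without the outcome (e.g. exposed vs non-exposed)."""
    r1, r2 = a / (a + b), c / (c + d)       # outcome risk in each group
    stats = {
        "risk group 1": r1,
        "risk group 2": r2,
        "relative risk": r1 / r2,
        "attributable risk": r1 - r2,
        "odds ratio": (a * d) / (b * c),
    }
    stats["NNT"] = 1 / abs(r1 - r2)         # number needed to treat (or to harm)
    return stats

# Hypothetical table: 20/100 with the outcome in group 1, 10/100 in group 2.
for name, value in two_by_two_stats(20, 80, 10, 90).items():
    print(f"{name:>17}: {value:.3f}")
```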
References
1. http://www.statsdirect.com/help/basics/prospective.htm
2. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2701110/
3. https://onlinecourses.science.psu.edu/stat509/node/22
4. http://hedwig.mgh.harvard.edu/sample_size/quan_measur/defs.html
5. http://www.stat.columbia.edu/~madigan/W2025/notes/survival.pdf
6. http://www.cebm.net/index.aspx?o=1044
7. http://ceaccp.oxfordjournals.org/content/8/6/221.full
8. http://www.nihtraining.com/cc/ippcr/current/downloads/SV.pdf
9. http://www.statistics.com/index.php?page=glossary&term_id=439
10. http://www.statistics.com/index.php?page=glossary&term_id=424
11. http://www.nyuhjdbulletin.org/mod/bulletin/v66n2/docs/v66n2_16.pdf
12. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3227332/
13. http://onlinelibrary.wiley.com/doi/10.1002/sim.5829/full
14. http://www.scielo.br/scielo.php?pid=S1677-54492010000300009&script=sci_arttext&tlng=en
15. http://annals.org/article.aspx?articleid=736284
16. https://onlinecourses.science.psu.edu/stat504/node/19
17. Bristol (1989) Statistics in Medicine 8: 803-811
18. Casagrande, Pike and Smith (1978) Biometrics 34: 483-486
19. Lachin and Foulkes (1986) Biometrics 42: 507-519
20. Dixon & Simon (1988) J Clin Epidemiol 41: 1209-1213
21. Lachin (1981) Controlled Clinical Trials 2: 94
{"url":"https://www2.ccrb.cuhk.edu.hk/stat/User%20guidance,%20definition%20and%20terminology%20(Online%20Help).htm","timestamp":"2024-11-10T07:28:40Z","content_type":"text/html","content_length":"790928","record_id":"<urn:uuid:65d465ca-b8a4-4ac5-b89a-16740ae0f3ef>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00377.warc.gz"}
Integration of rational fractions
1. Partial fraction decomposition of a proper rational fraction
2. Integrating proper rational functions
3. Integrating improper rational functions

Partial fraction decomposition of a proper rational fraction. A proper rational fraction is decomposed according to the factors of its denominator, which are powers of distinct linear factors of the form (x - r)^m or of irreducible quadratic factors of the form (x^2 + px + q)^m. From these factors we can determine the form of the partial fraction decomposition using the following two rules:

Linear Factor Rule: For each factor of the form (x - r)^m, the decomposition contains the m partial fractions
A[1]/(x - r) + A[2]/(x - r)^2 + ... + A[m]/(x - r)^m,
where A[1], A[2], ..., A[m] are constants to be determined.

Quadratic Factor Rule: For each factor of the form (x^2 + px + q)^m, the decomposition contains the m partial fractions
(A[1]x + B[1])/(x^2 + px + q) + ... + (A[m]x + B[m])/(x^2 + px + q)^m,
where A[1], A[2], ..., A[m], B[1], B[2], ..., B[m] are constants to be determined.

I. Integrating Proper Rational Functions
Example 10: Find the integral; the constants in the decomposition are found by multiplying through by the common denominator and comparing coefficients.
Example 11: Find the integral; the constants are found by substituting x = 2, then x = 3.

II. Integrating Improper Rational Functions
Although the method of partial fractions only applies to proper rational functions, an improper rational function can be integrated by performing long division (or synthetic division) first: if the degree of the numerator is at least the degree of the denominator, division rewrites the fraction as a polynomial plus a proper rational fraction, which can then be decomposed as above.
Example 12: Find the integral.
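For denominators with distinct linear roots, the constants can also be found by the cover-up (Heaviside) rule: each constant is the numerator evaluated at the corresponding root, divided by the product of the differences to the other roots. Here is a small sketch of that computation, assuming all roots are distinct; the helper is illustrative only and not part of the lecture notes.

-- Partial fraction constants for p(x) / ((x - r1)(x - r2)...(x - rn)) with distinct roots:
--   p(x) / prod_i (x - r_i) = sum_i A_i / (x - r_i),  where A_i = p(r_i) / prod_{j /= i} (r_i - r_j)
coverUp :: (Double -> Double) -> [Double] -> [Double]
coverUp p roots = [ p r / product [ r - s | s <- roots, s /= r ] | r <- roots ]

-- Example: (x + 1) / ((x - 2)(x - 3)) = -3/(x - 2) + 4/(x - 3)
-- coverUp (\x -> x + 1) [2, 3]  ==>  [-3.0, 4.0]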
{"url":"https://doclecture.net/1-4191.html","timestamp":"2024-11-02T02:13:53Z","content_type":"text/html","content_length":"13778","record_id":"<urn:uuid:450fc7b4-a54e-4b10-af55-ef7f339ecd7b>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00006.warc.gz"}
So I’m training to be a mathematician. More specifically, I’m currently a PhD student in mathematics. I really, really love maths, but I think the way it’s taught in schools is really, really bad. You’re learning about things like calculus or trigonometry, but rarely are you learning about the really cool or mind-blowing mathematical facts. And that makes a bit of pedagogical sense. I could tell you a math fact, but unless you have that base of mathematical knowledge, you wouldn’t know why it’s true. Still, I always loved it when my teachers took a day out to teach us about something weird and fascinating. Learning about trigonometry or calculus can be fun, but it’s mostly like learning how to use a screwdriver. Important if you’re learning how to build a car, but if you’ve never seen a car before in your life the screwdriver will seem pointless. So I wanted to talk about some of my favorite math facts, theorems and objects here. I won’t go into much detail about why any of this stuff is true, but I will talk about why it’s really cool. And I hope you get some appreciation for the weird and wonderful mathematics that’s out there. If you want higher quality math content, some of my favorite math youtube channels include Stand-up Maths, 3Blue1Brown and Numberphile. Do channels with millions of subscribers need a signal boost from me? No, but eh, it’s my void. Anyways, onto some fun math facts.
{"url":"https://tertiary.neocities.org/mathfacts","timestamp":"2024-11-10T21:58:02Z","content_type":"text/html","content_length":"4160","record_id":"<urn:uuid:b5e24934-2529-48b2-b02d-9e36cadfc4e0>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00527.warc.gz"}
One to One Function Worksheet
Mathworksheetsgo.com is now a part of Mathwarehouse.com. All of your worksheets are now here on Mathwarehouse.com. Please update your bookmarks! Students will practice classifying relations (both graphs, equations and sets of ordered pairs) as a function, a one to one function, or neither.
Example Questions
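For the ordered-pairs case mentioned above, the classification can even be checked mechanically. A small illustrative sketch (hypothetical helpers, not part of the worksheet itself):

import Data.List (nub)

-- A relation given as ordered pairs is a function if no input maps to two
-- different outputs, and one-to-one if additionally no two inputs share an output.
distinct :: Eq a => [a] -> Bool
distinct xs = length (nub xs) == length xs

isFunction, isOneToOne :: (Eq a, Eq b) => [(a, b)] -> Bool
isFunction r = distinct (map fst (nub r))
isOneToOne r = isFunction r && distinct (map snd (nub r))

-- isFunction [(1,2),(2,2)]  ==> True,   isOneToOne [(1,2),(2,2)]  ==> False
-- isFunction [(1,2),(1,3)]  ==> False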
{"url":"https://www.mathwarehouse.com/sheets/algebra-2/functions-and-relations/1-to-1-function-worksheet1.php","timestamp":"2024-11-06T05:35:45Z","content_type":"text/html","content_length":"43904","record_id":"<urn:uuid:29f08f7d-9af6-48ef-aed1-7dc7ea2324ea>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00329.warc.gz"}
Betting odds

Betting someone that X is true or happens, at odds of x-to-y with a stake of S, means that if you lose (X is false or does not happen), you pay the stake. If you win (X is true or happens), you keep the stake and, in addition, win an amount determined by the odds. Taking such a bet from someone means the reverse: if X is false or does not happen, you collect the stake; if X actually is true or does happen, you lose and pay out.

The above implies the following for the person betting: their expected profit from the bet depends on the probability they assign to X. There is a neutral probability, derived from the odds, at which that expected profit is exactly zero; below it the bet is unprofitable in expectation, above it profitable. The same holds, with the roles reversed, for the person taking the bet. In other words, if both sides of the bet believe in X at the neutral probability derived from the betting odds (i.e. they’re betting at the stated odds), then neither stands to make a profit (in expectation). This is what Eliezer Yudkowsky means when he says:

Taking a bet at 99-1 does not mean I think the probability is 1%. It means I think the probability is enough less than 1% that I stand to make a profit even after adverse selection is taken into account. That is how betting is supposed to work. When good rationalists are virtuously betting with each other over a disagreement, neither should feel that they are betting at the true odds. If two people both think they are betting at the true odds, they must agree on the odds, so they must agree on the probability, so they must agree on the credibility, so what on Earth are they betting about?

Example 1
We take an example from Noah Smith’s “Bets do not (necessarily) reveal beliefs”. First, DeLong gives Smith 50-to-1 odds that inflation would go over 5%. Let the stake for Smith be S. This means that if inflation goes over 5%, Smith wins and collects 50 times the stake. If inflation stays under 5%, Smith loses and loses the stake S. Next, Smith gives Chovanec 25-to-1 odds that inflation would stay under 5%. This means that if inflation goes over 5%, Smith loses and must pay 25 times Chovanec’s stake. On the other hand, if inflation stays under 5%, Smith wins and wins Chovanec’s stake. (Perhaps it’s easier to see this looking from Chovanec’s view.) With both stakes equal to S, this means that, overall, if inflation goes over 5%, then Smith gets 50S - 25S = 25S, or “25 pizza dinner equivalents”, since S was the cost of a pizza dinner. On the other hand, if inflation stays under 5%, then Smith gets -S + S = 0, or breaks even.

Example 2
Alex Tabarrok in “A Bet is a Tax on Bullshit” gives the example of Nate Silver betting on the outcome of the presidential election. Tabarrok says:

A properly structured bet is the most credible guarantor of rigorous disinterest. In order to prove his point, Silver is not required to take the Obama side of the bet! At the odds implied by his model (currently between 3 and 4 to 1) Silver should be willing to take either side of a modest bet. Indeed, we could hold a coin toss, heads Silver takes the Obama side, tails he takes Romney. In fact, the NYTimes should require that Silver, and other pundits, bet their beliefs. Furthermore, to remove any possibility of manipulation, the NYTimes should escrow a portion of Silver’s salary in a blind trust bet.
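Since the page’s own formulas are not reproduced above, here is a hedged sketch of the expected-profit calculation under one common convention, namely that betting at x-to-y odds with stake s means you lose s if you lose and collect (x/y)·s if you win. The page’s elided formulas may use a different convention, so treat this only as an illustration of the neutral-probability idea.

-- Expected profit of the bettor, who assigns probability p to X,
-- under the x-to-y / stake-s convention described in the lead-in.
expectedProfit :: Double -> Double -> Double -> Double -> Double
expectedProfit x y s p = p * (x / y) * s - (1 - p) * s

-- The neutral probability at which the bet has zero expected profit.
neutralProb :: Double -> Double -> Double
neutralProb x y = y / (x + y)

-- expectedProfit 50 1 20 (neutralProb 50 1)  ==>  0.0 (up to rounding)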
In other words, the NYTimes should bet a portion of Silver’s salary, at the odds implied by Silver’s model, randomly choosing which side of the bet to take, only revealing to Silver the bet and its outcome after the election is over. A blind trust bet creates incentives for Silver to be disinterested in the outcome but very interested in the accuracy of the forecast.

Suppose Silver thinks Obama will win with odds of 3-to-1, and suppose he’s willing to stake some amount S. Now, we can make a tree diagram of all the possibilities:

              coin toss
             /         \
        .5  /           \  .5
     Obama side       Romney side
       /    \           /    \
   O wins  R wins   O wins  R wins

So the expected value is:
{"url":"https://issarice.com/betting-odds","timestamp":"2024-11-06T07:12:55Z","content_type":"application/xhtml+xml","content_length":"72624","record_id":"<urn:uuid:2434fb4d-9239-4154-8ec7-5ca9486ac476>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00825.warc.gz"}
October 2

Simpler, Easier!

In a recent paper, Simply Easy! (An Implementation of a Dependently Typed Lambda Calculus), the authors argue that type checking a dependently typed language is easy. I agree whole-heartedly, it doesn't have to be difficult at all. But I don't think the paper presents the easiest way to do it. So here is my take on how to write a simple dependent type checker. (There's nothing new here, and the authors of the paper are undoubtedly familiar with all of it.)

I'll start by implementing the untyped lambda calculus. It's a very simple language with just three constructs: variables, applications, and lambda expressions, i.e., e ::= x | e e | λx.e. For example, (λx.λy.x) (λz.z). In Haskell I'll use strings to represent variable names; it's simple and easy.

type Sym = String

data Expr
    = Var Sym
    | App Expr Expr
    | Lam Sym Expr
    deriving (Eq, Read, Show)

The example above is represented by App (Lam "x" $ Lam "y" $ Var "x") (Lam "z" $ Var "z").

What do we want to do with this type? Well, evaluating an expression seems like the thing we need. Now, there are many degrees of evaluation to choose from, Weak Head Normal Form, Head Normal Form, Normal Form, etc., etc. They differ in exactly where there might be reducible expressions lingering. To evaluate a lambda expression the most important step is β-reduction. A β-reduction step can be performed anywhere a function meets an argument, i.e., an application where the function is in λ form, a.k.a. a redex:

(λx.e) a reduces to e[x:=a]

where the e[x:=a] notation means that all (free) occurrences of the variable x in the expression e are replaced by a. The example above has one redex, and doing a β step yields λy.λz.z. The other kind of reduction we will make use of is α-conversion, which is simply renaming a bound variable, e.g., λx.x can be changed to λy.y.

Let's start with an easy evaluation strategy, normal order to WHNF. In WHNF we only need to make sure that there's no redex along the "spine" of the expression, i.e., starting from the root and following the left branch in applications. Doing normal order reduction means that we do not evaluate anything inside the argument of a β redex before doing the reduction. It's sometimes called lazy evaluation, but I prefer to use that term for an implementation strategy for normal order reduction.

We implement normal order WHNF by walking down the spine collecting arguments (i.e., the right branch of applications) until we reach a lambda or a variable. If we reach a variable we already have WHNF so we just reconstitute the applications again. If we hit a lambda we get to the crux of evaluation. We need to perform a β-reduction, i.e., if we have App (Lam v e) a we need to replace all (free) occurrences of the variable v by the argument a inside the lambda body e. That's what the subst function does.

whnf :: Expr -> Expr
whnf ee = spine ee []
  where spine (App f a) as = spine f (a:as)
        spine (Lam s e) (a:as) = spine (subst s a e) as
        spine f as = foldl App f as

The subst function is the only tricky part, so let's relax by first defining something easy, namely getting the free variables from an expression. The free variables are those variables that occur in an expression, but are not bound in it. We simply collect the variables in a set (using a list as a set here), removing anything bound. (The union and \\ functions come from Data.List.)

freeVars :: Expr -> [Sym]
freeVars (Var s) = [s]
freeVars (App f a) = freeVars f `union` freeVars a
freeVars (Lam i e) = freeVars e \\ [i]

Back to substitution.
subst :: Sym -> Expr -> Expr -> Expr
subst v x b = sub b
  where sub e@(Var i) = if i == v then x else e
        sub (App f a) = App (sub f) (sub a)
        sub (Lam i e) =
            if v == i then
                Lam i e
            else if i `elem` fvx then
                let i' = cloneSym e i
                    e' = substVar i i' e
                in  Lam i' (sub e')
            else
                Lam i (sub e)
        fvx = freeVars x
        cloneSym e i = loop i
          where loop i' = if i' `elem` vars then loop (i' ++ "'") else i'
                vars = fvx ++ freeVars e

substVar :: Sym -> Sym -> Expr -> Expr
substVar s s' e = subst s (Var s') e

The subst function replaces all free occurrences of v by x in b, i.e., it computes b[v:=x]. The Var case is easy: if it's the variable we are replacing then replace, else leave it alone. The App case is also easy, just recurse in both branches. The Lam case has three alternatives. First, if the bound variable is the same as the one we are replacing then there can be no free occurrences inside it, so just return the lambda as is. Second, if the lambda bound variable is among the free variables in x we have a problem (see below). Third case, just recurse in the body.

So, what about the case when the lambda bound variable occurs free in x? Well, if we just blindly continue substitution the variable will no longer refer to the same thing; it will refer to the variable bound in the lambda. That's no good. For example, take the expression (λx.λy.x) y: the β reduction gives λy'.y (or similar), whereas doing the substitution blindly would give λy.y. Which is wrong! But it's easy to fix, just conjure up a new variable that will not clash with anything (cloneSym does that). How do we come up with a good variable? Well, we don't want to pick one that is free in x, because that would lead to the same capture problem again. Nor do we want to pick a variable that is free in e, because the renaming could then accidentally bind something in e. So we take the original identifier and tack on "'" until it fulfills our requirements. (OK, efficiency aficionados are allowed to complain a little here, but this isn't that bad actually.) Once we have a new variable we can do an α-conversion to rename the offending variable to something better. The substVar function is a utility for when we want to replace one variable with another.

Another useful thing to be able to do is to compare lambda expressions for equality. We already have syntactic equality from the derived Eq instance, but it is also very useful to be able to compare expressions modulo α-conversion. That is, we'd like to compare λx.x equal to λy.y. Let's call that function alphaEq.

alphaEq :: Expr -> Expr -> Bool
alphaEq (Var v) (Var v') = v == v'
alphaEq (App f a) (App f' a') = alphaEq f f' && alphaEq a a'
alphaEq (Lam s e) (Lam s' e') = alphaEq e (substVar s' s e')
alphaEq _ _ = False

Variables and applications just proceed along the structure of the expression. When we hit a lambda the variables might be different, so we do an α-conversion of the second expression to make them equal.

As the final functions, we will do reduction to Normal Form (i.e., to a form where no redexes remain) and comparison of expressions via their normal forms.

nf :: Expr -> Expr
nf ee = spine ee []
  where spine (App f a) as = spine f (a:as)
        spine (Lam s e) [] = Lam s (nf e)
        spine (Lam s e) (a:as) = spine (subst s a e) as
        spine f as = app f as
        app f as = foldl App f (map nf as)

betaEq :: Expr -> Expr -> Bool
betaEq e1 e2 = alphaEq (nf e1) (nf e2)

Computing the NF is similar to WHNF, but as we reconstruct expressions we make sure that all subexpressions are in NF as well. Note that both nf and betaEq may fail to terminate because not all expressions have a normal form.
The canonical non-terminating example is (λx.x x) (λx.x x), which has one redex, and doing the reduction produces the same term again. But if an expression has a normal form then it is unique (the Church-Rosser theorem).

Here are some sample lambda terms (named for convenience):

zero  ≡ λs.λz.z
one   ≡ λs.λz.s z
two   ≡ λs.λz.s (s z)
three ≡ λs.λz.s (s (s z))
plus  ≡ λm.λn.λs.λz.m s (n s z)

Or, in Haskell

[z, s, m, n] = map (Var . (:[])) "zsmn"
app2 f x y = App (App f x) y
zero  = Lam "s" $ Lam "z" z
one   = Lam "s" $ Lam "z" $ App s z
two   = Lam "s" $ Lam "z" $ App s $ App s z
three = Lam "s" $ Lam "z" $ App s $ App s $ App s z
plus  = Lam "m" $ Lam "n" $ Lam "s" $ Lam "z" $ app2 m s (app2 n s z)

And now we can check that addition works: betaEq (app2 plus one two) three will produce True.

To do type checking we need to introduce types. A very simple system is the simply typed lambda calculus. It has one (or more) base type (think of it as Bool or Int if you want an example) and function types. In Haskell terms:

data Type = Base | Arrow Type Type
    deriving (Eq, Read, Show)

The expressions themselves will have an explicit type on the bound variable in a lambda expression. So we now have e ::= x | e e | λx:t.e. For example, λx:Base.x. The Haskell type for expressions is

data Expr
    = Var Sym
    | App Expr Expr
    | Lam Sym Type Expr
    deriving (Eq, Read, Show)

The only difference is the Type on the bound variable in Lam. All the functions we had for the untyped lambda calculus can be trivially extended to the simply typed one by simply carrying the type along.

So finally, time for some type checking. The type checker will take an expression and return the type of the expression. The type checker will also need the types of all free variables in the expression to be able to do this. Otherwise, what type would it assign to, say, the expression consisting of just the variable x? To represent the types of the free variables we use an environment which is simply a list of variables and their types.

newtype Env = Env [(Sym, Type)]
    deriving (Show)

initalEnv :: Env
initalEnv = Env []

extend :: Sym -> Type -> Env -> Env
extend s t (Env r) = Env ((s, t) : r)

Type checking can go wrong; there can be type errors. To cater for this the type checker will be written in monadic style where the monad is simply an error (exception) monad. The error messages are strings, and the monad itself is Either. So TC is the type checking monad.

type ErrorMsg = String
type TC a = Either ErrorMsg a

We can now write variable lookup.

findVar :: Env -> Sym -> TC Type
findVar (Env r) s =
    case lookup s r of
        Just t -> return t
        Nothing -> throwError $ "Cannot find variable " ++ s

It simply looks up the variable and returns the type. If not found it throws an error with throwError. (The throwError function comes from Control.Monad.Except, which makes Either ErrorMsg an instance of MonadError; when, used below, comes from Control.Monad.)

And then the type checker itself.

tCheck :: Env -> Expr -> TC Type
tCheck r (Var s) = findVar r s
tCheck r (App f a) = do
    tf <- tCheck r f
    case tf of
        Arrow at rt -> do
            ta <- tCheck r a
            when (ta /= at) $ throwError "Bad function argument type"
            return rt
        _ -> throwError "Non-function in application"
tCheck r (Lam s t e) = do
    let r' = extend s t r
    te <- tCheck r' e
    return $ Arrow t te

For variables, just look up the type in the environment. For application, type check the function part and the argument part. The function should have a function (arrow) type, and if it does the type of the application is the return type of the function. Finally, for a lambda expression we extend the environment with the bound variable. We then check the body, and the type of the lambda expression is a function type from the argument type to the type of the body.
For convenience:

typeCheck :: Expr -> Type
typeCheck e =
    case tCheck initalEnv e of
        Left msg -> error ("Type error:\n" ++ msg)
        Right t -> t

Pretty easy sailing so far.

The simply typed lambda calculus is a pain to use. Take something like the identity function, λx.x, in the untyped world. What type should we give it? Well, that depends on how we intend to use it. Maybe Base→Base, maybe (Base→Base)→(Base→Base), maybe ... So we can no longer have one identity function; we need one for each type. What a bummer! It's as bad as C. BTW, all (type correct) expressions in the simply typed lambda calculus have a normal form (Tait 1967). (Don't get me wrong, the polymorphic lambda calculus is a work of marvel.)

So how can we fix the problem with one identity function for each type? We can add polymorphism! We can extend the expression language so that we also pass types around; we add type abstraction and type application. Λa.e is a type abstraction, i.e., a is a type variable which we can use in type expressions inside e. To supply a type argument we have type application, e [t]. So the types we now have would be functions, base types, and type variables. And what is the Kind in the TLam below? Well, now types have gotten so complicated that it is possible to construct types that make no sense, so we need a "type system" for the types. We call them kinds. Defining all this in Haskell would be something like

data Expr
    = Var Sym
    | App Expr Expr
    | Lam Sym Type Expr
    | TLam Sym Kind Expr
    | TApp Expr Type
    deriving (Eq, Read, Show)

data Type
    = Arrow Type Type
    | Base
    | TVar Sym
    deriving (Eq, Read, Show)

data Kind
    = KArrow Kind Kind
    | Star
    deriving (Eq, Read, Show)

But wait, there's an awful lot of duplication here. The structures on the three levels have a lot of similarities. (Oh, and we don't really need Base anymore now when we have type variables.) BTW, this system, called System F, is (a simplified version of) what GHC uses internally to represent Haskell code. It's a beautiful system, really. Oh, the identity function, well it would be Λa.λx:a.x. And using it: (Λa.λx:a.x) [t] y, assuming y has type t.

To simplify and (as often happens when you simplify) generalize the expressions above we are going to squish them all into one expression data type. So Type will join Expr, as will Kind. But wait, there's nothing corresponding to the Arrow type. We need to add something. We could just add it as it is, but we won't. TADA, enter dependent types. Instead of the boring function type t→u we will use a more exciting one, the dependent function type Πx:t.u. What does it mean? It means that the variable x can occur in u. If it doesn't then it's simply the same as the old fashioned function type t→u. If x does occur it means that the type u can depend on the value of the argument (x). In Haskell:

data Expr
    = Var Sym
    | App Expr Expr
    | Lam Sym Type Expr
    | Pi Sym Type Type
    | Kind Kinds
    deriving (Eq, Read, Show)

type Type = Expr

data Kinds = Star | Box
    deriving (Eq, Read, Show)

The new arrow type is called Pi. We will also need more than one kind, hence Star and Box. It's pretty easy to extend the functions from the first part to handle this expression type. There's just a few more places to recurse. Here's the code again. Absolutely nothing subtle about it.
freeVars :: Expr -> [Sym]
freeVars (Var s) = [s]
freeVars (App f a) = freeVars f `union` freeVars a
freeVars (Lam i t e) = freeVars t `union` (freeVars e \\ [i])
freeVars (Pi i k t) = freeVars k `union` (freeVars t \\ [i])
freeVars (Kind _) = []

subst :: Sym -> Expr -> Expr -> Expr
subst v x = sub
  where sub e@(Var i) = if i == v then x else e
        sub (App f a) = App (sub f) (sub a)
        sub (Lam i t e) = abstr Lam i t e
        sub (Pi i t e) = abstr Pi i t e
        sub (Kind k) = Kind k
        fvx = freeVars x
        cloneSym e i = loop i
          where loop i' = if i' `elem` vars then loop (i' ++ "'") else i'
                vars = fvx ++ freeVars e
        abstr con i t e =
            if v == i then
                con i (sub t) e
            else if i `elem` fvx then
                let i' = cloneSym e i
                    e' = substVar i i' e
                in  con i' (sub t) (sub e')
            else
                con i (sub t) (sub e)

To cut down on the code you could actually join the constructors since they are treated identically in many cases. I've left them separate for clarity. The whnf function extends in the natural way to the new type, and so does nf, but here it is anyway.

nf :: Expr -> Expr
nf ee = spine ee []
  where spine (App f a) as = spine f (a:as)
        spine (Lam s t e) [] = Lam s (nf t) (nf e)
        spine (Lam s _ e) (a:as) = spine (subst s a e) as
        spine (Pi s k t) as = app (Pi s (nf k) (nf t)) as
        spine f as = app f as
        app f as = foldl App f (map nf as)

So, now for the meaty part, the type checking itself. The handling of the environment is just as before, so we'll just look at the different cases of the type checking.

tCheck :: Env -> Expr -> TC Type
tCheck r (Var s) = findVar r s

Just as before.

tCheck r (App f a) = do
    tf <- tCheckRed r f
    case tf of
        Pi x at rt -> do
            ta <- tCheck r a
            when (not (betaEq ta at)) $ throwError $ "Bad function argument type"
            return $ subst x a rt
        _ -> throwError $ "Non-function in application"

This is almost as before, but the arrow type is called Pi now. (Here tCheckRed is tCheck followed by whnf on the resulting type, so that it can be pattern matched against Pi.) The key thing here — and this is really where the fact that we are doing dependent types shows up — is the return type. For the simply typed lambda calculus it was just rt, but now rt can contain free occurrences of the variable x. Since we are returning rt, the variable x would no longer be in scope, so we substitute the value of the argument for it. This is the coolest part of the type checker. You've seen it. That's where it is. Since types can now be arbitrary expressions we use betaEq to compare them instead of ==.

tCheck r (Lam s t e) = do
    tCheck r t
    let r' = extend s t r
    te <- tCheck r' e
    let lt = Pi s t te
    tCheck r lt
    return lt

The lambda case is similar to before, but we return a Pi now, so we need to include the variable name. Furthermore, to avoid nonsense we make sure that the type we want to return actually has a valid kind itself. (The first call to tCheck is to ensure the type we're putting into the environment is valid; I'm sure there's a more elegant way to do this, but I can't remember what it is right now.)

tCheck _ (Kind Star) = return $ Kind Box
tCheck _ (Kind Box) = throwError "Found a Box"

Everything has a type, so what's the type of * (Kind Star)? Well, it's a [] (Kind Box) (excuse the ugly box, I can't find the HTML version of a box). And what's the type of []? Well, you could keep going, but instead we'll stop right here. The idea is that the source language which we'll write our terms in will not allow the box to be written, so it should never occur.

tCheck r (Pi x a b) = do
    s <- tCheckRed r a
    let r' = extend x a r
    t <- tCheckRed r' b
    when ((s, t) `notElem` allowedKinds) $ throwError "Bad abstraction"
    return t

How do we check the type of the (dependent) function type?
Well, we check the type of the thing to the left of the arrow, extend the environment, and then check the thing to the right. So what can (s, t) be? Well, a and b should be types (or maybe kinds), so their types, s and t, should be kinds. This leads to the following definition:

allowedKinds :: [(Type, Type)]
allowedKinds =
    [ (Kind Star, Kind Star)
    , (Kind Star, Kind Box)
    , (Kind Box, Kind Star)
    , (Kind Box, Kind Box) ]

I.e., we allow (*,*), (*,[]), ([],*), and ([],[]). What does it all mean? Here's the beauty of the lambda cube. By varying what we allow we can change what system we type check:

(*,*)   values can depend on values. Just this gives the simply typed λ calculus.
([],[]) types can depend on types.
([],*)  values can depend on types. Include all these three and you get F[ω].
(*,[])  types can depend on values. Include this one to get dependent types.

With all four combinations allowed you get the Calculus of Constructions (CoC). If you always include (*,*), but make a choice of the other 3, you get 8 choices; these are the corners of the lambda cube. All of these systems have been studied. BTW, all the systems in the lambda cube have the property that a well typed expression has a normal form. (Well, the proof of this is so complicated for some of these systems that some people kinda doubt it.)

Here the syntax a→b is shorthand for (_:a→b), where "_" is some new variable not used in b.

Identity
id ≡ λa:*.λx:a.x, which has the type (a:*→(a→a))

Pairs
Pair ≡ λa:*.λb:*.(c:*→((a→b→c)→c))
pair ≡ λa:*.λb:*.λx:a.λy:b.λc:*.λf:(a→b→c).f x y
split ≡ λa:*.λb:*.λr:*.λf:(a→b→r).λp:(Pair a b).p r f
fst ≡ λa:*.λb:*.λp:(Pair a b).split a b a (λx:a.λy:b.x) p
snd ≡ λa:*.λb:*.λp:(Pair a b).split a b b (λx:a.λy:b.y) p

My fingers are numb from all these greek characters, so I'll continue with examples another time. And, of course, a parser and pretty printer.

Labels: Dependent types, Haskell, Lambda calculus
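As a quick sanity check of the type checker (assuming all the definitions above are in one module), the polymorphic identity from the examples can be written with the Expr constructors and type checked directly; the expected result shown below was obtained by tracing the tCheck cases above.

-- λa:*.λx:a.x as an Expr
identity :: Expr
identity = Lam "a" (Kind Star) (Lam "x" (Var "a") (Var "x"))

-- typeCheck identity
--   ==> Pi "a" (Kind Star) (Pi "x" (Var "a") (Var "a"))
-- i.e. the dependent function type (a:* → (a → a)), as expected.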
{"url":"https://augustss.blogspot.com/2007/10/","timestamp":"2024-11-15T00:43:43Z","content_type":"application/xhtml+xml","content_length":"41073","record_id":"<urn:uuid:0b533428-978e-42d3-ac64-d63eb4a64a95>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00191.warc.gz"}
Simulation of Stochastic Processes by Sinc Basis Functions and Application in TELM Analysis

This study serves three purposes: (i) to review a synthesis formula for simulation of band-limited stochastic processes based on the sinc expansion; (ii) to implement this synthesis formula in the tail-equivalent linearization method (TELM); and (iii) to demonstrate increased computational efficiency when the sinc expansion is implemented in this context. The proposed representation enables the reduction and control of the number of random variables used in the simulation of band-limited stochastic processes. This is of great importance for gradient-based reliability methods, including TELM, for which the computational cost is proportional to the total number of random variables. A direct application of the representation is used in TELM analysis. Examples of single- and multiple-degree-of-freedom nonlinear systems subjected to Gaussian band-limited white noise simulated by use of the sinc expansion are presented. The accuracy and efficiency of the representation are compared with those of the current time-domain discretization method. The analysis concludes by shedding light on the specific cases for which the introduced reduction technique is beneficial.

... For non-stationary excitations, the gradient forms a time-dependent curve, which is a set but not a subspace. See Der Kiureghian (2000) and Broccardo and Der Kiureghian (2018) for more geometrical insights. ...

This paper introduces the evolutionary tail-equivalent linearization method (ETELM) for stochastic dynamic analysis of nonlinear structures subjected to evolutionary input. A sequence of evolutionary tail-equivalent linear systems (ETELS) is defined by linearizing the response function in the space of discretized excitation for a selected threshold and at a sequence of time points. The ETELSs are used to compute the response statistics of interest, including the first-passage probability, in first-order approximation. Numerical examples demonstrate the accuracy of the proposed method.

... By virtue of this equivalence, TELS approximates the exceedance probability of the nonlinear system to the first order. TELM was later extended to processes discretized in the frequency-domain [17], and further generalized to a broad range of basis functions [18][19][20]. In [21] TELM was applied to spatially correlated excitations for the analysis of multi-support structures, whereas in [22] TELM is extended with the Secant Hyperplane Method, leading to the so-called Tail Probability Equivalent Method. ...

This paper estimates extreme midship wave bending moment of a ship in a specific sea state using the First Order Reliability Method (FORM) and the Tail Equivalent Linearization Method (TELM). FORM and TELM are applied to the random vibration analysis of the nonlinear bending moment, and results are compared and verified against experimental data obtained from model tests as well as numerical simulations. A case study of an LNG tanker is presented, demonstrating that the proposed methodology yields reasonable results and that the TELM approach provides a fast and computationally efficient way of analyzing nonlinear vertical bending moment of ships in stationary sea states. ...
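The synthesis formula itself is not reproduced in the abstract above, but the general idea of a sinc (Shannon) expansion of band-limited Gaussian white noise can be sketched as follows. This is a generic construction stated here for orientation only, not necessarily the paper's exact formulation: for a two-sided PSD level s0 over |f| ≤ w (in Hz), samples taken at the Nyquist spacing 1/(2w) are independent N(0, 2·s0·w), and the process is recovered by sinc interpolation of those samples.

-- sinc z = sin(pi z) / (pi z), with sinc 0 = 1
sinc :: Double -> Double
sinc 0 = 1
sinc z = sin (pi * z) / (pi * z)

-- x(t) = sqrt(2*s0*w) * sum_k u_k * sinc(2*w*t - k), truncated to a finite
-- list of standard normal coefficients us (the random variables whose number
-- the paper seeks to reduce and control).
sincProcess :: Double -> Double -> [Double] -> Double -> Double
sincProcess s0 w us t =
  sqrt (2 * s0 * w)
    * sum [ u * sinc (2 * w * t - fromIntegral k) | (k, u) <- zip [0 :: Int ..] us ]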
For non-stationary excitations, the gradient forms a time-dependent curve, which is a set but not a subspace. See Der Kiureghian (2000) and Broccardo and Der Kiureghian (2018) for more geometrical insights. The response of the ETELS is ... This study introduces the evolutionary tail-equivalent linearization method (ETELM) for nonlinear stochastic dynamic analysis. The method builds on the recently developed tail-equivalent linearization method (TELM) and it is designed for the class of evolutionary processes. The original TELM employs a tail-equivalent linear system (TELS) by equating the tail probability of a linear system response for a specified threshold to the first-order approximation of the tail probability of the nonlinear system response. For stationary problems, the TELS is time-independent and only one linear system needs to be defined for the specified threshold. However, for a transient input, the TELS is time dependent and an evolutionary tail-equivalent linear system (ETELS) must be defined to study the entire transient response. Algorithms are developed to determine a discrete-time ETELS based on a sequence of linearization points, and a continuous-time approximation based on Priestley’s evolutionary theory. The linearized evolutionary system is used to compute the response statistics of interest, including the first-passage probability, in first-order approximation. Numerical examples demonstrate the accuracy and limitations of the proposed method. ... in which s(t) and u are column vectors of deterministic basis functions (eg, Dirac Delta functions, linear filters, 14 and sinc function 40,41 in the time domain or sine and cosine in the frequency domain 18,27 ) and independent standard normal random variables, respectively. Particularly, in time-domain representation, u may represent the intensity of random pulses at the discretized time points while s(t) describes the linear filter(s) through which the pulses pass. ... Gaussian mixture–based equivalent linearization method (GM‐ELM) is a recently developed stochastic dynamic analysis approach which approximates the random response of a nonlinear structure by collective responses of equivalent linear oscillators. The Gaussian mixture model is employed to achieve an equivalence in terms of the probability density function (PDF) through the superposition of the response PDFs of the equivalent linear system. This new concept of linearization helps achieve a high level of estimation accuracy for nonlinear responses, but has revealed some limitations: (1) dependency of the equivalent linear systems on ground motion intensity and (2) requirements for stationary condition. To overcome these technical challenges and promote applications of GM‐ELM to earthquake engineering practice, an efficient GM‐ELM‐based fragility analysis method is proposed for nonstationary excitations. To this end, this paper develops the concept of universal equivalent linear system that can estimate the stochastic responses for a range of seismic intensities through an intensity‐augmented version of GM‐ELM. Moreover, the GM‐ELM framework is extended to identify equivalent linear oscillators that could capture the temporal average behavior of nonstationary responses. The proposed extensions generalize expressions and philosophies of the existing response combination formulations of GM‐ELM to facilitate efficient fragility analysis for nonstationary excitations. 
The proposed methods are demonstrated by numerical examples using realistic ground motions, including design code–conforming nonstationary ground motions. The identification of patterns and underlying characteristics of natural or engineering time-varying phenomena poses a challenging task, especially in the scope of simulation models and accompanying stochastic models. Because of their complex nature, time-varying processes such as wind speed, seismic ground motion, or vibrations of machinery in the presence of degradation oftentimes lack a closed-form description of their underlying Evolutionary Power Spectral Density (EPSD) function. To overcome this issue, a wide range of measurements exist for these types of processes. This opens up the path to a data-driven stochastic representation of EPSD functions. Rather than solely relying on time-frequency transform methods like the familiar short-time Fourier transform or wavelet transform for EPSD estimation, a probabilistic representation of the EPSD can provide valuable insights into the epistemic uncertainty associated with these processes. To address this problem, the evolutionary EPSD function is relaxed based on multiple similar data to account for these uncertainties and to provide a realistic representation of the time data in the time-frequency domain. This results is the so-called Relaxed Evolutionary Power Spectral Density (REPSD) function, which serves as a modular probabilistic representation of the time-frequency content of stochastic signals. For this purpose, truncated normal distributions and kernel den-1 REVISED Manuscript (Clean version in word or latex Format) sity estimates are used to determine a probability density function for each time-frequency component. The REPSD function enables the sampling of individual EPSD functions, facilitating their direct application to the simulation model through stochastic simulation techniques like Monte Carlo simulation or other advanced methods. Even though the accuracy is highly dependant on the data available and the time-frequency transformation method used, the REPSD representation offers a stochastic representation of characteristics used to describe stochastic signals and can reduce epistemic uncertainty during the modelling of such time-varying processes. The method is illustrated by numerical examples involving the analysis of dynamic behaviour under random loads. The results show that the method can be successfully employed to account for uncertainties in the estimation of the EPSD function and represent the accuracy of the time-frequency transformation used. In this study, a method for predicting the extreme value distribution of the Vertical Bending Moment (VBM) in a flexible ship under a given short-term sea state is presented. The First Order Reliability Method (FORM) is introduced to evaluate the Probability of Exceedances (PoEs) of extreme VBM levels. The Karhunen-Loeve (KL) representation of stochastic ocean wave is adopted in lieu of the normal wave representation using the trigonometric components, by introducing the Prolate Spheroidal Wave Functions (PSWFs) to formulate the wave elevations. By this means, reduction of the number of stochastic variables to reproduce ocean wave is expected, which in turn the number of computations required during FORM based prediction phases is significantly reduced. 
In this study, the Reduced Order Model (ROM), which was developed in our previous studies, is used to yield the time-domain VBMs along with the hydroelastic (whipping) component in a ship. Two different short-term sea states, moderate and severe ones, are assumed. The FORM based predictions using PSWF for normal wave-induced VBM are then validated by comparing with those using the normal trigonometric wave representation and Monte Carlo Simulations (MCSs). Through a series of numerical demonstrations, the computational efficiency of the FORM based prediction using PSWF is presented. Then, the validation is extended to the severe sea state where the whipping vibration contributes to the extreme VBM level to a large degree, and finally the conclusions are given. This chapter aims to provide a general prospective of the Tail Equivalent Linearization Method, TELM, by offering a review that starts with the original idea and covers a broad array of developments, including a selection of the most recent developments. The TELM is a linearization method that uses the first-order reliability method (FORM) to define a tail-equivalent linear system (TELS) and estimate the tail of the response distribution for nonlinear systems under stochastic inputs. In comparison with conventional linearization methods, TELM has a superior accuracy in estimating the response distribution in the tail regions; therefore, it is suitable for high reliability problems. Moreover, TELM is a non-parametric method and it does not require the Gaussian assumption of the response. The TELS is numerically defined by a discretized impulse-response function (IRF) or frequency-response function (FRF), thus allowing higher flexibility in linearizing nonlinear structural systems. The first part of the chapter focuses on the original idea inspiring TELM. The second part offers fourth developments of the method, which were studied by the authors of this chapter. These developments include: TELM in frequency domain, TELM with sinc expansion formula, TELM for multi-supported structures, and the secant hyperplane method giving rise to an improved TELM. This dissertation provides the foundation for an in-depth understanding and significant development of the tail-equivalent linearization method (TELM) to solve different classes of nonlinear random vibration problems. The TELM is a linearization method that uses the first-order reliability method (FORM) to define a tail-equivalent linear system (TELS) and to estimate the tail of the response distribution for nonlinear systems under stochastic inputs. The method was originally developed in the time domain for inelastic systems. It was later extended in the frequency domain for a specific class of nonlinear excitations, while the frequency domain version for inelastic systems is covered in the present work. This dissertation mathematically formalizes and extends TELM analysis with different types of discretization of the input process. A general formulation for discrete representation of a Gaussian band-limited, white-noise process is introduced, which employs the sum of deterministic and orthogonal basis functions weighted by random coefficients. The selection of the basis functions completely defines the two types of discretizations used in the earlier works. Specifically, a train of equally spaced time delta-Dirac functions leads to the current time-domain discretization, while harmonic functions with equally spaced frequencies lead to the current frequency-domain discretization. 
We show that other types of orthogonal basis functions can be used with advantage to represent a Gaussian band-limited white noise and in particular we employ sinc basis functions, which are at the base of the Whittaker-Shannon interpolation formula. We demonstrate that this representation is suitable for reducing the total number of random variables that are necessary to describe the process, since it decouples the computational-time discretization from the band-limit of the process. Next, the dissertation tackles the problem of a nonlinear system subjected to multi- component excitations by defining an augmented standard normal space composed of all the random variables that define the multiple components of the excitation. The tail-equivalent linearization and definition of the TELS is taken in this new space. Once the augmented TELS is defined, response statistics of interest are determined by linear random vibration analysis by superposition of responses due to each component of the excitation. The method is numerically examined for an asymmetric structure with varying eccentricity and subjected to two statistically independent components of excitation. Several practical problems require analysis for non-stationary excitations. For this important class of problems the original TELM requires linearization for a series of points in time to study the evolution of response statistics. This procedure turns out to be computationally onerous. As an approximate alternative, we propose the evolutionary TELM, ETELM. In particular, we adopt the concepts of the evolutionary process theory, to de- fine an evolutionary TELS, ETELS. The ETELS approximately estimates the continuous time evolution of the design point by only one TELM analysis. This is the essence of its efficiency compared to the standard TELM analysis. Among response statistics of interest, the first-passage probability represents the most important one for this class of problems. This statistic is efficiently computed by using the Au-Beck important sampling algorithm, which requires knowledge of the evolving design points, in conjunction with the ETELS. The method is successfully tested for five types of excitation: (I) uniformly modulated white noise, (II) uniformly modulated broad-band excitation, (III) uniformly modulated narrow- band excitation, (IV) time- and frequency-modulated broad-band excitation, and (V) time- and frequency-modulated narrow-band excitation. The tail-equivalent linearization method (TELM) is a recently developed computational method to solve nonlinear stochastic dynamic problems by the first-order reliability method (FORM). TELM employs a tail-equivalent linear system (TELS) by equating the tail probability of a linear system to the first-order approximation of the tail probability of the nonlinear system. For stationary problems, the TELS is time- independent and only one linear system needs to be defined to study the statistics of the response. However, for a transient input, the TELS is time-dependent. Thus, TELSs for different time points must be defined to study the non-stationary response. Since each TELS is obtained from the solution of an optimization problem, the computational cost required to solve the non-stationary problem can be prohibitive. This paper tackles the class of non-stationary problems described via evolutionary power spectral density by defining an evolutionary TELS (ETELS) in place of a series of point-in-time TELSs. An example shows the accuracy and effectiveness of the method. 
After a brief review of time- and frequency-domain tail-equivalent linearization methods (TELM) for uniform excitation problems, this paper extends TELM for application to nonlinear systems subjected to multisupport seismic excitations. The spatial variability of the ground motion is represented by a coherency function that characterizes the incoherence, wave-passage, and site-response effects. It is found that for multisupport excitation problems, it is most convenient to formulate TELM by using the ground displacement as input. The resulting tail-equivalent linear system (TELS) is defined by frequency-response functions relating the response quantity of interest to each support displacement. A method to reduce the number of random variables in the TELM analysis is introduced. The proposed method is demonstrated through numerical examples with varying structural properties and ground motion coherency in order to investigate various aspects of TELM and the major influences of differential support motions on a nonlinear system. In the analysis of structural reliability, often a sequence of design points associated with a set of thresholds are sought in order to determine the tail distribution of a response quantity. In this paper, after a brief review of methods for determining the design point, an inverse reliability method named the λ-method is introduced for efficiently determining the sequence of design points. The λ-method uses Broyden's "good" method to solve a set of nonlinear simultaneous equations to find the design points for the values of an implicitly defined threshold that is associated with parameter λ. In a special parameter setting, the λ parameter equals the reliability index, thus allowing convenient implementation of the method. Three numerical examples illustrate the accuracy and efficiency of the proposed method. A versatile, nonstationary stochastic ground-motion model accounting for the time variation of both intensity and frequency content typical of real earthquake ground motions is formulated and validated. An extension of the Thomson's spectrum estimation method is used to adaptively estimate the evolutionary power spectral density (PSD) function of the target ground acceleration record. The parameters of this continuous-time, analytical, stochastic earthquake model are determined by least-square fitting the analytical evolutionary PSD function of the model to the target evolutionary PSD function estimated. As application examples, the proposed model is applied to two actual earthquake records. In each case, model validation is obtained by comparing the second-order statistics of several traditional ground-motion parameters and the probabilistic linear-elastic response spectra simulated using the earthquake model with their deterministic counterparts characterizing the target Three methods of stochastic equivalent linearizations defined in the broad framework of structural reliability analysis are presented. These methods are (1) the Gaussian equivalent linearization method (GELM), here defined for the first time as a linear response surface in terms of normal standard random variables; (2) the tail equivalent linearization method (TELM), here reinterpreted as a stochastic critical excitation method; and (3) a novel equivalent linearization called the tail probability equivalent linearization method (TPELM). 
The Gaussian equivalent linear system (GELS) is the equivalent linear system (ELS) obtained by minimizing the difference between the variance of the GELS and the original nonlinear system. The tail equivalent linear system (TELS) is the ELS having the same critical excitation as the original system. The tail probability equivalent linear system (TPELS) is the ELS obtained by minimizing the difference between the tail probability of the equivalent system and the original nonlinear system. The knowledge of the ELS allows the evaluation of engineering quantities of interest—e.g., first-passage probabilities—through the application of the random vibration analysis to these systems. Shortcomings and advantages of the three methods are presented and illustrated through applications to selected representative nonlinear oscillators. Finally, the methods are applied to an inelastic multi-degree-of-freedom (MDOF) system, showing their scalability to systems of higher complexity. The dynamic analysis of a deepwater floating production systems has many complexities, such as the dynamic coupling between the vessel and the riser, the coupling between the first-order and second-order wave forces, several sources of nonlinearities. These complexities can be captured by fully coupled time domain analyses. Moreover, the sea state is random; hence the need of stochastic dynamic analysis. In this paper the evaluation of the non-Gaussian distributions of the responses of the systems is developed through the well-known First-Order Reliability Method (FORM) and the recently proposed Secant Hyperplane Method (SHM). They give rise to two stochastic Equivalent Linear Systems (ELS) allowing to determine any quantity of engineering interest: the TailEquivalent Linear System (TELS) based on FORM and the Tail Probability Equivalent Linear System (TPELS) based on SHM. The TELS is the Equivalent Linear System (ELS) having the same design point of the original nonlinear system. The Tail Probability Equivalent Linear System (TPELS) is the ELS where the difference in terms of tail probability between the TPELS and the original system is minimized. A simplified 2-degrees-of freedom model is used to demonstrate how these methods can be effective for stochastic dynamic analysis of a marine riser. This chapter aims to provide a general prospective of the Tail Equivalent Linearization Method, TELM, by offering a review that starts with the original idea and covers a broad array of developments, including a selection of the most recent developments. The TELM is a linearization method that uses the first-order reliability method (FORM) to define a tail-equivalent linear system (TELS) and estimate the tail of the response distribution for nonlinear systems under stochastic inputs. In comparison with conventional linearization methods, TELM has a superior accuracy in estimating the response distribution in the tail regions; therefore, it is suitable for high reliability problems. Moreover, TELM is a non-parametric method and it does not require the Gaussian assumption of the response. The TELS is numerically defined by a discretized impulse-response function (IRF) or frequency-response function (FRF), thus allowing higher flexibility in linearizing nonlinear structural systems. The first part of the chapter focuses on the original idea inspiring TELM. The second part offers fourth developments of the method, which were studied by the authors of this chapter. 
These developments include: TELM in frequency domain, TELM with sinc expansion formula, TELM for multi-supported structures, and the secant hyperplane method giving rise to an improved TELM. A parameterized stochastic model of near-fault ground motion in two orthogonal horizontal directions is developed. The major characteristics of recorded near-fault ground motions are represented. These include near-fault effects of directivity and fling step; temporal and spectral non-stationarity; intensity, duration, and frequency content characteristics; directionality of components; and the natural variability of ground motions. Not all near-fault ground motions contain a forward directivity pulse, even when the conditions for such a pulse are favorable. The proposed model accounts for both pulse-like and non-pulse-like cases. The model is fitted to recorded near-fault ground motions by matching important characteristics, thus generating an ‘observed’ set of model parameters for different earthquake source and site characteristics. A method to generate and post-process synthetic motions for specified model parameters is also presented. Synthetic ground motion time series are generated using fitted parameter values. They are compared with corresponding recorded motions to validate the proposed model and simulation procedure. The use of synthetic motions in addition to or in place of recorded motions is desirable in performance-based earthquake engineering applications, particularly when recorded motions are scarce or when they are unavailable for a specified design scenario. Copyright The method of equivalent linearization of Kryloff and Bogliubov is generalized to the case of nonlinear dynamicsystems with random excitation. The method is applied to a variety of problems, and the results compared with exact solutions of the Fokker?Planck equation for those cases where the Fokker?Planck technique may be applied. Alternate approaches to the problem are discussed, including the characteristic function method of Rice. This paper extends the Tail-Equivalent Linearization Method (TELM) to the case of a nonlinear mechanical system subjected to multiple stochastic excitations. Following the original formulation, the method employs a discrete representation of the stochastic inputs and the first-order reliability method (FORM). Each component of the Gaussian excitation is expressed as a linear function of standard normal random variables. For a specified response threshold of the nonlinear system, the Tail-Equivalent Linear System (TELS) is defined in the standard normal space by matching the design points of the equivalent linear and nonlinear systems. This leads to the identification of the TELS in terms of a frequency-response function or, equivalently, an impulse-response function relative to each component of the input excitation. The method is demonstrated through its application to an asymmetric, one-story building with hysteretic behavior and subjected to bi-component ground motion. The degree of asymmetry is controlled by the eccentricity of the center of stiffness with respect to the center of mass. The correlation between the probability of failure and the degree of asymmetry is studied in detail. The statistics of the response for stationary excitation obtained by TELM are in close agreement with Monte Carlo simulation results. Three different algorithms are presented for simulating a time series which is compatible with a given power spectrum of ocean waves. 
The first algorithm generates the current value of the time series as the sum of a linear combination of its past values and a white noise deviation. The second algorithm produces the values of the time series as a linear combination of white noise deviation. The third algorithm is a combination of the first and second algorithms. These algorithms are applied to the Pierson-Moskowitz (P-M) spectrum, exclusively. The third algorithm is associated with simple analogue filter approximations of the P-M spectrum. The advantages and disadvantages of each of the three algorithms are discussed in context with their applicability to offshore engineering problems. (A) A new approach for first-order reliability analysis of structures with material parameters modeled as random fields is presented. The random field is represented by a series of orthogonal functions, and is incorporated directly in the finite-element formulation and first-order reliability analysis. This method avoids the difficulty of selecting a suitable mesh for discretizing the random field. A general continuous orthogonal series expansion of the random field is derived, and its relationship with the Karhunen-Loeve expansion used in recent stochastic finite- element studies is examined. The method is illustrated for a fixed-end beam with bending rigidity modeled as a random field. A set of Legendre polynomials is used as the orthogonal base to represent the random field. Two types of correlation models are considered. The Karhunen-Loeve expansion leads to a lower truncation error than does the Legendre expansion for a given number of terms, but one or two additional terms in the Legendre expansion yields almost the same results and avoids some of the computational difficulties associated with the use of the Karhunen-Loeve expansion. A new method for efficient discretization of random fields (i.e., their representation in terms of random variables) is introduced. The efficiency of the discretization is measured by the number of random variables required to represent the field with a specified level of accuracy. The method is based on principles of optimal linear estimation theory. It represents the field as a linear function of nodal random variables and a set of shape functions, which are determined by minimizing an error variance. Further efficiency is achieved by spectral decomposition of the nodal covariance matrix. The new method is found to be more efficient than other existing discretization methods, and more practical than a series expansion method employing the Karhunen-Loeve theorem. The method is particularly useful for stochastic finite element studies involving random media, where there is a need to reduce the number of random variables so that the amount of required computations can be Most of currently used algorithms for numerical generation of sea wave records which are compatible with a specified power spectrum are based on the superposition of several harmonic waves. An alternative method of simulation is presented. The basis of the method is the Linear Prediction Theory (LPT) which has been extensively used in processing digital data in other technical fields. Specifically, records of sea waves which are compatible with the target spectrum are obtained as the output of a recursive digital filter to a white noise input. A procedure for determining the filter parameters is discussed. Several numerical examples are presented. 
This is a book for engineers about an approximation method for the design of structures under random actions such as wind, waves, and earthquakes. This method, statistical linearization, consists in replacing the mechanical equation of motion of the structure, which is usually nonlinear, by a linear one chosen in such a way that it minimizes the difference from the nonlinear term in the quadratic-mean sense. The method is not rigorous, because this quadratic mean is taken with respect to the probability law governing the linearized response, the only one which is computable, instead of that of the true response of the structure. It would be a mistake to believe that this point disqualifies the method; on the contrary, it is a very convenient and useful tool during the first step of the design process (predimensioning), provided that the user is well warned about its field of application. This is precisely what the authors do, quite clearly, in the first chapter, which discusses the place of the method among the available procedures for dealing with nonlinear responses of structures to stochastic inputs. The second chapter is a presentation of the mechanical arguments yielding the equations of motion of structures. An interesting and well organized survey of the different types of nonlinearities is proposed, with a discussion of the most convenient mathematical representations, including a detailed treatment of hysteresis. The two following chapters give an elementary account of probability theory and second-order stationary processes. The explanation of the statistical linearization method begins in chapter 5 with the case of single-degree-of-freedom systems. Numerous examples are given, corresponding to the mechanical classification of nonlinearities stated previously. The multivariate case, the most usual for engineers, is then detailed, with an emphasis on effective procedures for obtaining the linearized equations numerically. The case where the coefficients of the linearized equation depend on time makes it possible to treat nonstationary systems. A whole chapter is devoted to the case of systems with hysteretic nonlinearity. The book ends with an analysis of the accuracy of the method, obtained by comparison with exact analytical solutions of the nonlinear equations when that is possible, or with results of Monte Carlo simulations. This is quite enlightening as to the size of the errors, which generally remain small. The question of relaxing the Gaussian hypotheses and extending the method by other closure techniques, by using explicitly solvable nonlinear equations, or by direct optimization on paths of the response, is the subject of chapter 9. When discussing the closure methods based on expansion in Hermite polynomials or on truncation of the sequence of cumulants, it would have been worth noting that the positivity of the density is not preserved in general by these procedures, which sometimes yield nonpositive expectations of positive quantities. This book, which is easy to read and well written, is not only a reference book for engineers but also quite motivating reading for mathematicians interested in improving the rigor of, and finding effective bounds for, the asymptotic expansions related to this method or its extensions. The subject of this paper is the simulation of one-dimensional, uni-variate, stationary, Gaussian stochastic processes using the spectral representation method.
Following this methodology, sample functions of the stochastic process can be generated with great computational efficiency using a cosine series formula. These sample functions accurately reflect the prescribed probabilistic characteristics of the stochastic process when the number N of the terms in the cosine series is large. The ensemble-averaged power spectral density or autocorrelation function approaches the corresponding target function as the sample size increases. In addition, the generated sample functions possess ergodic characteristics in the sense that the temporally-averaged mean value and the autocorrelation function are identical with the corresponding targets, when the averaging takes place over the fundamental period of the cosine series. The most important property of the simulated stochastic process is that it is asymptotically Gaussian as N approaches infinity. Another attractive feature of the method is that the cosine series formula can be numerically computed efficiently using the Fast Fourier Transform technique. The main area of application of this method is the Monte Carlo solution of stochastic problems in engineering mechanics and structural engineering. Specifically, the method has been applied to problems involving random loading (random vibration theory) and random material and geometric properties (response variability due to system stochasticity). The aim of the paper is to advocate effective stochastic procedures, based on the First Order Reliability Method (FORM) and Monte Carlo simulations (MCS), for extreme value predictions related to wave and wind-induced loads. Due to the efficient optimization procedures implemented in standard FORM codes and the short duration of the time domain simulations needed (typically 60–300 s to cover the hydro- and aerodynamic memory effects in the response), the calculation of the mean out-crossing rates of a given response is fast. Thus non-linear effects can be included. Furthermore, the FORM analysis also identifies the most probable wave episodes leading to given responses. Because of the linearization of the failure surface in the FORM procedure the results are only asymptotically exact, and thus MCS often also needs to be performed. In the present paper a scaling property inherent in the FORM procedure is investigated for use in MCS in order to reduce the necessary simulation time. Thereby uniform accuracy for all exceedance levels can be achieved by a modest computational effort, even for complex non-linear models. As an example, extreme responses of a floating offshore wind turbine are analyzed taking into consideration both stochastic wave and wind-induced loads. This book is designed for use as a text for graduate courses in random vibrations or stochastic structural dynamics, such as might be offered in departments of civil engineering, mechanical engineering, aerospace engineering, ocean engineering, and applied mechanics. It is also intended for use as a reference for graduate students and practicing engineers with a similar level of preparation. The focus is on the determination of response levels for dynamical systems excited by forces that can be modeled as stochastic processes. The choice of prerequisites, as well as the demands of brevity, sometimes makes it necessary to omit mathematical proofs of results. We do always try to give mathematically rigorous definitions and results even when mathematical details are omitted. This approach is particularly important for the reader who wishes to pursue further study.
An important part of the book is the inclusion of a number of worked examples that illustrate the modeling of physical problems as well as the proper application of theoretical solutions. Similar problems are also presented as exercises to be solved by the reader. The subject of this paper is the simulation of multi-dimensional, homogeneous, Gaussian stochastic fields using the spectral representation method. Following this methodology, sample functions of the stochastic field can be generated using a cosine series formula. These sample functions accurately reflect the prescribed probabilistic characteristics of the stochastic field when the number of terms in the cosine series is large. The ensemble-averaged power spectral density or autocorrelation function approaches the corresponding target function as the sample size increases. In addition, the generated sample functions possess ergodic characteristics in the sense that the spatially-averaged mean value, autocorrelation function and power spectral density function are identical with the corresponding targets, when the averaging takes place over the multi-dimensional domain associated with the fundamental period of the cosine series. Another property of the simulated stochastic field is that it is asymptotically Gaussian as the number of terms in the cosine series approaches infinity. The most important feature of the method is that the cosine series formula can be numerically computed very efficiently using the Fast Fourier Transform technique. The main area of application of this method is the Monte Carlo solution of stochastic problems in structural engineering, engineering mechanics and physics. Specifically, the method has been applied to problems involving random loading (random vibration theory) and random material and geometric properties (response variability due to system stochasticity). Two models are considered for generating samples of stationary band-limited Gaussian processes. Both models are based on the spectral representation method and consist of a superposition of n harmonics. The harmonics of the first model have random phase and amplitude, while the harmonics of the second model have only random phase. It is shown that the two models are equal in the second-moment sense. However, the second model has stronger ergodic properties than the first. On the other hand, the first model is a Gaussian process for any value of n, while the second is only asymptotically Gaussian as n approaches infinity. It is demonstrated that the rejection of the first model because of its weak ergodic property, or of the second model because of its non-Gaussian distribution, is not generally justified. One special case in which the second model should not be used is that of Gaussian processes with power concentrated at a few discrete frequencies. Based on a Markov-vector formulation and a Galerkin solution procedure, a new method of modeling and solution of a large class of hysteretic systems (softening or hardening, narrow or wide-band) under random excitation is proposed. The excitation is modeled as a filtered Gaussian shot noise allowing one to take the nonstationarity and spectral content of the excitation into consideration. The solutions include time histories of joint density, moments of all order, and threshold crossing rate; for the stationary case, autocorrelation, spectral density, and first passage time probability are also obtained. Comparison of results of a numerical example with Monte-Carlo solutions indicates that the proposed method is a powerful and efficient tool.
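The cosine-series (spectral representation) formula referred to in several of the abstracts above can be sketched in a few lines of JavaScript. The target spectral density used here is an arbitrary flat placeholder, and the function and parameter names are assumptions made for illustration only; none of this is code from the cited papers. The spectrum is interpreted in the usual spectral-representation convention (a two-sided density evaluated at non-negative frequencies).

```js
// X(t) = sqrt(2) * sum_k A_k cos(w_k t + phi_k), with A_k = sqrt(2 * S(w_k) * dw)
// and phi_k independent uniform phases; the result is asymptotically Gaussian as N grows.
function simulateGaussianProcess(targetPsd, cutoff, N, times) {
  const dw = cutoff / N; // frequency step
  const amps = [];
  const phases = [];
  for (let k = 0; k < N; k++) {
    const wk = k * dw;
    amps.push(Math.sqrt(2 * targetPsd(wk) * dw));
    phases.push(2 * Math.PI * Math.random());
  }
  return times.map((t) => {
    let x = 0;
    for (let k = 0; k < N; k++) x += amps[k] * Math.cos(k * dw * t + phases[k]);
    return Math.SQRT2 * x;
  });
}

// Example with a flat, band-limited placeholder spectrum sampled at 1000 time points:
// const sample = simulateGaussianProcess(() => 1.0, 4 * Math.PI, 512, Array.from({ length: 1000 }, (_, i) => i * 0.01));
```

Averaging the sample autocorrelation over the fundamental period of the cosine series, or over many independent samples, should reproduce the target second-order statistics, which is the ergodicity property emphasized in the abstracts.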
Summary: We develop an approach to the spectral analysis of non‐stationary processes which is based on the concept of “evolutionary spectra”; that is, spectral functions which are time dependent, and have a physical interpretation as local energy distributions over frequency. It is shown that the notion of evolutionary spectra generalizes the usual definition of spectra for stationary processes, and that, under certain conditions, the evolutionary spectrum at each instant of time may be estimated from a single realization of a process. By such means it is possible to study processes with continuously changing “spectral patterns”. The Karhunen-Loeve, spectral, and sampling representations, referred to as the KL, SP, and SA representations, are defined and some features/limitations of KL-, SP-, and SA-based approximations commonly used in applications are stated. Three test applications are used to evaluate these approximate representations. The test applications include (1) models for non-Gaussian processes; (2) Monte Carlo algorithms for generating samples of Gaussian and non-Gaussian processes; and (3) approximate solutions for random vibration problems with deterministic and uncertain system parameters. Conditions are established for the convergence of the solutions of some random vibration problems corresponding to KL, SP, and SA approximate representations of the input to these problems. It is also shown that the KL and SP representations coincide for weakly stationary processes. In this paper, a simulation methodology is proposed to generate sample functions of a stationary, non-Gaussian stochastic process with prescribed spectral density function and prescribed marginal probability distribution. The proposed methodology is a modified version of the Yamazaki and Shinozuka iterative algorithm, which has certain difficulties matching the prescribed marginal probability distribution. Although these difficulties are usually sufficiently small when simulating non-Gaussian stochastic processes with slightly skewed marginal probability distributions, they become more pronounced for highly skewed probability distributions (especially at the tails of such distributions). Two major modifications are introduced in the original Yamazaki and Shinozuka iterative algorithm to ensure a practically perfect match of the prescribed marginal probability distribution regardless of the skewness of the distribution considered. First, since the underlying "Gaussian" stochastic process from which the desired non-Gaussian process is obtained as a translation process becomes non-Gaussian after the first iteration, the empirical (non-Gaussian) marginal probability distribution of the underlying stochastic process is calculated at each iteration. This empirical non-Gaussian distribution is then used, instead of the Gaussian one, to perform the nonlinear mapping of the underlying stochastic process to the desired non-Gaussian process. This modification ensures that at the end of the iterative scheme every generated non-Gaussian sample function will have the exact prescribed non-Gaussian marginal probability distribution. Second, before the start of the iterative scheme, a procedure named "spectral preconditioning" is carried out to check the compatibility between the prescribed spectral density function and the prescribed marginal probability distribution.
If these two quantities are found to be incompatible, then the spectral density function can be slightly modified to make it compatible with the prescribed marginal probability distribution. Finally, numerical examples (including a stochastic process with a highly skewed marginal probability distribution) are provided to demonstrate the capabilities of the proposed algorithm. Probability density functions, mean crossing rates, and other descriptors are developed for the quasi-static and dynamic responses of offshore platforms to wave forces. It is assumed that offshore platforms can be modeled by simple oscillators, the wave particle velocity is a stationary differentiable Gaussian process, Morison's equation can be applied, and wave forces are perfectly correlated spatially. Results show that both the quasi-static response and the dynamic response of offshore platforms to wave forces are generally underestimated if the drag force is linearized. Estimates are developed for probabilistic characteristics of these responses based on the crossing theory of random processes and time-discretization of the wave force process. Simulation studies indicate that these estimates are satisfactory. An extension of the Tail-Equivalent Linearization Method (TELM) to the frequency domain is presented. The extension defines the Tail-Equivalent Linear System in terms of its frequency-response function. This function is obtained by matching the design point of the nonlinear response with that of the linearized response, thus guaranteeing the equivalence of the tail probability of the latter and the first-order approximation of the tail probability of the nonlinear response. The proposed approach is particularly suitable when the input and response processes are stationary, as is usually the case in the analysis of marine structures. When linear waves are considered, the Tail-Equivalent Linear System possesses a number of important properties, such as the capability to account for multi-support excitations and invariance with respect to scaling of the excitation. The latter property significantly enhances the computational efficiency of TELM for analysis with variable sea states. Additionally, the frequency-response function of the Tail-Equivalent Linear System offers insights into the geometry of random vibrations discretized in the frequency domain and into the physical nature of the response process. The proposed approach is applied to the analysis of point-in-time and first-passage statistics of the random sway displacement of a simplified jack-up rig model. The geometry of random vibration problems in the space of standard normal random variables obtained from discretization of the input process is investigated. For linear systems subjected to Gaussian excitation, the problems of interest are characterized by simple geometric forms, such as vectors, planes, half spaces, wedges and ellipsoids. For non-Gaussian responses, the problems of interest are generally characterized by non-linear geometric forms. Approximate solutions for such problems are obtained by use of the first- and second-order reliability methods (FORM and SORM). This article offers a new outlook on random vibration problems and an approximate method for their solution. Examples involving response to non-Gaussian excitation and out-crossing of a vector process from a non-linear domain are used to demonstrate the approach.
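For the linear, Gaussian case mentioned above, where the limit-state surface is a plane in the standard normal space, the design point, the reliability index, and the tail probability have closed forms, and the first-order result is exact. The sketch below only illustrates that geometry; the function names and the polynomial approximation of the standard normal CDF (an Abramowitz-Stegun style formula) are my own choices, not code from the cited papers.

```js
// Response Y = a . u, with u a vector of independent standard normal variables.
// The surface {Y = y} is a plane at distance beta = y / |a| from the origin,
// the design point is the closest point on that plane, and P(Y > y) = Phi(-beta).
function linearForm(a, y) {
  const norm = Math.sqrt(a.reduce((s, v) => s + v * v, 0));
  const beta = y / norm;                               // reliability index
  const uStar = a.map((v) => (y / (norm * norm)) * v); // design point
  return { beta, uStar, pf: stdNormCdf(-beta) };       // tail probability (exact in the linear case)
}

// Abramowitz-Stegun style approximation of the standard normal CDF
function stdNormCdf(x) {
  const t = 1 / (1 + 0.2316419 * Math.abs(x));
  const d = Math.exp((-x * x) / 2) / Math.sqrt(2 * Math.PI);
  const p = d * t * (0.31938153 + t * (-0.356563782 + t * (1.781477937 + t * (-1.821255978 + t * 1.330274429))));
  return x >= 0 ? 1 - p : p;
}

// Example: console.log(linearForm([1, 2, 2], 6)); // beta = 2, pf ~ 0.0228
```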
A gradient-free method is developed for finding the design point in nonlinear stochastic dynamic analysis, where the input excitation is discretized into a large number of random variables. This point defines the realization of the excitation that is most likely to give rise to a specific response threshold at a given time. The design point is the essential information in the recently developed tail-equivalent linearization method. The proposed approach employs a variant of the model correction factor method developed by O. Ditlevsen, which is further improved by the use of a novel response surface technique. Example applications to single- and multi-degree-of-freedom hysteretic systems demonstrate the efficiency and accuracy of the method. This book addresses random vibration of mechanical and structural systems commonly encountered in aerospace, mechanical, and civil engineering. Techniques are examined for determining probabilistic characteristics of the response of dynamic systems subjected to random loads or inputs and for calculating probabilities related to system performance or reliability. Emphasis is given to ... A method is presented for simulating arrays of spatially varying ground motions, incorporating the effects of incoherence, wave passage, and differential site response. Non-stationarity is accounted for by considering the motions as consisting of stationary segments. Two approaches are developed. In the first, simulated motions are consistent with the power spectral densities of a segmented recorded motion and are characterized by uniform variability at all locations. Uniform variability in the array of ground motions is essential when synthetic motions are used for statistical analysis of the response of multiply-supported structures. In the second approach, simulated motions are conditioned on the segmented record itself and exhibit increasing variance with distance from the site of the observation. For both approaches, example simulated motions are presented for an existing bridge model employing two alternatives for modeling the local soil response: i) idealizing each soil-column as a single-degree-of-freedom oscillator, and ii) employing the theory of vertical wave propagation in a single soil layer over bedrock. The selection of parameters in the simulation procedure and their effects on the characteristics of the generated motions are discussed. The method is validated by comparing statistical characteristics of the synthetic motions with target theoretical models. Response spectra of the simulated motions at each support are also examined. Copyright © 2011 John Wiley & Sons, Ltd. A new approach for computing seismic fragility curves for nonlinear structures for use in performance-based earthquake engineering analysis is proposed. The approach makes use of a recently developed method for nonlinear stochastic dynamic analysis by tail-equivalent linearization. The ground motion is modeled as a discretized stochastic process with a set of parameters that characterizes its evolutionary intensity and frequency content. For each selected response (seismic demand) threshold, a linear system is defined that has the same tail probability as the nonlinear response in first-order approximation. Simple linear random vibration analysis with the tail-equivalent linear system then yields the fragility curve.
The approach has the advantage of avoiding the selection and scaling of recorded accelerograms and repeated time-history analyses, which is the current practice for developing fragility curves. Copyright © 2009 John Wiley & Sons, Ltd. A fully nonstationary stochastic model for strong earthquake ground motion is developed. The model employs filtering of a discretized white-noise process. Nonstationarity is achieved by modulating the intensity and varying the filter properties in time. The formulation has the important advantage of separating the temporal and spectral nonstationary characteristics of the process, thereby allowing flexibility and ease in modeling and parameter estimation. The model is fitted to target ground motions by matching a set of statistical characteristics, including the mean-square intensity, the cumulative mean number of zero-level up-crossings and a measure of the bandwidth, all expressed as functions of time. Post-processing by a second filter assures zero residual velocity and displacement, and improves the match to response spectral ordinates for long periods. Copyright © 2008 John Wiley & Sons, Ltd. A random process can be represented as a series expansion involving a complete set of deterministic functions with corresponding random coefficients. Karhunen–Loeve (K–L) series expansion is based on the eigen-decomposition of the covariance function. Its applicability as a simulation tool for both stationary and non-stationary Gaussian random processes is examined numerically in this paper. The study is based on five common covariance models. The convergence and accuracy of the K–L expansion are investigated by comparing the second-order statistics of the simulated random process with that of the target process. It is shown that the factors affecting convergence are: (a) ratio of the length of the process over correlation parameter, (b) form of the covariance function, and (c) method of solving for the eigen-solutions of the covariance function (namely, analytical or numerical). Comparison with the established and commonly used spectral representation method is made. K–L expansion has an edge over the spectral method for highly correlated processes. For long stationary processes, the spectral method is generally more efficient as the K–L expansion method requires substantial computational effort to solve the integral equation. The main advantage of the K–L expansion method is that it can be easily generalized to simulate non-stationary processes with little additional effort. Copyright © 2001 John Wiley & Sons, Ltd. A method for generating a suite of synthetic ground motion time-histories for specified earthquake and site characteristics defining a design scenario is presented. The method employs a parameterized stochastic model that is based on a modulated, filtered white-noise process. The model parameters characterize the evolving intensity, predominant frequency, and bandwidth of the acceleration time-history, and can be identified by matching the statistics of the model to the statistics of a target-recorded accelerogram. Sample ‘observations’ of the parameters are obtained by fitting the model to a subset of the NGA database for far-field strong ground motion records on firm ground. Using this sample, predictive equations are developed for the model parameters in terms of the faulting mechanism, earthquake magnitude, source-to-site distance, and the site shear-wave velocity. 
For any specified set of these earthquake and site characteristics, sets of the model parameters are generated, which are in turn used in the stochastic model to generate the ensemble of synthetic ground motions. The resulting synthetic acceleration as well as corresponding velocity and displacement time-histories capture the main features of real earthquake ground motions, including the intensity, duration, spectral content, and peak values. Furthermore, the statistics of their resulting elastic response spectra closely agree with both the median and the variability of response spectra of recorded ground motions, as reflected in the existing prediction equations based on the NGA database. The proposed method can be used in seismic design and analysis in conjunction with or instead of recorded ground motions. Copyright © 2010 John Wiley & Sons, Ltd. In on-board decision support systems, efficient procedures are needed for real-time estimation of the maximum ship responses to be expected within the next few hours, given online information on the sea state and user-defined ranges of possible headings and speeds. For linear responses, standard frequency domain methods can be applied. For non-linear responses, as exhibited by the roll motion, standard methods such as direct time domain simulations are not feasible due to the required computational time. However, the statistical distribution of non-linear ship responses can be estimated very accurately using the first-order reliability method (FORM), which is well known from structural reliability problems. To illustrate the proposed procedure, the roll motion was modelled by a simplified non-linear procedure taking into account non-linear hydrodynamic damping, time-varying restoring and wave excitation moments, and the heave acceleration. Resonance excitation, parametric roll, and forced roll were all included in the model, albeit with some simplifications. The result is the mean out-crossing rate of the roll angle together with the most probable wave scenarios (critical wave episodes), leading to user-specified specific maximum roll angles. The procedure is computationally very effective and can thus be applied to real-time determination of ship-specific combinations of heading and speed to be avoided in the actual sea state. A new, non-parametric linearization method for nonlinear random vibration analysis is developed. The method employs a discrete representation of the stochastic excitation and concepts from the first-order reliability method, FORM. For a specified response threshold of the nonlinear system, the equivalent linear system is defined by matching the “design points” of the linear and nonlinear responses in the space of the standard normal random variables obtained from the discretization of the excitation. Due to this definition, the tail probability of the linear system is equal to the first-order approximation of the tail probability of the nonlinear system, this property motivating the name Tail-Equivalent Linearization Method (TELM). It is shown that the equivalent linear system is uniquely determined in terms of its impulse response function in a non-parametric form from the knowledge of the design point. 
The paper examines the influences of various parameters on the tail-equivalent linear system, presents an algorithm for finding the needed sequence of design points, and describes methods for determining various statistics of the nonlinear response, such as the probability distribution, the mean level-crossing rate and the first-passage probability. Applications to single- and multi-degree-of-freedom, non-degrading hysteretic systems illustrate various features of the method, and comparisons with results obtained by Monte Carlo simulations and by the conventional equivalent linearization method (ELM) demonstrate the superior accuracy of TELM over ELM, particularly for high response thresholds. It has been shown in recent years that certain non-linear random vibration problems can be solved by well-established methods of time-invariant structural reliability, such as FORM and importance sampling. A key step in this approach is finding the design-point excitation, which is that realization of the input process that is most likely to give rise to the event of interest. It is shown in this paper that for a non-linear, elastic single-degree-of-freedom oscillator subjected to white noise, the design-point excitation is identical to the excitation that generates the mirror image of the free-vibration response when the oscillator is released from a target threshold. This allows determining the design-point excitation with a single non-linear dynamic analysis. With a slight modification, this idea is extended to non-white and non-stationary excitations and to hysteretic oscillators. In these cases, an approximate solution of the design-point excitation is obtained, which, if necessary, can be used as a ‘warm’ starting point to find the exact design point using an iterative optimization algorithm. The paper also offers a simple method for computing the mean out-crossing rate of a response process. Several examples are provided to demonstrate the application and accuracy of the proposed methods. The methods proposed in this paper enhance the feasibility of approximately solving non-linear random vibration problems by use of time-invariant structural reliability techniques. An efficient procedure for the derivation of mean outcrossing rates for non-linear wave-induced responses in stationary sea states is presented and applied to an analysis of the horizontal deck sway of a jack-up unit. The procedure is based on the theory of random vibrations and uses the first order reliability method (FORM) to estimate the most probable set of wave components in the ocean wave system that will lead to exceedance of a specific response level, together with the associated mean outcrossing rate. The procedure bears some resemblance to the Constrained NewWave methodology, but is conceptually simpler and makes efficient use of the optimisation procedures implemented in standard FORM software codes. Due to the fast calculation procedure the analysis can be carried out taking into account all relevant non-linear effects. Specifically, the present analysis accounts for second order stochastic waves, not previously included in the analysis of jack-up units in stochastic ... This paper is concerned with the time domain simulation of the second order motions of a moored vessel when the random seastate is represented as a sum of harmonic components. It is known that in these circumstances successive runs of a simulation program produce different results for the statistical moments of the response.
Here, the variation of the first four statistical moments of the response over an ensemble of program runs is investigated, leading to an assessment of the likely accuracy of these quantities as predicted by a limited number of program runs. Also, it is shown that an approximate simulation method which uses deterministic wave amplitudes and random phase angles does not correctly predict the fourth moment of the response. An efficient algorithm to simulate turbulent, atmospheric or wind tunnel generated wind fields is devised. The method is based on a model of the spectral tensor for atmospheric surface-layer turbulence at high wind speeds and can simulate two- or three-dimensional fields of one, two or three components of the wind velocity fluctuations. The spectral tensor is compared with and adjusted to several spectral models commonly used in wind engineering. Compared to the Sandia method (see Veers, P. S., Three-dimensional wind simulation. Technical Report SAND88-0152, Sandia National Laboratories, 1988) the algorithm is more efficient, simpler to implement, and in some respects more physical. The simulation method is currently used for load calculations on wind turbines and ... In this paper we present a review of stochastic process models proposed for the simulation of seismic ground motion. The models reviewed include those based on filtered white noise processes, filtered Poisson processes, spectral representation of stochastic processes, and finally those based on stochastic wave theory. Mathematical expressions are provided for all models along with comments on their usefulness, advantages and disadvantages. Together with the review of auto-regressive moving-average models by F. Kozin in this PEM review series on earthquake engineering (June issue), this paper represents an overview of stochastic models of earthquake ground motion, which is hopefully of some use to researchers as well as practitioners. Several optimization algorithms are evaluated for application in structural reliability, where the minimum distance from the origin to the limit-state surface in the standard normal space is required. The objective is to determine the suitability of the algorithms for application to linear and nonlinear finite element reliability problems. After a brief review, five methods are compared through four numerical examples. Comparison criteria are the generality, robustness, efficiency, and capacity of each method.
{"url":"https://www.researchgate.net/publication/317904705_Simulation_of_Stochastic_Processes_by_Sinc_Basis_Functions_and_Application_in_TELM_Analysis","timestamp":"2024-11-11T20:51:03Z","content_type":"text/html","content_length":"674677","record_id":"<urn:uuid:7b08b5da-a9f3-4d6b-8c56-21ab6128aa88>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00010.warc.gz"}
How to draw any regular shape with just one JavaScript function | MDN Blog

How to draw any regular shape with just one JavaScript function

Ok alright, I know I used a clickbait title, but bear with me. I want to share a function I've been using for ages. I originally wrote it to draw a hexagon – hexagons are cool, lots of hexagons are better, and tessellated hexagons are the best. So I wrote a function to draw one, which I could then repeat. As I was doing so, I started modifying the hexagon to enable drawing a number of shapes that are based on just a couple of parameters. Let's begin with what I did with the hexagon and we'll take it from there.

Hexagons have six equal sides. If we imagine our starting point as the center of the hexagon, we can move around this point six times, joining each point as we go to make the sides. Let's start off by creating a <canvas> with a 2d drawing context. We'll fix the size of the canvas to 400 x 200 pixels for this example and set the center point as (200, 100).

const canvas = document.querySelector("canvas");
canvas.width = 400;
canvas.height = 200;
const ctx = canvas.getContext("2d");

const cx = 200;
const cy = 100;

Now we need to figure out the x (horizontal) and y (vertical) position of points around the center, which when joined with a line, will make six equal sides. For this, we use the measurement from the center to the point (we'll call this the radius) and the angle of direction from the center. As there are 360 degrees in a full rotation and six points we want to create, we can divide 360/6 and know we'll make a point every 60 degrees. However, there's a tiny caveat to this – JavaScript works with radians rather than degrees. One thing I always remember is that the value pi in radians is 180 degrees, or half a circle. So (Math.PI*2)/6 would give us each rotation in radians, or even simpler, Math.PI/3.

Next we need to add a bit of trigonometry to find the x and y position of each point. For the x position, we can use radius multiplied by cos(angle), and for the y position, radius multiplied by sin(angle). Let's put it all together, adding to our JavaScript code above:

// set the radius of the hexagon
const radius = 50;

// move the canvas to the center position
ctx.translate(cx, cy);

for (let i = 0; i < 6; i++) {
  // calculate the rotation
  const rotation = (Math.PI / 3) * i;
  // for the first point move to
  if (i === 0) {
    ctx.moveTo(radius * Math.cos(rotation), radius * Math.sin(rotation));
  } else {
    // for the rest draw a line
    ctx.lineTo(radius * Math.cos(rotation), radius * Math.sin(rotation));
  }
}

// close path and stroke it
ctx.closePath();
ctx.stroke();

Let's say we wanted to draw a triangle, a square, or an octagon. All we would need to modify in the code above, used to draw the hexagon, is the number of times we draw lines in our for loop and the angle for each point.
Let's turn this into a function that takes the center point, the radius, and the number of sides as parameters:

function drawShape(x, y, r, sides) {
  // move the canvas to the center position
  ctx.translate(x, y);
  // start a new path for this shape
  ctx.beginPath();

  for (let i = 0; i < sides; i++) {
    // calculate the rotation
    const rotation = ((Math.PI * 2) / sides) * i;
    // for the first point move to
    if (i === 0) {
      ctx.moveTo(r * Math.cos(rotation), r * Math.sin(rotation));
    } else {
      // for the rest draw a line
      ctx.lineTo(r * Math.cos(rotation), r * Math.sin(rotation));
    }
  }

  // close path and stroke it
  ctx.closePath();
  ctx.stroke();

  // reset the translate position
  ctx.translate(-x, -y);
}

Now we can draw different shapes by adjusting the sides parameter:

drawShape(100, 100, 50, 3);
drawShape(225, 100, 50, 7);
drawShape(350, 100, 50, 4);

This was a little introduction to the <canvas> element for drawing on a web page and a few of the methods you can use to draw shapes. If you want to dive deeper into how all the pieces work, here's a recap of what we used. To calculate the position of each point, we used a little bit of maths and trigonometry:

• Math.cos() to calculate the x position of a point
• Math.sin() to calculate the y position of a point
• Math.PI to calculate the angle of rotation in radians

To get some more inspiration for what you can do with the <canvas> element, check out the Canvas tutorial that starts off with the basics and then covers more advanced topics like animation and pixel manipulation. There are plenty of ways you can expand on this basic shape function. I like to include an inner radius, so you can create diamonds and stars. I've also experimented a little with curves instead of straight lines - feel free to experiment for yourself. Or try some tessellation, which is always fun! Let me know if you try out this function and if you like it as much as I do. As always, feel free to leave any feedback on the GitHub discussion or join us for a chat in the MDN Web Docs Discord.
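As one possible reading of the inner-radius idea mentioned above, here is a small sketch of a star-drawing variant. It assumes the same canvas and 2D context setup as the earlier examples; the drawStar name and its parameters are my own choices, not code from the original post.

```js
// Alternate between an outer and an inner radius to draw a star with `points` spikes.
function drawStar(x, y, outerR, innerR, points) {
  ctx.translate(x, y);
  ctx.beginPath();
  for (let i = 0; i < points * 2; i++) {
    const rotation = (Math.PI / points) * i;      // half-step per vertex
    const radius = i % 2 === 0 ? outerR : innerR; // even index: outer point, odd index: inner point
    if (i === 0) {
      ctx.moveTo(radius * Math.cos(rotation), radius * Math.sin(rotation));
    } else {
      ctx.lineTo(radius * Math.cos(rotation), radius * Math.sin(rotation));
    }
  }
  ctx.closePath();
  ctx.stroke();
  ctx.translate(-x, -y); // reset the translate position
}

// e.g. a five-pointed star:
// drawStar(200, 100, 50, 20, 5);
```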
{"url":"https://developer.mozilla.org/en-US/blog/javascript-shape-drawing-function/","timestamp":"2024-11-13T02:29:01Z","content_type":"text/html","content_length":"51816","record_id":"<urn:uuid:183319ee-befe-46ee-87ee-7ae5c82a7894>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00251.warc.gz"}
Ephemerides of the True Light Moon

by Albert Timashev

...not translated yet...

What is the Light Moon?

According to the Zurvanite texts, Arta (the Light Moon) is the point of Eternity, or the Undivided Time named Zurvan Akarana, which unites the three finite times of Zurvan Karana (Past, Present and Future) in one point. The three finite times are the Lunar Nodes and the Dark Moon. The Past Time is the Descending Node of the Moon, the Present Time is the Dark Moon (the lunar orbit's apogee) and the Future Time is the Ascending Node of the Moon. Such a property as symmetry or reflection, as we know, marks a state of Eternity. The aspect of opposition expresses this mirror resemblance in the circle of the Zodiac. Obviously, a symmetrical picture cannot be composed of only three points. But it is important that the Lunar Nodes are the crossings of the lunar orbit and the ecliptic, i.e. the solar orbit around the Earth, if we take the Earth as the fixed center. So we have to look for the missing link on the solar orbit. Let's now examine the solar orbit relative to the lunar orbit. In this case the Ascending Solar Node will coincide with the Descending Lunar Node, and the Descending Solar Node with the Ascending Node of the Moon. Thus the Past and the Future are interchanged for the lunar and solar orbits, i.e. they are symmetrical, as in a mirror image. Therefore the symmetry of the lunar and solar orbits' moments of the Present is necessary and sufficient for reaching the condition of Eternity, or Akarana. By analogy with the lunar orbit, the point of the Present for the Sun is the solar orbit's apogee, which can be marked as ... The Gates of Eternity are opening, and the way to Akarana becomes possible. Now we should clarify how these gates open and what these gates themselves are. One of the symbols of Eternity is the cross, the main attribute of Christianity. Zoroastrians used a cross as the image of Vara (a protective system in which Time was eternal). Therefore a cross (or the configuration of the Big Cross) is a model of Vara, which had four entrances at its corners. Thus there must be four Gates of Eternity in the shape of a cross. Who can open these Gates? These gates, we know, open only for a man with a righteous past. I.e. only the Descending Lunar Node or the Descending Solar Node (which is the Ascending Lunar one) can open the Gates of Eternity. I.e., every time the Dark Moon is in opposition to the solar apogee (diag. 1), one of the Points of Eternity must coincide with one of the Nodes.

Diag. 1

That means, if the Light Moon really exists, we have a right to declare that the point named the Light Moon above is in fact just one of the four points of the cross, making its complete revolutions clockwise with a period of about 7 years (the Lunar Nodes pass half of the circle during one revolution of the Dark Moon). Note that this formula gives the Light Moon's ... Now we can divert from the Light Moon and the Cross Situation (the Big Cross configuration) for a while. It's time to consider a few very interesting minor planets, because one piece of evidence for the correctness of all the argumentation set forth above is the scientific fact of the existence of these three asteroids:

279 Thule – cycle 8.85 years
522 Helga – cycle 6 years 11 months 6 days, or 6.92 years
2311 El Leoncito – cycle 6 years 11 months 6 days, or 6.92 years

And the last two asteroids move practically in opposition to each other!
It's very interesting to compare some empirical ephemerides of "Selena 6.11.5" (one of the early versions of the empirical Light Moon, with its cycle of 6 years 11 months and 5 days, which circulated for a time in photocopies and was later published) with the ephemerides of the asteroid 2311 El Leoncito. The fact is that this "empirical" ephemerides really is the mean (without loops) ephemerides of this asteroid! Of course, this is a coincidence. But what a coincidence! Also, we have no other known asteroids with cycles so close to 9 (8.85) and 7 (6.95) years but these three asteroids. And what is really amazing, these asteroids have their angles of inclination and eccentricities close to the same parameters of the lunar orbit! Obviously, these asteroids play the same role in our solar system as the Dark and Light Moons (the Gates of Vara) do for the Earth. Besides, we can predict the existence of two more asteroids with cycles of about 6.92 years, which would complete the whole cross situation, if, of course, they were not destroyed in one of the cosmic disasters that disturb our solar system from time to time. Let's come back to the Light Moon now. If the Light Moon makes 7/4 of a revolution relative to the Nodes (which make 1/2 of a revolution themselves) during one revolution of the Dark Moon relative to the solar apogee, then the Light Moon makes 7/4 - 1/2 = 5/4 of a revolution relative to 0 Aries. I.e. every 9 years, when the Dark Moon enters its opposition to the solar apogee, the Light Moon outruns its own previous position by approximately 90 degrees. Thereby we can say, conventionally, that it moves clockwise within its microcycle. Now let's note that the most powerful interaction between the solar and lunar apogees has to happen at the moment of their complete dimensional opposition, i.e. when their latitudes as well as their longitudes are opposite each other. The solar apogee, of course, has zero ecliptic latitude (it lies on the solar orbit). So such an event is possible when the lunar apogee has zero latitude too, or when the lunar apogee coincides with one of the Nodes. Thus the Gates of Eternity are open to their maximum breadth at the moment of the Dark Moon's conjunction with one of the Nodes in opposition to the solar apogee. This event, as we can calculate, takes place every 186 years. The last time it happened was on August 19, 1991... Then the Light Moon's macrocycle is 186 or, more exactly, 185.87676 years. And the Light Moon lags behind its previous position by approximately 90 degrees every 186 years. Thus we can say, conventionally, that it moves counter-clockwise within its macrocycle. There is a small discrepancy between the cycles, so a fault occurs in these recurrences once every 910 years on average, and the moment of maximum resonance comes one Dark Moon cycle earlier, i.e. not 186 years after the previous one, but 177 years (177.0255, to be precise). The reason for this fault is a slight outrunning of the Moon's Nodes. The Nodes outrun their previous position by 1 degree 46 minutes every 186 years relative to the line "lunar apogee - solar apogee". And if we calculate the cycle of the Nodes' "movement" in this macrocycle, it will be 185.87676 × 360° / 1°46' = 37848.623 years, which is precisely one and a half cycles of precession. Thereby we see that the Light Moon's cycle is indeed associated with the cycle of the Earth's axial movement, as has been declared repeatedly by Pavel Globa, the astrologer who is well known in Russia as the head of the Avestan School of Astrology (AShA).
Obviously, the Light Moon is a corrector of the system "Sun - Earth - Moon", and it lets the process of development release our system and open the Way to Eternity. The physical meaning of the Gates of Eternity should consist in a mutual resonance of the lunar and solar orbits through time, and it can be substantiated only by means of further development of the Time Theory established by the great Russian scientist Professor N. A. Kozyrev, a researcher at the Pulkovo Observatory.

An Astrological Interpretation and Meanings of the Light Moon

Let's now take a look at the astrological interpretation and meanings of the Light Moon. According to the orthodox view, as stated above, the Light Moon is an index of our positive karma, a point of Truth, Good and Light, a point of manifestation of our Guardian Angel. Is that entirely correct? ...not translated yet...

The Difference Between Mean and True Points

...not translated yet...

The Precision of This Ephemerides

...not translated yet...

The Explanations to the Ephemerides

...not translated yet...

The Ephemerides of the Mean Light Moon (Selena)

International Astronomical Union IAU standard expansions for the epoch J2000.0. The Author is grateful to Myles Standish from the NASA Jet Propulsion Laboratory (JPL) for support and understanding.

The History of Discovery...
{"url":"https://astrologer.ru/book/light_moon/index.html.en","timestamp":"2024-11-13T14:33:39Z","content_type":"text/html","content_length":"38498","record_id":"<urn:uuid:6a852cad-f4fc-42e4-b10c-400d5489b62c>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00561.warc.gz"}
Block rational Krylov methods for matrix equations and matrix functions

In this thesis, we describe block rational Krylov subspaces, highlighting their correspondence with rational matrices and generalizing important properties that apply to the non-block case. We develop procedures based on block rational Krylov subspaces for solving Sylvester and tensor Sylvester equations, computing functions of Hermitian hierarchically semiseparable (HSS) matrices, and applying functions of Hermitian matrices to block vectors. In the context of Sylvester equations with low-rank right-hand sides, we devise a new formulation of the residual obtained when projection methods based on block rational Krylov subspaces are utilized. This formulation allows the development of various adaptive pole selection strategies, offering a theoretical analysis of established heuristics as well as effective novel techniques. A natural extension is represented by tensor Sylvester equations, involving $d$ summands and $d$-dimensional tensors for both the unknown and the right-hand side. Methods based on projection onto Krylov subspaces typically assume a right-hand side of rank one. In this thesis, we demonstrate how to apply block rational Krylov methods to handle right-hand sides with low multilinear or tensor train rank. By extending the results established for Sylvester equations, we present a new formulation of the residual and several adaptive pole selection strategies for the tensor case as well. In the context of matrix functions, we utilize block rational Krylov methods in an unconventional manner. Our aim is to leverage the rapid convergence of block rational Krylov subspaces while circumventing costly operations such as solving large linear systems. We present a fast algorithm designed to approximate $f(A)$, where $A$ represents a Hermitian HSS matrix. We use a telescopic decomposition of $A$, inspired by the recent work of Levitt and Martinsson, allowing us to approximate $f(A)$ by recursively performing low-rank updates with block rational Krylov subspaces while keeping the size of the matrices involved in the rational Krylov subspaces small. In particular, no large-scale linear system needs to be solved, yielding favorable complexity estimates and reduced execution times compared to existing methods. Finally, we develop a memory-efficient algorithm for computing $f(A)C$, where $A$ is a Hermitian matrix (not necessarily HSS), and $C$ is a block vector. Our approach combines a block Lanczos algorithm with a basis compression technique based on block rational Krylov subspaces involving only small matrices, enabling us to avoid storing the entire Lanczos basis, resulting in significant reductions in memory usage. Theoretical results demonstrate that, for a wide variety of functions, the proposed algorithm differs from block Lanczos by an error term that is typically negligible.
The full text of the thesis is available as an open-access PDF (734.18 kB). Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14242/161361 (NBN code: URN:NBN:IT:SNS-161361).
{"url":"https://tesidottorato.depositolegale.it/handle/20.500.14242/161361","timestamp":"2024-11-12T04:06:40Z","content_type":"text/html","content_length":"44775","record_id":"<urn:uuid:4e1e8985-8070-4955-8a47-5568f3d44083>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00544.warc.gz"}
Existence Results for Optimal Control Problems with Some Special Nonlinear Dependence on State and Control

Pedregal, Pablo; Tiago, Jorge

SIAM Journal on Control and Optimization, 48(2) (2009), 415-437

We present a general approach to prove existence of solutions for optimal control problems not based on typical convexity conditions, which quite often are very hard, if not impossible, to check. By taking advantage of several relaxations of the problem, we isolate an assumption which guarantees the existence of solutions of the original optimal control problem. To show the validity of this crucial hypothesis through various means and in various contexts is the main goal of this paper. In each such situation, we end up with some existence result. In particular, we would like to stress a general result that takes advantage of the particular structure of both the cost functional and the state equation. One main motivation for our work here comes from a model for guidance and control of ocean vehicles. Some explicit existence results and comparison examples are given.
{"url":"https://cemat.tecnico.ulisboa.pt/document.php?project_id=6&member_id=89&doc_id=1747","timestamp":"2024-11-13T18:06:29Z","content_type":"text/html","content_length":"8937","record_id":"<urn:uuid:7a4b5df0-47b1-4d51-906b-5a7d9871fb42>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00713.warc.gz"}
The Perspective and Orthographic Projection Matrix

The Perspective Projection Matrix

The OpenGL Perspective Projection Matrix

In all OpenGL books and references, the perspective projection matrix used in OpenGL is defined as:

$$ \left[\begin{array}{cccc} \frac{2n}{r-l} & 0 & \frac{r + l}{r-l} & 0 \\ 0 & \frac{2n}{t-b} & \frac{t + b}{t-b} & 0 \\ 0 & 0 & -\frac{f+n}{f-n} & -\frac{2fn}{f-n}\\ 0 & 0 & -1 & 0\\ \end{array}\right] $$

What similarities does this matrix have with the matrix we studied in the previous chapter? It is important to remember that matrices in OpenGL are defined using a column-major order, as opposed to row-major order. In the lesson on Geometry, we explained that to transition from one order to the other, one can simply transpose the matrix. If we transpose the above matrix, we get:

$$ \left[\begin{array}{cccc} \frac{2n}{r-l} & 0 & 0 & 0 \\ 0 & \frac{2n}{t-b} & 0 & 0 \\ \frac{r + l}{r-l} & \frac{t + b}{t-b} & -\frac{f+n}{f-n} & {\color{red}{-1}}\\ 0 & 0 & -\frac{2fn}{f-n} & 0\\ \end{array}\right] $$

This is the matrix we would use on Scratchapixel, as we use row vectors. However, in OpenGL, you would use the first matrix, as OpenGL uses column vectors by default, though this can be changed in OpenGL 4.x and modern real-time 3D graphics APIs such as Vulkan. Pay attention to the element in red (third row and fourth column). When we multiply a homogeneous point with this matrix, the point's \(w\) coordinate is multiplied by this element, and the value of \(w\) ends up being the projected point's \(z\) coordinate:

$$ \left[\begin{array}{cccc}x' & y' & z' & w'\end{array}\right] = \left[\begin{array}{cccc}x & y & z & w = 1\end{array}\right] * \left[\begin{array}{cccc} \frac{2n}{r-l} & 0 & 0 & 0 \\ 0 & \frac{2n}{t-b} & 0 & 0 \\ \frac{r + l}{r-l} & \frac{t + b}{t-b} & -\frac{f+n}{f-n} & {\color{red}{-1}}\\ 0 & 0 & -\frac{2fn}{f-n} & 0\\ \end{array}\right] $$

$$P'_w = 0 \cdot P_x + 0 \cdot P_y - 1 \cdot P_z + 0 = -P_z.$$

In summary, the matrix is correctly set up for the z-divide: dividing by \(w'\) amounts to dividing by \(-P_z\).

Let's now examine how points are projected in OpenGL (or in Vulkan, Metal, Direct3D, or WebGL). The principle remains the same as discussed in the previous chapter. A line is drawn from the camera's origin to the point \(P\) that we want to project, and the intersection of this line with the image plane determines the position of the projected point \(P_s\). While the setup mirrors that shown in figure 1 from the previous chapter, it's important to note that in OpenGL, the image plane is situated on the near clipping plane, as opposed to being precisely one unit away from the camera's origin.

Figure 1: We use the property of similar triangles to find the position of \(P_s\).

The technique of using similar triangles, as employed in chapter 1, is applicable here as well. The triangles \(\Delta ABC\) and \(\Delta DEF\) are similar.
Thus, we can express: $$\frac{AB}{DE} = \frac{BC}{EF}.$$ By substituting \(AB\) with \(n\) (the near clipping plane), \(DE\) with \(P_z\) (the z-coordinate of \(P\)), and \(EF\) with \(P_y\) (the y-coordinate of \(P\)), we can rewrite this equation as (equation 1): $$\frac{n}{-P_z} = \frac{BC}{P_y} \rightarrow BC = P_s{}_y = \frac{n \cdot P_y}{-P_z}.$$ As observed, the only difference from the equation in the previous chapter is the inclusion of \(n\) in the numerator. However, the principle of division by \(P_z\) remains unchanged (noting that since the camera is oriented along the negative z-axis, \(P_z\) is negative: \(P_z < 0\)). To maintain the y-coordinate of the projected point as positive, given that \(P_y\) is positive, we negate \(P_z\). Following the same logic, we derive the x-coordinate of the projected point with the following equation (equation 2): $$P_s{}_x = \frac{n \cdot P_x}{-P_z}.$$

Figure 2: The frustum or viewing volume of a camera is defined by the camera's field of view, the near and far clipping planes, and the image aspect ratio. In OpenGL, points are projected onto the front face of the frustum (the near clipping plane).

Having determined the values for \(P_s{}_x\) and \(P_s{}_y\), we now need to elucidate how they correlate with the OpenGL perspective matrix. The purpose of a projection matrix is to remap the values projected onto the image plane to a unit cube (defined by minimum and maximum extents of \((-1,-1,-1)\) and \((1,1,1)\), respectively). Once the point \(P\) is projected onto the image plane, \(P_s\) is considered visible if its \(x\) and \(y\) coordinates fall within the range \([left, right]\) for \(x\) and \([bottom, top]\) for \(y\), as depicted in Figure 2. While we have previously discussed in the lesson 3D Viewing: the Pinhole Camera Model how the \(left\), \(right\), \(bottom\), and \(top\) coordinates are calculated, we will revisit this explanation in this chapter. These screen coordinates set the limits or boundaries on the image plane for visible points (all points contained in the viewing frustum and projected onto the image plane).

Assuming \(P_s{}_x\) is visible, it can be expressed as: $$l \leq P_s{}_x \leq r,$$ where \(l\) and \(r\) represent the left and right coordinates, respectively. Our objective is to remap \(P_s{}_x\) so that its final value resides within the range \([-1,1]\) (the dimensions of the unit cube along the \(x\)-axis). Reiterating the equations introduced in the previous lesson, let's start by subtracting \(l\) from all terms to rewrite the equation as: $$0 \leq P_s{}_x - l \leq r - l.$$ Normalizing the term on the right by dividing all terms of this formula by \(r-l\) yields: $$0 \leq \frac{P_s{}_x - l}{r-l} \leq 1.$$ Multiplying all terms by 2 gives: $$0 \leq 2\frac{P_s{}_x - l}{r-l} \leq 2.$$ Subtracting 1 from all terms results in: $$-1 \leq 2\frac{P_s{}_x - l}{r-l} - 1 \leq 1.$$ This remaps the central term to the range \([-1,1]\), which was our goal, though the terms can be further rearranged: $$-1 \leq 2 \frac{P_s{}_x - l}{r-l} - \frac{r-l}{r-l} \leq 1.$$ Developing this, we obtain: $$-1 \leq \frac{2P_s{}_x - 2l - r + l}{r-l} \leq 1.$$ $$-1 \leq \frac{2P_s{}_x - l - r}{r-l} \leq 1 \rightarrow -1 \leq \frac{2P_s{}_x}{r-l} - \frac{r + l}{r - l} \leq 1.$$ These two terms are quite similar to the first two terms of the first row in the OpenGL perspective projection matrix. We are getting closer.
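Before carrying on, a quick numeric sanity check of the remapping we just derived may be helpful (a small Python sketch; the values chosen for l and r are arbitrary):

l, r = -2.0, 3.0

def remap(x, l, r):
    # The central term of the inequality: 2 * (x - l) / (r - l) - 1
    return 2 * (x - l) / (r - l) - 1

assert remap(l, l, r) == -1.0           # the left boundary maps to -1
assert remap(r, l, r) == 1.0            # the right boundary maps to +1
assert remap((l + r) / 2, l, r) == 0.0  # the midpoint maps to the centre of the range

With the algebra confirmed, we can return to the derivation.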
If we replace \(Ps_x\) from the previous equation with equation 2, we get: $$-1 \leq \dfrac{2n P_x}{-P_z(r-l)} - \dfrac{r + l}{r - l} \leq 1.$$ We can easily encode this equation in matrix form. If we replace the first and third coefficients of the matrix's first row with the first and second term of this formula, here is what we get: $$ \left[\begin{array}{cccc} \dfrac{2n}{r-l} & 0 & \dfrac{r + l}{r-l} & 0 \\ \ldots & \ldots & \ldots & \ldots \\ \ldots & \ldots & \ldots & \ldots \\ 0 & 0 & -1 & 0\\ \end{array}\right] $$ Remember, the OpenGL matrix uses column-major ordering, therefore we will have to place the multiplication sign to the right of the matrix and the point coordinates in a column vector: $$ \left[\begin{array}{cccc} \dfrac{2n}{r-l} & 0 & \dfrac{r + l}{r-l} & 0 \\ \ldots & \ldots & \ldots & \ldots \\ \ldots & \ldots & \ldots & \ldots \\ 0 & 0 & -1 & 0\\ \end{array}\right] * \left[\begin{array}{c}x \\ y \\ z \\ w\end{array}\right] $$ Computing \(Ps_x\) using this matrix yields: $$Ps_x = \dfrac{2n}{r-l} P_x + 0 \cdot P_y + \dfrac{r + l}{r-l} \cdot P_z + 0 \cdot P_w.$$ You should be familiar with the concept of matrix-vector multiplication at this point, as well as the concept of row- versus column-major vectors and matrices. In this particular example, we use column-major vector notation (that's the convention used by OpenGL, not by Scratchapixel; we prefer the row-major notation). To compute the transformed value of the first coordinate (x), you need to use the coefficients of the matrix's first row and the vector's coordinates in the following way: $$Px_{transform} = M_{00} \cdot Px + M_{01} \cdot Py + M_{02} \cdot Pz + M_{03} \cdot Pw.$$ If you are not familiar with these concepts, read the lesson on Geometry.

And since \(Ps_x\) will be divided at the end of the process by \(-P_z\) when we convert \(Ps\) from homogeneous to Cartesian coordinates, we get: $$ Ps_x = \frac{\frac{2n}{r-l} P_x}{-P_z} + \frac{\frac{r + l}{r-l} P_z}{-P_z} \rightarrow \frac{2n P_x}{-P_z(r-l)} - \frac{r + l}{r-l}. $$ This is the first coordinate of the projected point \(Ps\) computed using the OpenGL perspective matrix.

The derivation is quite lengthy, and we will skip it for \(Ps_y\). However, if you follow the steps we used for \(Ps_x\), doing it yourself shouldn't be a problem. You just need to replace \(l\) and \(r\) with \(b\) and \(t\), and you end up with the following formula: $$-1 \leq \frac{2n P_y}{-P_z(t-b)} - \frac{t + b}{t - b} \leq 1.$$ We can achieve this result with point-matrix multiplication if we replace the second and third coefficients of the matrix's second row with the first and second terms of this equation: $$ \left[\begin{array}{cccc} \frac{2n}{r-l} & 0 & \frac{r + l}{r-l} & 0 \\ 0 & \frac{2n}{t-b} & \frac{t + b}{t-b} & 0 \\ \ldots & \ldots & \ldots & \ldots \\ 0 & 0 & -1 & 0\\ \end{array}\right] $$ Computing \(Ps_y\) using this matrix gives: $$Ps_y = 0 \cdot P_x + \frac{2n}{t-b} \cdot P_y + \frac{t + b}{t-b} \cdot P_z + 0 \cdot P_w$$ and after the division by \(-P_z\): $$Ps_y = \frac{\frac{2n}{t-b} P_y}{-P_z} + \frac{\frac{t + b}{t-b} P_z}{-P_z} \rightarrow \frac{2n P_y}{-P_z(t-b)} - \frac{t + b}{t-b}$$ Our matrix works again. All that's left to do to complete it is find a way to remap the z-coordinate of the projected points to the range [-1,1]. We know that the x- and y-coordinates of \(P\) don't contribute to the calculation of the projected point's z-coordinate.
Thus, the first and second coefficients of the matrix's third row, which would be multiplied by \(P\)'s x- and y-coordinates, are necessarily zero (in green). We are left with two coefficients, \(A\) and \(B\), in the matrix which are unknown (in red). $$ \left[\begin{array}{cccc} \frac{2n}{r-l} & 0 & \frac{r + l}{r-l} & 0 \\ 0 & \frac{2n}{t-b} & \frac{t + b}{t-b} & 0 \\ \color{green}{0} & \color{green}{0} & \color{red}{A} & \color{red}{B}\\ 0 & 0 & -1 & 0 \\ \end{array}\right] $$ If we write the equation to compute \(Ps_z\) using this matrix, we get (remember that \(Ps_z\) is also divided by \(Ps_w\) when the point is converted from homogeneous to Cartesian coordinates, and that \(P_w = 1\)): $$ Ps_z = \frac{0 \cdot P_x + 0 \cdot P_y + A \cdot P_z + B \cdot P_w}{Ps_w = -P_z} \rightarrow \frac{A \cdot P_z + B}{Ps_w = -P_z}. $$ We need to find the values of A and B. Fortunately, we know that when \(P\) is on the near clipping plane, \(Ps_z\) needs to be remapped to -1, and when \(P\) is on the far clipping plane, \(Ps_z\) needs to be remapped to 1. Therefore, we need to substitute these two positions for \(P_z\) in the equation to get two new equations (note that the z-coordinate of visible points is negative, while \(n\) and \(f\) are positive, so we substitute \(-n\) and \(-f\)): $$ \left\{ \begin{array}{ll} \frac{(P_z=-n)A + B}{(-P_z=-(-n)=n)} = -1 & \text{ when } P_z = -n\\ \frac{(P_z=-f)A + B}{(-P_z=-(-f)=f)} = 1 & \text{ when } P_z = -f \end{array} \right. \rightarrow \left\{ \begin{array}{ll} {-nA + B} = -n & (1)\\ {-fA + B} = f & (2) \end{array} \right. $$ Let's solve for B in equation 1: $$B = -n + An.$$ And substitute B in equation 2 with this expression: $$-fA - n + An = f.$$ Then solve for A: $$-fA + An = f + n \rightarrow -(f - n)A = f + n \rightarrow A = -\frac{f + n}{f - n}.$$ Now that we have a solution for A, finding B is straightforward. We just replace A in equation 1 to find B: $$B = -n + An = -n - \frac{f + n}{f - n} n = -\left(1+\frac{f + n}{f - n}\right) n = - \frac{(f - n + f + n)\,n}{f - n} = -\frac{2fn}{f - n}.$$ We can replace the solutions we found for A and B in our matrix, and we finally get: $$\left[\begin{array}{cccc} \frac{2n}{r-l} & 0 & \frac{r + l}{r-l} & 0 \\ 0 & \frac{2n}{t-b} & \frac{t + b}{t-b} & 0 \\ 0 & 0 & -\frac{f+n}{f-n} & -\frac{2fn}{f-n}\\ 0 & 0 & -1 & 0\\ \end{array}\right], $$ which is the OpenGL perspective projection matrix.

Should Z be in Range \([-1,1]\) or \([0,1]\)? This Sadly Matters

Whether Z should be in the range \([-1,1]\) or \([0,1]\) is a valid question. So far, we've chosen to remap \(z\) to the range \([-1,1]\) and have provided the equations for that. Most perspective matrix code remaps Z to that range. However, many (if not most) real-time graphics APIs, such as OpenGL and Vulkan, expect the depth range to be within \([0,1]\). Technically, these graphics APIs allow you to define this range more or less as you like when you create your graphics pipeline. In Vulkan, you set minDepth and maxDepth in the VkViewport structures passed through the pViewports member of the VkPipelineViewportStateCreateInfo structure. In OpenGL, these values are set through glDepthRange. Now, you can achieve this in two ways. You can modify the original perspective projection matrix so that the Z-coordinate directly remaps to \([0,1]\): $$ \left\{ \begin{array}{ll} \frac{(P_z = -n)A + B}{(-P_z = -(-n) = n)} = 0 & \text{ when } P_z = -n \\ \frac{(P_z = -f)A + B}{(-P_z = -(-f) = f)} = 1 & \text{ when } P_z = -f \end{array} \right. \rightarrow \left\{ \begin{array}{ll} {-nA + B} = 0 & (1) \\ {-fA + B} = f & (2) \end{array} \right. $$ From which we can derive: $$ A = -\frac{f}{f-n} $$ $$ B = -\frac{fn}{f-n} $$ Another solution simply consists of using the original matrix and letting the real-time graphics API remap these values for you. This is possible in OpenGL because when points are transformed from NDC space to window space (or raster space, if you prefer), the \(z\) is remapped as follows: $$ \begin{pmatrix} x_w \\ y_w \\ z_w \\ \end{pmatrix} = \begin{pmatrix} \frac{p_x}{2} x_d + o_x \\ \frac{p_y}{2} y_d + o_y \\ \frac{f - n}{2} z_d + \frac{n + f}{2} \end{pmatrix} $$ Don't worry too much about \(x_w\) and \(y_w\) here. We are only interested in \(z_w\) (by the way, \(w\) stands for window space here). In OpenGL, at this particular stage, the variables \(n\) and \(f\) are set to the minDepth and maxDepth values that we spoke about earlier. So if those are 0 and 1 respectively, this is similar to \(\frac{1}{2} z_d + \frac{1}{2}\), which effectively remaps points in the range \([-1,1]\) to the range \([0,1]\). Things get a little more complicated in APIs such as Vulkan.

Depth Buffer Precision Issues

Figure 3: The remapping of the projected point's z coordinate is nonlinear. This graph shows the result of \(\scriptsize Ps_z\) for near = 1 and far = 5.

The remapping of the z-coordinate gives points closer to the camera greater numerical precision than points further away. As discussed in the previous chapter, this characteristic can lead to issues where the lack of numerical precision results in adjacent samples receiving identical depth values after projection onto the screen, despite having distinct z-coordinates in world space. This phenomenon, known as z-fighting, poses a challenge. Although the problem cannot be entirely eliminated (given the inherent limitations of precision in single-precision floating-point numbers), it can be mitigated by carefully adjusting the near and far clipping planes to align as closely as possible with the nearest and furthest objects visible in the scene. This rationale underlines the importance of precise clipping plane adjustments.

The Field of View and Image Aspect Ratio

You may have noticed that, so far, we haven't made any reference to the camera's field of view (FOV) and image aspect ratio. However, as mentioned in the previous chapter and the lesson on cameras (in the basic section), changing the FOV alters the extent of the scene viewed through the camera. Thus, the field of view and the image aspect ratio are somehow related to the projection process. We deliberately ignored this detail until now to stay focused on the OpenGL perspective projection matrix, which doesn't directly depend on the camera's field of view, but it does so indirectly. The construction of the matrix relies on six parameters: the left, right, bottom, and top coordinates, as well as the near and far clipping planes. The user provides the values for the near and far clipping planes, but how about the left, right, bottom, and top coordinates? What are these, where do they come from, and how do we calculate them? Observing Figures 2 and 5, you can see that these coordinates correspond to the lower-left and upper-right corners of the frustum front face, where the image of the 3D scene is projected.

Computing the Coordinates

Figure 4: Side view of the camera. The triangle ACD's apex defines the camera's vertical field of view (FOV).
The image plane location is determined by the near-clipping plane distance. Using simple trigonometry, the top coordinate can be computed from these two values (the FOV and the near clipping plane). To compute the top coordinate, we look at the right-angled triangle ABC. The angle subtended by AB and AC is half the FOV, and the adjacent side of the triangle is the value for the near-clipping plane. Using trigonometry, we can express this as: $$\tan\left( \frac{FOVY}{2}\right) = \frac{opposite}{adjacent} = \frac{BC}{AB} = \frac{top}{near}$$ $$ top = \tan\left( \frac{FOVY}{2}\right) * near $$ And since the bottom half of the camera is symmetrical to the upper half, we can state that: $$bottom = -top$$ The angle of view can either be defined vertically or horizontally. OpenGL tends to define the field of view as vertical (hence the Y in FOVY), but on Scratchapixel, we use a horizontal angle of view, similar to Maya and RenderMan.

Figure 5: The image can be square (left) or rectangular (right). Note that the bottom-left coordinates and the top-right coordinates are symmetric about the x- and y-axis.

In Figure 5, two scenarios are considered: the image can either be square or rectangular. For a square camera, it's straightforward: the left and bottom coordinates are the same, the right and top coordinates are also the same, and mirroring the bottom-left coordinates around the x- and y-axis gives the top-right coordinates. Therefore, if we compute the top coordinate, we can easily set the other three: $$ \begin{array}{l} top = \tan( \frac{FOV}{2}) * near \\ right = top \\ left = bottom = -top \end{array} $$ For a non-square camera, as shown on the right side of Figure 5, computing the coordinates becomes slightly more complicated. The bottom and top coordinates remain the same, but the left and right coordinates are scaled by the aspect ratio, defined as the image width over the image height. The general formulas for computing the left, right, bottom, and top coordinates are: $$ \begin{array}{l} aspect\;ratio = \frac{width}{height}\\ top = \tan( \frac{FOV}{2}) * near \\ bottom = -top \\ right = top * aspect\;ratio\\ left = -top * aspect\;ratio \end{array} $$ Thus, the camera's field of view and image aspect ratio are crucial in calculating the left, right, bottom, and top coordinates, which in turn are used in constructing the perspective projection matrix. This is how they indirectly influence how much of the scene is visible through the camera.

Test Program

To test the OpenGL perspective projection matrix, we will reuse the code from the previous chapter. In the old fixed-function rendering pipeline, two functions, gluPerspective (part of the GLU library) and glFrustum, were utilized to set the screen coordinates and the projection matrix. These functions are deprecated (since OpenGL 3.1) in the new programmable rendering pipeline, but we use them in this lesson to demonstrate their implementation based on what we have learned in this chapter. You can still emulate them in your CPU program if desired. Setting up the perspective projection matrix in OpenGL was achieved through a call to glFrustum. This function accepted six arguments: glFrustum(float left, float right, float bottom, float top, float near, float far); The implementation of this function is shown in the listing below. The function gluPerspective was used to set the screen coordinates, taking as arguments the angle of view, the image aspect ratio (image width over image height), and the clipping planes.
void gluPerspective(float fovy, float aspect, float zNear, float zFar);

In OpenGL, the angle of view is defined as the vertical angle (hence the 'y' in the variable name). On Scratchapixel, we use the horizontal angle of view. An implementation of this function is provided in the listing below. The rest of the code remains unchanged. We first compute the screen coordinates, then the projection matrix. Next, we iterate over all the vertices of the teapot geometry, transform them from object/world space to camera space, and finally project them onto the screen using the perspective projection matrix. Remember, the matrix remaps the projected point to NDC space. Thus, as in the previous version of the code, visible points fall within the range [-1,1] in height and [-imageAspectRatio, imageAspectRatio] (or [-1,1] if the image is square) in width.

#include <cstdio>
#include <cstdlib>
#include <cstdint>
#include <cstring>    // for memset (added so the listing compiles on its own)
#include <cmath>      // for tan and M_PI (added so the listing compiles on its own)
#include <algorithm>  // for std::min (added so the listing compiles on its own)
#include <fstream>
#include "geometry.h"
#include "vertexdata.h"

// Compute screen coordinates first
void gluPerspective(
    const float &angleOfView, const float &imageAspectRatio,
    const float &n, const float &f,
    float &b, float &t, float &l, float &r)
{
    float scale = tan(angleOfView * 0.5 * M_PI / 180) * n;
    r = imageAspectRatio * scale, l = -r;
    t = scale, b = -t;
}

// Set the OpenGL perspective projection matrix
void glFrustum(
    const float &b, const float &t, const float &l, const float &r,
    const float &n, const float &f,
    Matrix44f &M)
{
    // Set OpenGL perspective projection matrix
    M[0][0] = 2 * n / (r - l);
    M[0][1] = 0;
    M[0][2] = 0;
    M[0][3] = 0;

    M[1][0] = 0;
    M[1][1] = 2 * n / (t - b);
    M[1][2] = 0;
    M[1][3] = 0;

    M[2][0] = (r + l) / (r - l);
    M[2][1] = (t + b) / (t - b);
    M[2][2] = -(f + n) / (f - n);
    M[2][3] = -1;

    M[3][0] = 0;
    M[3][1] = 0;
    M[3][2] = -2 * f * n / (f - n);
    M[3][3] = 0;
}

void multPointMatrix(const Vec3f &in, Vec3f &out, const Matrix44f &M)
{
    // out = in * Mproj (in.w = 1 assumed)
    out.x   = in.x * M[0][0] + in.y * M[1][0] + in.z * M[2][0] + M[3][0];
    out.y   = in.x * M[0][1] + in.y * M[1][1] + in.z * M[2][1] + M[3][1];
    out.z   = in.x * M[0][2] + in.y * M[1][2] + in.z * M[2][2] + M[3][2];
    float w = in.x * M[0][3] + in.y * M[1][3] + in.z * M[2][3] + M[3][3];

    // Normalize if w is different than 1 (convert from homogeneous to Cartesian coordinates)
    if (w != 1) {
        out.x /= w;
        out.y /= w;
        out.z /= w;
    }
}

int main(int argc, char **argv)
{
    uint32_t imageWidth = 512, imageHeight = 512;
    Matrix44f Mproj;
    Matrix44f worldToCamera;
    worldToCamera[3][1] = -10;
    worldToCamera[3][2] = -20;
    float angleOfView = 90;
    float near = 0.1;
    float far = 100;
    float imageAspectRatio = imageWidth / (float)imageHeight;
    float b, t, l, r;
    gluPerspective(angleOfView, imageAspectRatio, near, far, b, t, l, r);
    glFrustum(b, t, l, r, near, far, Mproj);
    unsigned char *buffer = new unsigned char[imageWidth * imageHeight];
    memset(buffer, 0x0, imageWidth * imageHeight);
    for (uint32_t i = 0; i < numVertices; ++i) {
        Vec3f vertCamera, projectedVert;
        multPointMatrix(vertices[i], vertCamera, worldToCamera);
        multPointMatrix(vertCamera, projectedVert, Mproj);
        if (projectedVert.x < -imageAspectRatio || projectedVert.x > imageAspectRatio ||
            projectedVert.y < -1 || projectedVert.y > 1) continue;
        // Convert to raster space and mark the vertex position on the image with a simple dot
        uint32_t x = std::min(imageWidth - 1, (uint32_t)((projectedVert.x + 1) * 0.5 * imageWidth));
        uint32_t y = std::min(imageHeight - 1, (uint32_t)((1 - (projectedVert.y + 1) * 0.5) * imageHeight));
        buffer[y * imageWidth + x] = 255;
    }
    // Export to image
    std::ofstream ofs;
    ofs.open("./out.ppm"); // output file name assumed; pick any path you like
    ofs << "P5\n" << imageWidth << " " << imageHeight << "\n255\n";
    ofs.write((char*)buffer, imageWidth * imageHeight);
    delete [] buffer;
    return 0;
}

We noted in the first chapter that even if matrices are constructed differently (and appear different), they should always yield the same result: a point in 3D space should be projected to the same pixel location on the image. Comparing the results of projecting the teapot's vertices using the first matrix with those using the same camera settings (same field of view, image aspect ratio, near and far clipping planes) and the OpenGL perspective projection matrix produces identical images (see image below). The source code of this program is available on Scratchapixel's GitHub repo.
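As an additional check (not part of the original program), the behaviour of the matrix derived in this chapter can be verified numerically. The short Python sketch below builds the matrix coefficients from arbitrary camera settings and confirms that a point on the near clipping plane maps to z = -1, a point on the far clipping plane maps to z = +1, and that x and y after the perspective divide match the direct projection formulas followed by the [-1,1] remapping:

import math

fov, aspect, near, far = 90.0, 1.0, 0.1, 100.0
scale = math.tan(math.radians(fov * 0.5)) * near
r, t = aspect * scale, scale
l, b = -r, -t

def project(x, y, z):
    # Multiply the column vector (x, y, z, 1) by the OpenGL perspective matrix,
    # then divide by w (= -z) to convert from homogeneous to Cartesian coordinates.
    xp = (2 * near / (r - l)) * x + ((r + l) / (r - l)) * z
    yp = (2 * near / (t - b)) * y + ((t + b) / (t - b)) * z
    zp = (-(far + near) / (far - near)) * z - 2 * far * near / (far - near)
    w = -z
    return xp / w, yp / w, zp / w

# Points on the near and far clipping planes map to -1 and +1 respectively
assert abs(project(0.02, -0.03, -near)[2] + 1) < 1e-9
assert abs(project(5.0, 2.0, -far)[2] - 1) < 1e-9

# x and y match the direct projection followed by the [-1,1] remapping
x, y, z = 0.3, -0.2, -1.7
sx, sy = near * x / -z, near * y / -z
assert abs(project(x, y, z)[0] - (2 * sx / (r - l) - (r + l) / (r - l))) < 1e-9
assert abs(project(x, y, z)[1] - (2 * sy / (t - b) - (t + b) / (t - b))) < 1e-9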
{"url":"https://www.scratchapixel.com/lessons/3d-basic-rendering/perspective-and-orthographic-projection-matrix/opengl-perspective-projection-matrix.html","timestamp":"2024-11-08T07:23:58Z","content_type":"text/html","content_length":"33402","record_id":"<urn:uuid:3a807385-05cd-42d1-a787-9e9b26889694>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00261.warc.gz"}
Unpacking the OR Gate Definition in Computer Organisation and Architecture

An essential part of understanding computer organisation and architecture is examining how different types of logic gates, such as the OR Gate, factor into the functionality of computing systems. The field of computer science involves numerous concepts, but none are so integral as the understanding of these fundamental building blocks.

History and Explanation of OR Gate

In terms of logic gates, the OR Gate is a basic digital logic gate that implements logical disjunction; it behaves according to the truth table below. The origin of the term 'OR Gate' can be traced back to Boolean logic, conceptualised by mathematician George Boole in the mid-19th century. Boolean logic serves as the basis for modern digital computer logic design.

INPUT A   INPUT B   OUTPUT
0         0         0
0         1         1
1         0         1
1         1         1

Components That Form an OR Gate

An OR Gate can be physically created using various components, amongst the most common are:
• Transistors: This approach utilises semiconductors to open or close circuits, hence manipulating the output.
• Relays: The use of electromagnetic elements for controlling the output.
• Diodes: These components allow current to flow in only one direction, assisting in the gate's functionality.

How Does an OR Gate Work?

If you imagine an OR Gate as a room with two entrances, then a person can enter the room if Door A is open, Door B is open, or both doors are open. Similarly, in terms of logic, the OR Gate produces an output (1/True) if either of its inputs is a 1/True or if both inputs are 1/True. If neither input is 1/True, then the gate's output is 0/False. This logical behaviour can be mathematically represented by \[ Y = A + B \], where \( Y \) is the output and \( A \) and \( B \) are inputs.

def or_gate(A, B):
    return A or B

Considering the complexity of today's digital systems, understanding the basic principles of these foundational components is absolutely pivotal in the world of computer science. From simplistic logic operations to intricate circuitry designs, OR Gates play a crucial part in shaping the digital world.

Delving into OR Gate Properties and Characteristics

When exploring the characteristics and properties of an OR Gate, one must be conscious of its foundational functionality within the realm of digital and binary logic. These characteristics essentially dictate how the gate operates, interacts with other elements in a circuit, and provides the expected output.

Fundamental Properties of an OR Gate

By nature, an OR Gate adheres to three primary properties considered fundamental to its operation:
• Commutative Law: This law entails that the order in which the binary inputs are presented does not affect the resultant output. For example, given two inputs A and B, \( A + B = B + A \).
• Associative Law: In the context of multiple OR Gates, the configuration of the brackets doesn't affect the output. An illustration would be \( (A + B) + C = A + (B + C) \).
• Idempotent Law: This law implies that the duplication of inputs doesn't modify the output. Therefore, \( A + A = A \).

These laws anchor the basic functionality of an OR Gate, which forms a cardinal part of the comprehensive command of digital logic, a crucial phenomenon in computer science.
Let's consider a simple example of how certain properties of an OR Gate operate from a computational perspective:

def or_gate(A, B):
    return A or B

# Let's test the commutative law
assert(or_gate(1, 0) == or_gate(0, 1))

# Let's test the associative law
assert(or_gate(or_gate(1, 0), 0) == or_gate(1, or_gate(0, 0)))

# Let's test the idempotent law
assert(or_gate(1, 1) == 1)

Understanding the OR Gate Properties Through Binary Logic

A practical approach to grasp the characteristics of an OR Gate centres on binary logic. Observing how OR Gates behave with binary inputs - either 0 or 1 - allows a deeper understanding of the logic and functionality behind these gates. Here are some key elements to take note of:
• Logic High: An OR Gate produces a 'logic high' (1) if at least one input is high.
• Logic Low: On the other extreme, an OR Gate returns 'logic low' (0) only when all inputs are low.
• Duality: Intriguingly, one can form an AND Gate by interchanging 1s and 0s in the OR Gate's truth table.

Now, let's analyse a binary example. Suppose we take the binary inputs 1101 and 1011. Performing the OR operation on each corresponding pair of bits (1 OR 1, 1 OR 0, 0 OR 1, 1 OR 1) gives 1111. This example epitomises how OR Gates operate for an array of binary inputs and further reinforces the understanding of the utilisation of these gates within the broader ambit of computer science and digital logic.

Mastering the OR Gate Truth Table

Deeply rooted in the principles of Boolean algebra, the OR Gate Truth Table is a handy tool when deciphering the output of an OR Gate based on its inputs. Let's take a closer look at its components, how to read and interpret it, and some real-life examples to make the concept crystal clear.

Components of the OR Gate Truth Table

An OR Gate can have any number of inputs but always has one output. The simplest form of an OR Gate - the 2-input OR Gate - has a truth table that consists of two input columns (A and B), and one output column (Y). Each row of this table represents a possible combination of inputs and the resultant output. Here are the four possible scenarios:

A   B   Y
0   0   0
0   1   1
1   0   1
1   1   1

It is important to note that this truth table displays the principle of the digital circuit called the OR Gate. As the number of inputs to the OR Gate increases, the number of rows in the truth table doubles with each additional input.

Reading and Interpreting the OR Gate Truth Table

The OR Gate Truth Table is fairly straightforward to read and interpret. The '0' and '1' numbers within the table are binary representations, which are used to depict the logic states of the gate. A '0' represents a logic low state or false, and '1' represents a logic high state or true. The OR Gate will output a '1' when any input or a combination of inputs is at a high state. If all inputs are '0', the output will also be '0'. The behaviour follows the fundamental formula of Boolean algebra for the OR operation, \( Y = A + B \), where \( Y \) depicts the output and \( A \) and \( B \) are the inputs.

# Python Code
A = [0, 0, 1, 1]
B = [0, 1, 0, 1]
# Outputs [0, 1, 1, 1]
Y = [a or b for a, b in zip(A, B)]

The Python code listed above mirrors the OR Gate's operation. It follows the same order of inputs as our truth table and combines them using the logical OR operator. This code and its output clearly elucidate how to read and interpret the OR Gate Truth Table effectively.
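The multi-bit example from earlier (1101 OR 1011) can also be reproduced directly with Python's bitwise OR operator, and the truth table for any number of inputs can be generated programmatically; this short sketch also shows why the number of rows doubles with each additional input:

from itertools import product

# The 4-bit example from the previous section: 1101 OR 1011 = 1111
assert 0b1101 | 0b1011 == 0b1111

def or_truth_table(n):
    # All 2**n input combinations of an n-input OR gate, with the resulting output
    return [(bits, int(any(bits))) for bits in product((0, 1), repeat=n)]

for bits, out in or_truth_table(2):
    print(bits, "->", out)   # (0, 0) -> 0, (0, 1) -> 1, (1, 0) -> 1, (1, 1) -> 1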
Real-life Examples to Understand the OR Gate Truth Table

Understanding the OR Gate Truth Table and its principles brings you closer to fully comprehending the logic behind many everyday digital devices and operations. For instance:

The operation of a burglar alarm is highly comparable to an OR Gate. If either one or all of the windows are open (input states of '1'), the alarm will sound (output state of '1'). The alarm will only stay silent when all windows are closed (input state of '0'). This functionality closely echoes the OR Gate Truth Table's logic.

In the digital realm, the search function in databases or web pages is another real-life implementation of the OR Gate. If you search for "apples OR oranges," the search engine will return all entries containing either "apples," "oranges," or both.

Consider a light system controlled by two switches - each switch can independently turn the light on. This is an OR operation similar to the OR Gate Truth Table. If either of the switches or both are turned on (input state '1'), the light turns on (output state '1'). The light only goes off when both the switches are off (input state '0').

As these examples demonstrate, the concepts underpinning the OR Gate Truth Table permeate all digital systems around us. These concepts are key to the functioning of vast arrays of digital and real-world applications and elevate your understanding of the digital world.

Diversity of OR Gate Applications in Computer Science

OR Gates, beyond their basic role in digital logic circuits, stretch out far and wide into various applications within the realm of computer science. Their presence is strongly felt in aspects ranging from binary logic systems to real-life applications that we may encounter daily. Delving into these applications provides a profound understanding of the role and pivotal nature of OR Gates in shaping the digital world.

OR Gate Applications in Different Binary Logic Systems

The application of OR Gates in binary logic systems primarily pertains to the manipulation and processing of binary numbers, which are routinely used in computing. Understanding this sphere of OR Gate application is crucial in computer science as it forms the foundation for various computing processes and digital systems.

Adding Binary Numbers: Perhaps the most frequent occurrence of OR Gates in binary logic systems is in the addition of binary numbers. In a binary addition circuit, or binary adder, OR Gates form part of the carry logic. When adding binary numbers, we generally add each pair of corresponding bits in the two numbers. As a result of this addition, we either get a sum (S) or a carry (C). For each of the four possible input pairs, i.e., (0,0), (0,1), (1,0), and (1,1), the sum bit is produced by an exclusive OR (XOR) operation and the carry by an AND operation (together known as a half-adder); an OR Gate then combines the partial carries when half-adders are chained into a full adder, as shown in the full-adder sketch later in this article. For example:

def binary_addition(A, B):
    S = A ^ B
    C = A & B
    return (S, C)

# Test the function
assert(binary_addition(0, 1) == (1, 0))
assert(binary_addition(1, 1) == (0, 1))
assert(binary_addition(0, 0) == (0, 0))
assert(binary_addition(1, 0) == (1, 0))

Binary Encoders and Decoders: OR Gates play a principal role in the formation and functioning of binary encoders and decoders - devices that are extensively applied in various aspects of digital systems, especially in memory and data transmission. An encoder is a circuit that transforms a specific binary input into a uniquely associated binary output.
Conversely, a decoder reverses this operation, converting the binary output back into the original binary input. OR Gates usually facilitate this conversion process at the encoder and decoder stages. Consider an 8-to-3 binary encoder, which means it takes 8 inputs and produces a 3-bit binary code. When any one of the 8 inputs is high (1), we get a specific 3-bit binary number at the output. This output number is usually the binary equivalent of the decimal number of the high input. We carry out this encoding with the help of OR Gates.

Daily Life Instances of OR Gate Applications

Outside the computer science field, OR Gates are implemented broadly in numerous practical aspects of our daily life. A fascinating aspect of understanding OR Gate applications is the perception of how theoretical digital logic impacts our real-world lives.

Alarm Systems: One of the most tangible manifestations of OR Gate application can be seen in various types of alarm systems. Essentially, an alarm system operates on the principle of an OR Gate, where there are multiple sensors or triggers (inputs), and if any of them or all of them are tripped (1), the alarm activates (1). For example, a home security system has several sensors installed on doors and windows. If any door or window is opened (input is 1), the alarm is triggered (output is 1). If all doors and windows are securely closed (all inputs are 0), the alarm stays silent (output is 0).

Electronic Switch Systems: Another common application of OR Gates in our daily life is seen in electronic switch systems. In these systems, an OR Gate facilitates the control of a device from multiple points. An illustration of this is a room with a lamp that can be switched on from two points - let's say, two entrances. With this arrangement, the lamp can be turned on from any entrance, or both, similar to the output of an OR Gate.

Search Engines: Even when surfing the web, you're constantly benefiting from OR Gate logic. When you use multiple keywords in your search query separated by 'OR', the search engine will return all entries that contain at least one of your keywords. A search query for "apples OR oranges" in a digital database yields results which contain "apples", "oranges", or both. The 'OR' operator extends the search to show more results, embodying the principle of an OR Gate.

Through these examples, you can get a glimpse into the profusion of OR Gate applications both in computer science and everyday life. Understanding these applications helps make sense of theoretical principles and underscores the relevance and influence of OR Gates in shaping our digital lives.
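Before moving on to exclusive OR gates, here is a minimal sketch of the 8-to-3 encoder described earlier, written purely with OR operations. It assumes that exactly one input line is high (the usual simplifying assumption for a plain, non-priority encoder); the function name and line numbering are illustrative:

def encoder_8_to_3(d):
    # d is a list of 8 input lines; exactly one of them is assumed to be 1
    y0 = d[1] or d[3] or d[5] or d[7]   # least significant output bit
    y1 = d[2] or d[3] or d[6] or d[7]
    y2 = d[4] or d[5] or d[6] or d[7]   # most significant output bit
    return (y2, y1, y0)

inputs = [0] * 8
inputs[5] = 1                                # input line 5 is high
assert encoder_8_to_3(inputs) == (1, 0, 1)   # binary 101 = decimal 5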
Exclusive OR Gate and Its Role in Computer Architecture

In computer architecture, the role of the Exclusive OR Gate, often referred to as the XOR Gate, is significant. The XOR Gate is a type of logic gate that outputs true or '1' only when the number of true inputs is odd. An XOR Gate behaves like an OR Gate but with a key difference which we'll explore in further sections.

Explanation of the Exclusive OR Gate Concept

The Exclusive OR Gate, or XOR Gate as it's commonly known, is a digital logic gate that takes two inputs and returns a high output (1) only when exactly one of its inputs is high. If both inputs are low (0) or both are high (1), the output is low (0). This behaviour makes the XOR Gate unique and plays a crucial role in certain types of computational operations, notably in binary addition and subtraction. Mathematically, the operation performed by an XOR Gate can be represented with the formula \( Y = A \oplus B \), where \( \oplus \) signifies the XOR operation and the variables \( Y, A, \) and \( B \) are the output and inputs respectively. The truth table of an XOR Gate looks like this:

A   B   Y
0   0   0
0   1   1
1   0   1
1   1   0

When used in complex digital circuits or computing systems, XOR Gates often ensure data integrity by making certain that data conforms to certain criteria.

The Difference Between an Exclusive OR Gate and an OR Gate

At the surface, an XOR Gate might resemble an OR Gate in certain aspects. However, an essential difference lies in their output behaviour when both inputs are high. An OR Gate will give a high output (1) when either or both of its inputs are high, whereas an XOR Gate will return a low output (0) when both of its inputs are high. This exclusive behaviour of an XOR Gate is what gives it its name and sets it apart from an OR Gate. So, to summarise, for an OR Gate the output is \( Y = A + B \), while for an XOR Gate it is \( Y = A \oplus B \), which means \( A' \cdot B + A \cdot B' \), where \( A' \) and \( B' \) are the inverses of \( A \) and \( B \) respectively.

Examples of Exclusive OR Gate Uses

In computer architecture and electronics, XOR Gates are typically used in several ways:
• Arithmetic: XOR Gates are crucial components in adders and subtractors. The XOR Gate's unique ability to output high only when the inputs are different enables it to perform binary addition.
• Control algorithms: XOR Gates are used in control systems to decide a course of action based on multiple input conditions. They ensure that conditions are exclusive, i.e., only one condition should be true for an action to be taken.
• Error detection and correction: In computing and data transmission, XOR Gates are commonly employed in parity generation and checking for error detection and correction in data communication.

Consider a simplistic 1-bit binary adder design. Two binary values, A and B, are the inputs to be added together. The XOR Gate functions to output the SUM of the binary values, and along with an AND Gate, helps derive the CARRY value to the next adder, should there be one. This very basic function proves elemental to the composition of more complex adding systems within digital computers.

An example of XOR Gates in control algorithms is a railway switching system. Here, an XOR function ensures that only one path can be selected at a time, thus avoiding collisions by ensuring mutually exclusive conditions.

The usage of XOR Gates is built on the principle of exclusivity and giving high importance to the state of both inputs. Their applications in various computing and electronic systems underline their importance in both theoretical and practical aspects of computer science.
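To make the 1-bit adder described above concrete, here is a minimal full-adder sketch: XOR gates produce the sum bit, AND gates produce the two possible partial carries, and an OR gate combines them into the carry out (the function name and the use of Python's bitwise operators are purely illustrative):

def full_adder(a, b, carry_in):
    # Sum bit: two cascaded XOR gates
    s = (a ^ b) ^ carry_in
    # Carry out: an OR gate combining the two ways a carry can arise
    carry_out = (a & b) | (carry_in & (a ^ b))
    return s, carry_out

assert full_adder(1, 1, 0) == (0, 1)   # 1 + 1 = 10 in binary
assert full_adder(1, 1, 1) == (1, 1)   # 1 + 1 + 1 = 11 in binary
assert full_adder(1, 0, 0) == (1, 0)

Chaining such full adders, one per bit position, yields the ripple-carry adders found in real arithmetic units.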
Discovering OR Gate Examples and Uses

The ubiquity of OR Gates in multiple spheres reiterates their significance in our digital world. Knowing their examples and uses not only strengthens the understanding of digital logic but also highlights their role in the field of computer science and beyond. By examining their practical applications and learning to build one ourselves, the understanding of OR Gates becomes more enjoyable and relatable.

Practical Examples of OR Gate Uses

OR Gates, owing to their simplicity and key role in Boolean algebra, find applications in an extraordinarily wide range of areas. Their use is not limited to purely theoretical and computer science domains but extends to practical, everyday situations as well.
• Security Systems: OR Gates are an integral part of home and office security systems. They function as part of the alarm-triggering mechanism. If any one (or all) of the sensors detects a breach, the alarm goes off. This is a classic example where an OR Gate's functionality is utilised, with the sensors providing the inputs and the alarm system acting as the output.
• Digital Electronics: Apart from various specialised circuits, OR Gates are also used in general purpose computing devices and digital electronics, where logic gates are needed for the system to function correctly.
• Control Systems: In a typical control system, an OR Gate can be used to issue a signal when any one of the input conditions is fulfilled. For instance, in a heating system, if any of the room temperature sensors detects a temperature below the set limit, the heating system gets triggered.
• Communication Systems: In digital communication systems, OR Gates are part of the elaborate circuitry that enables data transmission and reception between devices.

For instance, consider a control system for automated farm irrigation. An OR Gate can be utilised here to initiate the irrigation system when any one of the following conditions stands true:
1. The moisture sensors detect insufficient moisture levels in the soil.
2. The system runs on a schedule and the set time for watering is reached.

Experimenting with OR Gates: DIY Projects

There is no better way to understand an OR Gate than to connect one ourselves. It's surprisingly straightforward to do as a DIY project, using readily available components and tools. Let's embark on an exciting journey to create an OR Gate circuit!

Materials Needed for DIY OR Gate Projects

For this project, you will need the following materials:
• Two Switches: These will represent the inputs for our OR Gate.
• An LED: This will serve as the output indicator.
• Resistors: One for each switch - these protect the switches from high current. Another one for the LED.
• Battery: to power the circuit.
• Wires: to connect the components together.
• Breadboard: to aid in constructing the circuit without any need for soldering.

Once you have gathered all the necessary materials, you're ready to start building your OR Gate circuit.

Easy Steps to Create Your Own OR Gate

Now that all the components are ready, let's start with the construction of the OR Gate. Follow these steps:
1. Connect one end of each resistor to the positive terminal of the battery. The other end of each resistor is connected to one terminal of each switch.
2. Connect the other terminal of both switches to the anode (longer leg) of the LED. This creates two separate paths for the current, representing an OR Gate.
3. Now, connect the cathode (shorter leg) of the LED to the negative terminal of the battery to complete the circuit.

By following these steps, you have built an OR Gate using switches and an LED. The LED will light up when switch A or B or both are closed, representing input state '1'. If both the switches are open, representing input state '0', the LED stays off. This is an accurate physical representation of the truth table of an OR Gate. So, grab your components, roll up your sleeves, and start experimenting. Not only is it a fantastic way to learn and appreciate OR Gate applications, but it's also a fun and rewarding project, bound to boost your understanding of this intriguing sphere of computer science.
Exploring OR Gate in Different Computer Systems

The world of computer systems is filled with diverse applications of Boolean logic, with OR Gates being key components of such systems. An OR Gate, a fundamental logic gate in computer science, governs various aspects of computational logic and data processing in numerous computer systems. Specialised systems, such as controllers and processors, as well as general-purpose computing devices, all leverage the principles of OR Gates to effectively function and produce the desired outputs.

Using OR Gate in Simplifying Boolean Expressions

The primary application of an OR Gate in computer systems is its use in simplifying Boolean expressions. A Boolean expression, an elementary concept in digital electronics and computer programming, is a logical statement that can only take two values: true or false, or in digital circuit terms, logic high (1) or low (0). A Boolean expression, depending on the complexity, may contain numerous logical operators, including AND (&), OR (+), and NOT (¬), among others. The simplification of these expressions is a routine requirement in the design and analysis of digital circuits, where OR Gates are frequently utilised.

For instance, consider the Boolean expression \( P = A + B \cdot C \). This expression represents a logic circuit that can be rewritten using the Distributive Law of Boolean algebra as \( P = (A + B) \cdot (A + C) \). This transformation, which involves the OR operation, can be pivotal in reducing the complexity of the circuit represented by the expression, thus leading to enhanced efficiency of the circuit.

# Python Code
# Given binary values of A, B, and C
A, B, C = 1, 0, 1
# Evaluating original Boolean expression
P = A or (B and C)
# Evaluating simplified Boolean expression
S = (A or B) and (A or C)
# Verifying equivalence of original and simplified expressions
assert(P == S)

In the Python code above, the original and rewritten Boolean expressions have been compared and found equivalent, showcasing the usage of the OR gate in manipulating Boolean expressions.

The Role of OR Gate in Digital Clocks

Digital clocks, omnipresent in our modern world, often utilise OR Gates in their internal circuitry. OR Gates play an integral part in the routing of logic signals within the digital clock circuit, enabling the correct display of time. A digital clock generally consists of multiple components such as a microcontroller, a crystal oscillator, and seven-segment displays. The microcontroller acts as the brain of the clock, processing inputs and generating outputs to drive the seven-segment displays, and OR Gates play a crucial role in these processes.

OR Gates in a digital clock circuit often come into action for selecting the correct segments of a seven-segment display. For instance, to display the number '0', all segments except the middle one should be active. To decide which segments to activate, the microcontroller takes the binary representation of '0' as input and performs OR operations on specific bits following predefined logic rules. The OR operation results dictate the segments to be activated for displaying '0'.
# Python Code
def display_zero():
    # Binary representation of '0' for a seven-segment display
    zero = [1, 1, 1, 1, 1, 1, 0]
    # OR operation results
    segments = [bit or zero[i] for i, bit in enumerate(zero)]
    return segments

# Test the function
assert(display_zero() == [1, 1, 1, 1, 1, 1, 0])

The Python code provided illustrates how OR operations can be used to determine the segments to be activated for displaying '0' on a seven-segment display of a digital clock. Furthermore, OR Gates also aid in clock pulse generation, a critical aspect of digital clock operation. The clock pulse, essentially a square wave, is used to synchronise the timing of all operations in the clock circuit. An OR Gate can be a part of the oscillator circuit that produces the clock pulse, and hence plays a substantive role in the functioning of a digital clock.

From the above illustration, it is evident how frequently OR Gates are used in the realm of computer systems, be it in simplifying Boolean expressions or in practical applications like a digital clock system. This reinforces their theoretical and practical significance in the dynamics of digital systems, especially in the world of computer science.

OR Gate - Key takeaways
• OR Gate Truth Table: A logic tool that depicts the behaviour of an OR gate in binary terms, where '0' represents a false state and '1' a true state. The gate will output a '1' for any combination of inputs in a high state, and '0' only when all inputs are '0'.
• Applications of OR Gate: Used widely in the realm of computer science and real-life situations. Examples include the operation of a burglar alarm, search functions in databases or web pages, light systems controlled by multiple switches, adding binary numbers, the functioning of binary encoders and decoders, and alarm systems.
• Exclusive OR Gate (XOR Gate): A digital logic gate that outputs true or '1' only when the number of true inputs is odd. It behaves like an OR gate but differs in output when both inputs are high, offering a low output (0) in this case.
• Difference Between OR Gate and XOR Gate: An OR Gate produces a high output when either or both of its inputs are high, whereas an XOR Gate produces a low output when both inputs are high.
• Uses of XOR Gate: Essential in the formation and operation of adders and subtractors. Other applications include control algorithms for decision-making based on multiple input conditions and error detection and correction in data transmission.

Frequently Asked Questions about OR Gate

What is the function of an OR Gate in computer science?
An OR Gate in computer science is a basic digital logic gate that outputs true or high when at least one of its inputs is true or high. It is used in creating digital logic circuits.

What are the practical applications of an OR Gate in computer science?
OR Gates are used in computing systems for data processing, error detection and correction code systems, decision-making processes, memory addressing and storage, and in creating complex logical functions when combined with other gates.

How does an OR Gate operate in a computer system?
An OR gate in a computer system operates by receiving two binary inputs. If either or both of the inputs are '1', the OR gate outputs a '1'. If both inputs are '0', it outputs a '0'.

What are the primary components of an OR Gate in computer science?
The primary components of an OR gate in computer science are at least two inputs, a single output, and a logic circuit that signals an output of 'true' or '1' if at least one input is 'true' or '1'.

What is the logic symbol and truth table of an OR Gate in computer science?
The logic symbol for an OR Gate is a curved 'D' shape or gate with inputs entering from the left and an output on the right. The truth table is: if both inputs are 0, the output is 0; if either or both inputs are 1, the output is 1.
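One of the answers above mentions error detection and correction. As a closing illustration, here is a minimal sketch of even-parity generation and checking, which in hardware is typically built from a chain of XOR gates (the helper name is illustrative):

def parity_bit(bits):
    # Even parity: the parity bit makes the total number of 1s even
    p = 0
    for b in bits:
        p ^= b          # chain of XOR gates
    return p

data = [1, 0, 1, 1]
sent = data + [parity_bit(data)]   # transmit the data plus its parity bit
assert parity_bit(sent) == 0       # no error detected
sent[2] ^= 1                       # flip one bit in transit
assert parity_bit(sent) == 1       # the single-bit error is detected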
{"url":"https://www.studysmarter.co.uk/explanations/computer-science/computer-organisation-and-architecture/or-gate/","timestamp":"2024-11-10T11:30:20Z","content_type":"text/html","content_length":"455526","record_id":"<urn:uuid:83815e9c-6bd6-4dcb-9de4-da31855ae96a>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00782.warc.gz"}
Data Types

Tables can store data in the following generic forms, which are supported with specific native data types as listed in the subsequent table.

Binary - Variable length binary data. Maximum storage size is 2 GB.

Boolean - TRUE or FALSE, also expressed as 1 or 0.

Dates - The datetime type represents data that contains a calendar date (day, month, year) and time (hour, minute, second, millisecond) ranging from January 3, 0001, 00:00:00.000 to December 31, 9999, 23:59:59.999.

Geometry - Vector data that defines the shape and location of vector objects. Geometry can be stored in Manifold-specific geometry data types for 3D and 2D geometry, with support also for OGC (Open Geospatial Consortium) geometry. Manifold geometry types can mix area, line, and point objects, including multi-branched versions of such objects, and can also include in the mix objects created from curvilinear segments such as spline arcs.

Numbers - Numeric data can be either scalar numbers, a single number in the field, or vector numeric data, an ordered set of one, two, three, or four scalar numbers per field. Scalar numbers are simply called numbers for short. Vector values are called vectors, with the individual numbers in the vector set called vector 0, vector 1, vector 2, and vector 3. A wide range of data types support storage of scalar numbers, for signed or unsigned, floating point or integer numbers, from 8 to 64 bits per number. Vectors can be a set of one, two, three, or four numbers. All of the numbers in a vector are the same numeric data type. For example, we could have a vector that is a set of three float64 numbers, but we could not have a vector that is a set of a uint8, a float64, and an int32.

Text - Text data is sequences of characters, also called strings. Text is stored as variable length Unicode or non-Unicode text. An individual text field has 2 GB maximum storage size for a total of 1 GB Unicode characters or 2 GB non-Unicode characters. Text types always can store variable length text: to constrain a field to allow only a specific number of text characters, add a constraint on that field in the table's schema.

Tiles - Tiles save raster data as a rectangular array of numbers of specified size, such as 128 x 128, where each number in the array is an allowed Manifold numeric type, including vector numeric types such as uint8x3.

UUIDs - A Universally Unique IDentifier (UUID) is a 128 bit value represented by a text string of lower-case hexadecimal digits in standard form of groups of digits separated by hyphens. Each UUID is unique.

Data Types

The following specific data types implement the above generic forms and can be used in a table:

boolean - TRUE or FALSE, also expressed as 1 or 0
datetime - A calendar date (day, month, year) and time (hour, minute, second, millisecond) ranging from January 3, 0001, 00:00:00.000 to December 31, 9999, 23:59:59.999
float32 - 32 bit floating point number with a range of 1.5 x 10^-45 to 3.4 x 10^38. When using floating point numbers, resist the temptation to save space by using float32. At some future point it is easy to forget that a floating point number is a float32 and does not have the full precision afforded by float64. Use float64 instead.
float32x3 - Vectors of 2, 3 or 4 numbers, each of which is a float32
float64 - 64 bit floating point number with a range of 5.0 x 10^-324 to 1.7 x 10^308. When using floating point numbers it is a good idea to use float64, since float64 provides full precision. Manifold uses float64 internally for maximum precision.
float64x3 - Vectors of 2, 3 or 4 numbers, each of which is a float64
geom - Manifold geometry. geom is the native Manifold geometry type. Manifold geometry types can mix area, line, and point objects, including multi-branched versions of such objects, and can also include in the mix objects created from curvilinear segments such as spline arcs. Manifold geometry supports both 3D and 2D geometry.
geommfd - The equivalent of geom written in binary for storage in external databases.
geomwkb - OGC (Open Geospatial Consortium) WKB ("Well Known" Binary) geometry, limited to the OGC Simple Feature specification of point, line, and polygon objects.
int8 - 8 bit signed integer for values from -128 to 127
int8x3 - Vectors of 2, 3 or 4 numbers, each of which is an int8
int16 - 16 bit signed integer for values from -32,768 to 32,767
int16x3 - Vectors of 2, 3 or 4 numbers, each of which is an int16
int32 - 32 bit signed integer for values from -2,147,483,648 to 2,147,483,647
int32x3 - Vectors of 2, 3 or 4 numbers, each of which is an int32
int64 - 64 bit signed integer for values from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807
int64x3 - Vectors of 2, 3 or 4 numbers, each of which is an int64
nvarchar - Variable length Unicode text using the UNICODE UCS-2 character set, two bytes per character. 2 GB maximum storage size for a total of 1 GB characters.
tile - A Manifold array of numbers of specified size, such as 128 x 128, where each number in the array is an allowed Manifold numeric type, including vector numeric types such as uint8x3. Tiles are used to store raster data.
uint8 - 8 bit unsigned integer for values from 0 to 255
uint8x3 - Vectors of 2, 3 or 4 numbers, each of which is a uint8
uint16 - 16 bit unsigned integer for values from 0 to 65,535
uint16x3 - Vectors of 2, 3 or 4 numbers, each of which is a uint16
uint32 - 32 bit unsigned integer for values from 0 to 4,294,967,295
uint32x3 - Vectors of 2, 3 or 4 numbers, each of which is a uint32
uint64 - 64 bit unsigned integer for values from 0 to 18,446,744,073,709,551,615
uint64x3 - Vectors of 2, 3 or 4 numbers, each of which is a uint64
uuid - Universally Unique IDentifier (UUID), a 128 bit value represented by a text string of lower-case hexadecimal digits in standard form of groups of digits separated by hyphens. Each UUID is unique.
varbinary - Variable length binary data. Maximum storage size is 2 GB.
varchar - Variable length non-Unicode text, one byte per character. 2 GB maximum storage size for a total of 2 GB characters.

Tooltips on table column headers will show the name of the field as well as the data type.

Layers Pane and Data Types

The Show Type option shows data types for each field in the Layers pane. The Layers pane is a convenient way to have an "always open" display, if desired, of data types for all fields in a table.

Automatic Conversions and Rounding

Automatic conversions, for example using CAST, always round down from FLOATxxx to INTxxx data types, so a floating point number that is 39.95 will be CAST into an integer as 39, not as 40. To convert a floating point number to an integer using conventional rounding up and down, first use the Round function to round the floating point number up or down and then convert.

VARCHAR and NVARCHAR Data Types for Databases

Dataports for databases expose all text fields as NVARCHAR (Unicode) even if they are stored as VARCHAR. Attempting to create a VARCHAR field will create it as VARCHAR on the database, but the field will look like an NVARCHAR data type in Manifold.
What VARCHAR means varies between data sources, and converting between different meanings frequently loses data, so this convention helps to preserve the data. Consider an example: in Manifold SQL, VARCHAR means 'characters in the currently active codepage, whatever that currently active codepage may be.' However, in databases VARCHAR usually means 'characters in the codepage associated with the field / table / database'. If we connect to a database, we can ask it to return data as VARCHAR, and the database might return data in a German codepage if a German code page was used for that field. If we then try to use that data as VARCHAR on an English system, the characters will be interpreted wrongly. That will affect both correct display of the characters as well as comparisons and orderings. Treating the data as NVARCHAR, and doing the conversion on the fly, avoids such wrong interpretation. Here is why: different client systems use different ways of dealing with such issues. For example, Manifold Release 8, an older product, allows setting a codepage for the field. However, the end result of such adaptations is to convert the data in all codepages different from the default code page into Unicode and then handle data as Unicode. If Unicode ends up being the intermediate form, it is better, as newer Manifold products do, to simply make the conversion as close to the data as possible. See related comments in the DBMS Data Sources - Notes topic.

Why "N"varchar? Why the n prefix for Unicode versions of varchar? Before Unicode got traction, character sets for different languages were known as National Character data sets. MySQL still calls these types national character and national varchar. Manifold and SQL Server call these nvarchar (SQL Server adds an nchar type to match SQL Server's char type), and Oracle calls them nchar and nvarchar2. PostgreSQL and DB2 have char and varchar without "N" versions. See the Collations topic for more on how national languages are handled.

Computed fields - Fields in tables can be computed fields, which automatically calculate a value for the field based on an expression. See the Example: Add a Computed Field to a Table topic for a step by step example.

Geometry collections - Reading geometry collection values automatically merges individual values of the same underlying type used in Manifold geometry, such as area, line or point, with differences between subtypes such as line and multiline being ignored. The result of the merge is returned. This applies to all data which support geometry collection values, including WKB, GeoJSON, JSON, native geometry in database-specific formats, and so on. Reading geometry collection values with individual values of mixed underlying types automatically converts areas to lines and lines to points in order to return all coordinates. Example: reading a geometry collection with an area and several points will return a multipoint containing all coordinates of all individual values.

3D conversions - Geometry values with mixed 2D and 3D coordinates in GML, GeoJSON, and TopoJSON are automatically converted to 3D with 2D coordinates padded with zeros.
{"url":"https://manifold.net/doc/mfd9/data_types.htm","timestamp":"2024-11-06T10:43:09Z","content_type":"text/html","content_length":"37984","record_id":"<urn:uuid:074ef517-2b34-414f-8f31-8f49543b5ad3>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00661.warc.gz"}
Java Program 13: Java Program To Find If Given Numbers Are Coprime

Problem: Write a Java program to find if two given numbers are coprime.

Definition of coprime: As per Wikipedia,
• In number theory, two integers a and b are said to be relatively prime, mutually prime, or coprime (also written co-prime) if the only positive integer (factor) that divides both of them is 1. Consequently, any prime number that divides one does not divide the other. This is equivalent to their greatest common divisor being 1.
• As specific examples, 14 and 15 are coprime, being commonly divisible only by 1, while 14 and 21 are not coprime, because they are both divisible by 7.
• Read more about coprime numbers here.

• First we need to find the divisors of the input numbers. We saw how to find the divisors of a number in a previous post.
• We will store the divisors of each number in a separate collection.
• We will find the common elements of both collections.
• If the only common element is 1, the given numbers are coprime.

Java code: see the sketch at the end of this post.

Note: I have not handled all the scenarios, like when both input numbers are the same, or when the input numbers are one or negative, etc. You can add validation to the above program to make it more robust. I have just shared the logic. If you have any other way of solving the above problem, please comment. It is always better to know more approaches. If you like my posts, please like, comment, share and subscribe.
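Here is a minimal sketch that follows the steps described above (find the divisors, keep them in collections, intersect the collections, check that 1 is the only common element). It is not the original post's listing; the class and method names are my own.

import java.util.ArrayList;
import java.util.List;

public class CoprimeChecker {

    // Collect all positive divisors of n in a list
    private static List<Integer> divisors(int n) {
        List<Integer> result = new ArrayList<>();
        for (int i = 1; i <= n; i++) {
            if (n % i == 0) {
                result.add(i);
            }
        }
        return result;
    }

    // Two numbers are coprime if 1 is their only common divisor
    public static boolean areCoprime(int a, int b) {
        List<Integer> divisorsOfA = divisors(a);
        List<Integer> divisorsOfB = divisors(b);
        divisorsOfA.retainAll(divisorsOfB); // keep only the common divisors
        return divisorsOfA.size() == 1 && divisorsOfA.get(0) == 1;
    }

    public static void main(String[] args) {
        System.out.println(areCoprime(14, 15)); // true
        System.out.println(areCoprime(14, 21)); // false
    }
}

As in the note above, inputs that are zero, negative, or equal are not validated here; a production version should check them first.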
{"url":"https://makeseleniumeasy.com/2017/10/05/java-program-13-java-program-to-find-if-given-numbers-are-coprime/","timestamp":"2024-11-09T09:41:49Z","content_type":"text/html","content_length":"40228","record_id":"<urn:uuid:bc221145-22bc-45a7-ac33-84e2d6c50b71>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00896.warc.gz"}
Disjoint-set Data Structures

Many times the efficiency of an algorithm depends on the data structures used in the algorithm. A wise choice in the structure you use in solving a problem can reduce the time of execution, the time to implement the algorithm and the amount of memory used. During SRM competitions we are limited to a time limit of 2 seconds and 64 MB of memory, so the right data structure can help you remain in competition. While some Data Structures have been covered before, in this article we'll focus on data structures for disjoint sets.

The problem

Let's consider the following problem: In a room are N persons, and we will say that two persons are friends if they are directly or indirectly friends. If A is a friend of B, and B is a friend of C, then A is a friend of C too. A group of friends is a group of persons where any two persons in the group are friends. Given the list of persons that are directly friends, find the number of groups of friends and the number of persons in each group.

For example, N = 5 and the list of friends is: 1-2, 5-4, and 5-1. Here is the figure of the graph that represents the groups of friends. 1 and 2 are friends, then 5 and 4 are friends, and then 5 and 1 are friends; but 1 is a friend of 2, therefore 5 and 2 are friends, etc. In the end there are 2 groups of friends: one group is {1, 2, 4, 5}, the other is {3}.

The solution

This problem can be solved using BFS, but let's see how to solve this kind of problem using data structures for disjoint sets. First of all: a disjoint-set data structure is a structure that maintains a collection S1, S2, S3, …, Sn of dynamic disjoint sets. Two sets are disjoint if their intersection is null. For example, set {1, 2, 3} and set {1, 5, 6} aren't disjoint because they have {1} in common, but the sets {1, 2, 3} and {5, 6} are disjoint because their intersection is null. In a data structure of disjoint sets, every set contains a representative, which is one member of the set.

Let's see how things will work with sets for the example of the problem. The groups will be represented by sets, and the representative of each group is the person with the biggest index. At the beginning there are 5 groups (sets): {1}, {2}, {3}, {4}, {5}. Nobody is anybody's friend and everyone is the representative of his or her own group. The next step is that 1 and 2 become friends; this means the group containing 1 and the group containing 2 will become one group. This will give us these groups: {1, 2}, {3}, {4}, {5}, and the representative of the first group will become 2. Next, 5 and 4 become friends. The groups will be {1, 2}, {3}, {4, 5}. And in the last step 5 and 1 become friends and the groups will be {1, 2, 4, 5}, {3}. The representative of the first group will be 5 and the representative of the second group will be 3. (We will see why we need representatives later.) At the end we have 2 sets, the first set with 4 elements and the second with one, and this is the answer for the problem example: 2 groups, one group of 4 and one group of one.

Perhaps now you are wondering how you can check if 2 persons are in the same group. This is where the use of the representative elements comes in. Let's say we want to check if 3 and 2 are in the same group; we will know this if the representative of the set that contains 3 is the same as the representative of the set that contains 2. One representative is 5 and the other one is 3; therefore 3 and 2 aren't in the same group of friends.
Some operations Let’s define the following operations: • CREATE-SET(x) – creates a new set with one element {x}. • MERGE-SETS(x, y) – merge into one set the set that contains element x and the set that contains element y (x and y are in different sets). The original sets will be destroyed. • FIND-SET(x) – returns the representative or a pointer to the representative of the set that contains element x. The solution using these operations Let’s see the solution for our problem using these operations: Read N; for (each person x from 1 to N) CREATE-SET(x) for (each pair of friends (x y) ) if (FIND-SET(x) != FIND-SET(y)) MERGE-SETS(x, y) Now if we want to see if 2 persons (x, y) are in same group we check if FIND-SET(x) == FIND-SET(y). We will analyze the running time of the disjoint-set data structure in terms of N and M, where N is the number of times that CREATE-SET(x) is called and M is the total number of times that CREATE-SET (x), MERGE-SETS(x, y) and FIND-SET(x) are called. Since the sets are disjoint, each time MERGE-SETS(x, y) is called one set will be created and two will be destroyed, giving us one less set. If there are n sets after n-1 calls of MERGE-SETS(x,y) there will remain only one set. That’s why the number of MERGE-SETS(x,y) calls is less than or equal to the number of CREATE-SET(x) operations. Implementation with linked lists One way to implement disjoint set data structures is to represent each set by a linked list. Each element (object) will be in a linked list and will contain a pointer to the next element in the set and another pointer to the representative of the set. Here is a figure of how the example of the problem will look like after all operations are made. The blue arrows are the pointers to the representatives and the black arrows are the pointers to the next element in the sets. Representing sets with linked lists we will obtain a complexity of O(1) for CREATE-SET(x) and FIND-SET(x). CREATE-SET(x) will just create a new linked list whose only element (object) is x, the operation FIND-SET(x) just returns the pointer to the representative of the set that contains element (object) Now let’s see how to implement the MERGE-SETS(x, y) operations. The easy way is to append x’s list onto the end of y’s list. The representative of the new set is the representative of the original set that contained y. We must update the pointer to the representative for each element (object) originally on x’s list, which takes linear time in terms of the length of x’s list. It’s easy to prove that, in the worst case, the complexity of the algorithm will be O(M^2) where M is the number of operations MERGE-SETS(x, y). With this implementation the complexity will average O(N) per operation where N represents the number of elements in all sets. The “weighted union heuristic” Let’s see how a heuristic will make the algorithm more efficient. The heuristic is called “a weighted-union heuristic.” In this case, let’s say that the representative of a set contains information about how many objects (elements) are in that set as well. The optimization is to always append the smaller list onto the longer and, in case of ties, append arbitrarily. This will bring the complexity of the algorithm to O(M + NlogN) where M is the number of operations (FIND-SET, MERGE-SETS, CREATE-SETS) and N is the number of operations CREATE-SETS. I will not prove why the complexity is this, but if you are interested you can find the proof in the resources mentioned at the end of the article. 
So far we have reached an algorithm that solves the problem in O(M + NlogN), where N is the number of persons and M is the number of friendships, with a memory of O(N). Still, BFS will solve the problem in O(M + N) time and O(N + M) memory. We can see that we have optimized the memory but not the execution time.

Next step: rooted trees

The next step is to see what we can do for a faster implementation of disjoint set data structures. Let's represent sets by rooted trees, with each node containing one element and each tree representing one set. Each element will point only to its parent, and the root of each tree is the representative of that set and its own parent. Let's see, in steps, how the trees will look for the example from the problem above.

Step 1: nobody is anybody's friend. We have 5 trees and each tree has a single element, which is the root and the representative of that tree.

Step 2: 1 and 2 are friends, MERGE-SETS(1, 2). The operation made is MERGE-SETS(1, 2). We have 4 trees; one tree contains 2 elements and has the root 1. The other trees have a single element.

Step 3: 5 and 4 are friends, MERGE-SETS(5, 4). The operation made is MERGE-SETS(5, 4). Now we have 3 trees, 2 trees with 2 elements and one tree with one element.

Step 4: 5 and 1 are friends, MERGE-SETS(5, 1). The operation made is MERGE-SETS(5, 1). Now we have 2 trees; one tree has 4 elements and the other one has only one element.

As we can see, so far the algorithm using rooted trees is no faster than the algorithm using linked lists.

Two heuristics

Next we will see how, by using two heuristics, we will achieve the asymptotically fastest disjoint set data structure known so far, which is almost linear in terms of the number of operations made. These two heuristics are called "union by rank" and "path compression." The idea in the first heuristic, "union by rank", is to make the root of the tree with fewer nodes point to the root of the tree with more nodes. For each node, we maintain a rank that approximates the logarithm of the sub-tree size and is also an upper bound on the height of the node. When MERGE-SETS(x, y) is called, the root with smaller rank is made to point to the root with larger rank. The idea in the second heuristic, "path compression", which is used in the operation FIND-SET(x), is to make each node on the find path point directly to the root. This will not change any ranks.

To implement a disjoint set forest with these heuristics, we must keep track of ranks. With each node x, we keep the integer value rank[x], which is greater than or equal to the number of edges in the longest path between node x and a leaf below it. When CREATE-SET(x) is called, the initial rank[x] will be 0. When a MERGE-SETS(x, y) operation is made, the root of higher rank will become the parent of the root of lower rank – or, in case of a tie, we arbitrarily choose one of the roots as the parent and increment its rank. Let's see how the algorithm will look. Let P[x] = the parent of node x.

CREATE-SET(x)
    P[x] = x
    rank[x] = 0

MERGE-SETS(x, y)
    PX = FIND-SET(x)
    PY = FIND-SET(y)
    If (rank[PX] > rank[PY])
        P[PY] = PX
    Else
        P[PX] = PY
        If (rank[PX] == rank[PY])
            rank[PY] = rank[PY] + 1

And the last operation, FIND-SET(x), looks like:

FIND-SET(x)
    If (x != P[x])
        P[x] = FIND-SET(P[x])
    Return P[x]

Now let's see how the heuristics helped the running time. If we use only the first heuristic, "union by rank", then we will get the same running time we achieved with the weighted union heuristic when we used lists for the representation.
When we use both "union by rank" and "path compression," the worst-case running time is O(m α(m,n)), where α(m,n) is the very slowly growing inverse of Ackermann's function. In applications α(m,n) <= 4, which is why we can say that the running time is linear in terms of m in practical situations. (For more details on Ackermann's function or the complexity analysis, see the resources mentioned at the end of the article.)

Back to the problem

The problem from the beginning of the article is solvable in O(N + M) time and O(N) memory using a disjoint-set data structure. The difference in execution time is not big compared to solving the problem with BFS, but we don't need to keep the edges of the graph in memory. Now suppose the problem were like this: in a room are N persons and you have to handle Q queries. A query is of the form "x y 1," meaning that x becomes friends with y, or "x y 2," meaning that we are asked to output whether x and y are in the same group of friends at that moment in time. In this case the solution with a disjoint-set data structure is the fastest, giving a complexity of O(N + Q).

Disjoint-set data structures are a helpful tool for use in different algorithms, or even for solving problems in an SRM. They are efficient and use a small amount of memory. They are useful in applications like "Computing the shorelines of a terrain," "Classifying a set of atoms into molecules or fragments," "Connected component labeling in image analysis," and others.

To practice what you've learned, try to solve GrafixMask – the Division 1 500 from SRM 211. The idea is to keep track of all the blocks and consider each grid point as a node. Next, take all the nodes that aren't blocked, let (x, y) be the coordinate of the node to the left, right, down or up, and if (x, y) is not blocked then do the operation MERGE-SETS(node, node2). You should also try to determine how disjoint-set data structures can be used in the solution of RoadReconstruction from SRM 356. Disjoint-set data structures can also be used in TopographicalImage from SRM 210 and PathFinding from SRM 156.

I hope you find this data structure to be useful. Good luck in the Arena!
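For readers who want to experiment, here is a compact runnable version of the union-by-rank / path-compression pseudocode above, applied to the friends example from the beginning of the article. This is my own Java sketch, not part of the original article; the class name and the 1-based indexing are choices made for the example.

public class DisjointSetForest {
    private final int[] parent; // P[x]
    private final int[] rank;   // rank[x], an upper bound on the height of node x

    public DisjointSetForest(int n) {      // CREATE-SET for elements 1..n
        parent = new int[n + 1];
        rank = new int[n + 1];
        for (int x = 1; x <= n; x++) {
            parent[x] = x;
            rank[x] = 0;
        }
    }

    public int findSet(int x) {            // FIND-SET with path compression
        if (x != parent[x]) {
            parent[x] = findSet(parent[x]);
        }
        return parent[x];
    }

    public void mergeSets(int x, int y) {  // MERGE-SETS with union by rank
        int px = findSet(x);
        int py = findSet(y);
        if (px == py) {
            return;                        // already in the same set
        }
        if (rank[px] > rank[py]) {
            parent[py] = px;
        } else {
            parent[px] = py;
            if (rank[px] == rank[py]) {
                rank[py] = rank[py] + 1;
            }
        }
    }

    public static void main(String[] args) {
        // Friends example: N = 5, friendships 1-2, 5-4 and 5-1
        DisjointSetForest ds = new DisjointSetForest(5);
        ds.mergeSets(1, 2);
        ds.mergeSets(5, 4);
        ds.mergeSets(5, 1);
        System.out.println(ds.findSet(2) == ds.findSet(5)); // true: same group of friends
        System.out.println(ds.findSet(3) == ds.findSet(2)); // false: 3 is in a group of its own
    }
}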
{"url":"https://www.topcoder.com/thrive/articles/Disjoint-set%20Data%20Structures","timestamp":"2024-11-13T12:51:43Z","content_type":"text/html","content_length":"246362","record_id":"<urn:uuid:e062459d-5c3b-4bf4-95ed-b49357c42ce9>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00537.warc.gz"}
Bresenham's line algorithm for power control

When creating microcontroller devices, the problem periodically arises of regulating some analog value, such as the voltage at an IC pin, LED brightness, power, etc. To form an analog signal with a given amplitude at a pin of the chip, the commonly used method is pulse-width modulation (PWM). The basis of PWM is the output on an IC pin of pulses with varying duty cycle. The higher the duty cycle D, the higher the amplitude of the signal after the pulses pass through an integrating RC-chain:

The advantage of PWM is the simplicity of its implementation - most modern microcontrollers have hardware support for PWM. But there are situations in which PWM does not give the desired result - for example, when controlling the brightness of an LED. This is how the PWM diagrams look for different brightness levels:

At each iteration, the PWM gives one positive pulse whose duration is proportional to the intensity. As a result, the LED will flash quickly. And because the pulses have a relatively high frequency, and our vision is inert, we perceive it as a continuous glow. The flicker frequency is equal to the PWM frequency, and in the case of low brightness and not too high frequencies it may be noticeable to the eye. To get rid of it, the PWM frequency must be increased, which is not always possible - for example, when you want to control not a single LED but a large LCD display with dynamic indication.

The second way to reduce flicker is to change the shape of the control voltage. I.e. instead of one large pulse we should send a series of shorter pulses of the same total length, equally distributed over the entire interval. But how can we distribute the pulses evenly over the interval? A search on the web gives the answer - we should use Bresenham's algorithm. The Bresenham line algorithm is an algorithm which determines which points should be plotted in order to form a close approximation to a straight line between two given points. But how is it related to the problem of equally filling the entire interval with pulses?! Let's see.

Suppose we need to distribute M pulses (here M = brightness) over N cells. Let's draw a straight line in the Cartesian coordinate system. On the X-axis we plot the time (0 .. N-1); along the Y-axis the line rises until it reaches the point (N, M). I.e. for brightness 100% (M = N) the slope angle is 45 degrees.

Take a good look at this graph and you can already see our uniformly distributed pulses. Let us differentiate our line (i.e. we calculate the difference Y[i+1] - Y[i]).

It remains to understand the algorithm.
Drawing the line is implemented by pseudocode:

function line(x0, x1, y0, y1)
    int deltax := abs(x1 - x0)
    int deltay := abs(y1 - y0)
    int error := 0
    int deltaErr := deltay
    int y := y0
    for x from x0 to x1
        error := error + deltaErr
        if 2 * error >= deltax
            y := y - 1
            error := error - deltax

Adding the calculation of the difference from the previous value, we obtain a diagram of the pulses:

#include <stdint.h>
#include <stdbool.h>

uint8_t bresenham_data[10];

void calcBresenham(uint8_t size, uint8_t brightness) {
    int error = size - brightness;
    uint8_t x = 0;
    uint8_t y = 0;
    uint8_t prevY = 1;
    while ( x <= size ) {
        const int error2 = error * 2;
        bool value = y != prevY;        /* emit a pulse whenever y has changed */
        bresenham_data[x] = value;
        prevY = y;
        if ( error2 > -brightness ) {
            error -= brightness;
            x++;                        /* assumed: this increment was truncated in the source copy */
        }
        if ( error2 < size ) {
            error += size;
            y++;                        /* assumed: this increment was truncated in the source copy */
        }
    }
}

Then this function can be simplified as:

void calcBresenham(uint8_t size, uint8_t brightness) {
    int16_t error = size - brightness;
    uint8_t x;
    for (x = 0; x < size; x++) {
        if ( error < size/2 ) {
            error += size;
            bresenham_data[x] = 1;
        } else {
            bresenham_data[x] = 0;
        }
        error -= brightness;            /* subtract the brightness on every step */
    }
}

Finally, we can change the algorithm a little more and make it into a library file:

typedef struct bresenham_struct {
    uint8_t size;
    uint8_t value;
    int16_t error;
    uint8_t stepNumber;
} bresenham_struct;

void bresenham_init(struct bresenham_struct *st, uint8_t size) {
    st->size = size;
}

void bresenham_setValue(struct bresenham_struct *st, uint8_t val) {
    st->stepNumber = 0;
    st->value = val;
    st->error = st->size/2;
}

bool bresenham_getNext(struct bresenham_struct *st) {
    bool result;
    st->error -= st->value;
    if ( st->error < 0 ) {
        st->error += st->size;
        result = true;
    } else {
        result = false;
    }
    if ( ++st->stepNumber >= st->size ) {
        st->stepNumber = 0;
        st->error = st->size/2;
    }
    return result;
}

Here the structure bresenham_struct stores the settings and the current state of the Bresenham sequence generator.

The method bresenham_init(st, size) is called at initialization time and sets the number of partitions of the time axis (the number of degrees of brightness).

The method bresenham_setValue(st, value) is called to set the brightness. For example, if size = 100 then the brightness can be from 0 to 99.

The method bresenham_getNext(st) is called periodically from the timer interrupt (or in any other way) and returns true if a positive pulse needs to be generated and false otherwise.

The result of the latter algorithm can be seen below.

Click the links below to download the file with the implementation of the algorithm (for AVR GCC) and a file with the test that verifies the validity of the generated sequences.
{"url":"https://trolsoft.ru/en/articles/bresenham-algo","timestamp":"2024-11-04T17:09:00Z","content_type":"application/xhtml+xml","content_length":"35487","record_id":"<urn:uuid:4a0d715a-498f-42d1-8f81-75c1d8d635e4>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00826.warc.gz"}
Maximum along the Boundary Intuition. Setting up the Statement

The goal of this course is to determine where the maximum value of a function exists. We will first write down the formula for it, then compute the actual values at which this maximum occurs. Before we begin our main topic, we have a quick warm-up. We'll practice finding vectors that are normal to the boundaries of regions. At each point of the boundary of this region, find a vector that is normal to that boundary, like so or like so.

Finding a vector normal to a curve: one way is to think about gradients, because we know that the gradient is perpendicular to the level curves. So if we have a curve, and it is the level curve of something, we take the gradient of that something, and that is perpendicular to the level curve. What's our boundary? Our boundary is this curve, which is a circle with equation x^2 + y^2 = 1. The gradient of this function g(x, y) = x^2 + y^2 is perpendicular to the boundary. The gradient of g is (2x, 2y).

Let us conduct a sanity check, in order to ensure that we have not made any arithmetic errors or overlooked important details. An ideal point to test first would be one of the simplest points of the problem. The first suggestion is to take (x, y) to be (0, 0); however, since (0, 0) does not lie on the curve, this is not a reasonable choice. The point on the boundary that is easiest to use is (1, 0). Calculating the gradient of g and plugging in x = 1 and y = 0, we get the vector (2, 0): the first component of the gradient is 2 and the second component is 0, and this vector is indeed normal to the circle at (1, 0). The boundary is highlighted in purple in the figure.

The function f(x, y) that we want to maximize is 5 - x^2 - (y - 1)^2. Its gradient is (-2x, -2y + 2). We are going to find the maximum of this function in this region. And we have an important clue: the maximum point is roughly over here, near the top of the circle, and at that point the gradient of f is perpendicular to the boundary. To find the maximum point, we must convert that statement into an equation about x and y. We can then solve this equation for x and y.
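The notes stop just short of writing the condition down. For reference, in the standard Lagrange-multiplier notation (this is the usual textbook statement, not a quotation from the lecture), "the gradient of f is perpendicular to the boundary" means it is parallel to the gradient of g, which gives the system

\nabla f(x, y) = \lambda \, \nabla g(x, y), \qquad g(x, y) = x^2 + y^2 = 1

or, componentwise, with the f and g above,

-2x = \lambda \cdot 2x, \qquad -2y + 2 = \lambda \cdot 2y, \qquad x^2 + y^2 = 1.

Solving for completeness: the first equation forces x = 0 (taking lambda = -1 instead contradicts the second equation), so y = 1 or y = -1 from the constraint, and f(0, 1) = 5 is the maximum on the boundary.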
{"url":"https://keepnotes.com/mit/multivariable-calculus/158-maximum-along-the-boundary-intuition-setting-up-the-statement","timestamp":"2024-11-02T18:55:00Z","content_type":"text/html","content_length":"127213","record_id":"<urn:uuid:d0440b69-f400-40b8-91b2-aa9acc841200>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00251.warc.gz"}
Single Stage Power Regulator With High Power Factor using Active Current Wave Shaping Techniques

Lijin K L, Sunil Kumar P R, 2014, Single Stage Power Regulator With High Power Factor using Active Current Wave Shaping Techniques, INTERNATIONAL JOURNAL OF ENGINEERING RESEARCH & TECHNOLOGY (IJERT), Volume 03, Issue 04 (April 2014). DOI: 10.17577/IJERTV3IS040821.

Lijin K L, M.Tech Research Scholar, Electrical and Electronics Engineering, Govt. Engineering College, Idukki, Idukki, India
Mr. Sunil Kumar P R, Asst. Professor, Electrical and Electronics Engineering, Govt. Engineering College, Idukki, Idukki, India

Abstract - The need for low harmonic distortion and a high input power factor leads to active current wave shaping techniques. Such circuits consist of an input boost converter operated in discontinuous current conduction mode and an output dc-dc converter, here a flyback converter derived from the buck-boost converter; this class of power supplies is known as the single stage single switch power factor correction regulator. Expressions for the boost inductance, the critical inductance of the flyback converter, the energy storage capacitance, and the output capacitance are derived. Based on these, a 50W, 230V, 50 Hz AC/50V DC single stage single switch power factor correction regulator was designed and simulated using the OrCAD 16.5 software package in open loop and closed loop control, and the problem of dc bus voltage dependency was found to be almost eliminated.

An AC to DC converter is an integral part of any power supply unit used in electronic equipment. Usually power converters use a diode rectifier followed by a bulk capacitor to convert AC voltage to DC voltage. Since these power converters draw pulsed current from the utility line, the power factor becomes poor due to high harmonic distortion of the current waveform. Therefore, a power factor correction stage has to be inserted into the existing equipment to achieve a good power factor. Several standard and review articles in the literature have addressed power quality related issues in AC to DC converters. New configurations of power factor correction are being developed to mitigate the harmonic effects on the input line current and improve the power factor. Such converters typically contain two stages: the input power factor correction stage and the output DC to DC converter stage. Continuous efforts to make these power converters more compact and cost effective have led to the development of a new class of power supplies known as the Single Stage Single Switch power factor correction regulator, which is the integration of the PFC stage and the DC to DC converter stage. It uses only one switch and controller to shape the input current and regulate the output voltage.
The energy storage device in between is necessary to absorb and supply the difference between the pulsating instantaneous input power and constant output power A major problem associated with Single Stage Single Switch power factor correction regulator is strong dependency of DC BUS voltage stress across the capacitor with the output load Power unbalance between PFC stage and DC-DC stage is the inherent reason for causing high DC bus voltage stress. Frequency control is other solution proposed to overcome high DC voltage stress But this call for complex control circuit. The concept of series charging, parallel discharging capacitor scheme is another solution. But this call for more component count in the power circuit In this paper a design solution is proposed to avoid the problem of energy unbalance between energy stored during ON period of switch and energy dissipated in the load by optimally sizing the boost inductor. Maximum energy stored in the inductor shall be limited to such a value that this energy matches with maximum output power required. The instant at which maximum power delivered shall be matched with the instant when the input ac voltage is at the peak. Also consider the fact that maximum power is delivered at a duty ratio which is slightly less than the limiting duty ratio (0.5) for DCM operation[1]. Equal Area Criterion is applied between theoretically calculated fundamental component of input ac current and the peak inductor current when TON is maximum. Using this approach the design was carried out, and simulated testing as well as experimental observation showed only a very small rise in DC bus voltage at light load condition, even under open loop [2]. After introducing closed loop control with output voltage as controlled variable and duty ratio as manipulated variable the DC bus voltage dependency was found almost insignificant. Classical line commutated rectifiers suffer from the following disadvantages: • They produce a lagging displacement factor w.r.t the voltage of the utility. • They generate a considerable amount of input current harmonics. These aspects negatively influence both power factor and power quality. The massive use of single-phase power converters has increased the problems of power quality in electrical systems. Power factor is the relationship between working (active) power and total power consumed (apparent power). Essentially, power factor is a measurement of how effectively electrical power is being used.[3] The higher the power factor, the more effectively electrical power is being used and vice versa. A distribution systems operating power is composed of two parts: 1) Active (working) power and 2) Reactive (non-working) magnetising power. The ACTIVE power performs the useful work. The REACTIVE power does not as its only function is to develop magnetic fields required by inductive devices. Generally, power factor decreases (Ø increases) with increased motor loads.[4] Therefore, when more inductive reactive power is needed, more apparent power is also needed. An active approach is the most effective way to correct power factor of electronic supplies. Here, we place a boost converter between the bridge rectifier and the main input capacitors. The converter tries to maintain a constant DC output bus voltage and draws a current that is in phase with and at the same frequency as the line voltage [5]. The paper is organized in the following way. Section II provides Open loop control of S4 PFC regulator. 
Section III presents closed loop control of S4 PFC regulator. Section IV represents design of S4 PFC regulator and Section V gives Design of a 50W, 230V, 50 Hz, 50 VDC S4 PFC regulators. Simulations are shown in Section VI. 1. OPEN LOOP CONTROL OF S4 PFC REGULATOR Basic configuration of single stage single switch power factor correction regulator with input boost converter and output fly-back converter as shown in the fig .1. When switch s is ON current in inductor Li increases linearly depends on input line voltage, when switch s is OFF the stored energy in inductor to bulk capacitor and to the load. If there is any Fig .1: open loop control of S4 PFC regulator Fig .2:closed loop control of S4 PFC regulator Mode II In this mode, when switch s is turned OFF so current in boost inductor decreases linearly with proportional to the voltage difference between input instantaneous voltage and sum of capacitor voltage and output voltage in transformer primary change in the load will leads to the unbalanced power between input and output,due to this unbalanced power increases output voltage,this can be avoided by closed loop control in L di V 1 dt m sin t Vdc nV2 which on period of switch is reduces automatically when output voltage increases[6] i Vm L1 sin tdt Vdc nV2 dt 2. CLOSED LOOP CONTROL OF S4 PFC REGULATOR Closed loop control configuration of single stage single switch power factor correction regulator with input boost converter and output flyback converter as shown in the fig .2 To explain the working of ircuit,the operation is divided into three modes. Mode I In this mode of operation, switch s is ON so current in boost converter inductor increases linearly, which is Mode III When current in boost inductor reaches zero mode III starts and the Diode D3 conducts, which leads to the transfer the energy from inductor to the output capacitor and then to the load 3. DESIGN OF S4 PFC REGULATOR For design purpose consider a reference current Im sin t and magnitude of this current is chosen such a way that depends proportionally to the instantaneous values of input voltage Pout Vrms Irms A. Design of boost inductor V sin t L di m 1 dt i Vm sin tdt From the circuit diagram current in boost inductor during ON time is ir I1 m cos cos t Where < t < ton (6) Current in boost inductor during OFF time is V V nV Value for the 500 W is obtained as ir I2 m cos ton cos ton t dc 2 t R= = 88.09 = 90 and Lc = 1.2 mH Considering that I1 =0 and maximum current occurs when = 900 C. Calculation of energy storage capacitor Substituting this in equation (1) we get boost inductance as I peak L Vm DT (8) 1 I 2 peak .45 50106 230 1.57 103 2 4.66Amp B. Design of fly back converter For Volt Second transformer Balance Energy stored in the boost inductor is transferred to the energy storage capacitor during second mode of operation V V D nV V 1 D C V 2 L I 2 1 sat 2 f 1 1 nI p 1 D V 2 We get C1 =22.6µf D. Design of output filter Vbase = 230V = 1 pu Pbase = 50W = 1 pu Assuming zero switching losses Pin = Pout = 1 pu From above equation we obtain the equation of critical inductance of flyback converter as This yields 0.2174 A V V R1 D2 n2 L 2 f Z Vbase 2V2 fs 1. DESIGN OF A 50W, 230V,50 HZ,50 VDC S4 PFC 1. 
Calculation of boost inductor Switching instant is considered as =900 for maximum rising and falling Switching frequency ,fs = 20 KHz Duty ratio D = 0.3(for DCM mode operation D should be Bulk output capacitor may be determined by settling the output ripple constraint by allowing a 5% output ripple and considering the ripple frequency to be twice the line frequency ,we get Vripple .05Vout pu .05 0.2174 0.0108 less than 0.5) 0.01087 230 2.5V Pout Vrms I rms C I2 I 50 0.2174 A I2 is twice the line frequency current equating instantaneous I m 0.3074 A input power to out put dc power for a duty cycle for duty cycle is obtained as I 230 0.2174 .999 A Ipeak = 1.36 A 2 2 50 2.5 =636 µf I peak .3 50 106 230 2 E. Design of magnetic components Maximum energy stored in boost inductor 2. Calculation3.5o8fmHcritical inductance of flyback converter For D =.3 = 1 2 = 0.5×3.58×10-3×1.362 = 3.36×10-3 Joules KwAw a Secondary voltage = × K A NI/J w w Substituting the values in above equation we get 2= 66.37 V Kc = Im/Irms = 2 Let Bmax = 0.2, Ac = 0.4, J = 3×106 A/m2 Aw Ac = 2E Kw Kc JB max output voltage as shown in the fig. 4, but there is no change in the input current Area product(Ap) = KwAw From equation (15) we get Ap=1.98×104 mm4 So core E 42/21/9 is a proper choice Aw = 2.56×100 mm2 Ac = 1.07×100 mm2 So Number of turns, N =LIm/AcBm =3.58×103 ×1.36 107 ×106 ×0.2 =228 turns F. Design of transformer for the flyback converter The transformer used in flyback converter also acts as inductors so it is different from other transformer configurations B swing Bn =0.5 and = 0.84 In = 0.75 and n =N2/N1=1/3=0.333 V0 = nV1(D/1-D) (16) When D= 0.3,V0 = 66.7V Dmax = 0.37 for 100V Dmin = D max/(Dmax+(1-Dmax))×(Vcc max/Vcc min) = 0.328 Fig 3: input current waveforms I -/I + = 1- I =1-0.75 =0.25 1 1 n I ++I1- = 2I1 av /Dmin =2nI2 av/Dmin I + = (2nI /D )/ 1.25 1 0 min = 3.24 A Fig 4: variation in output voltage with load I1 = 2.43 A I2 = I1/n = 7.31 A Bm = B/Bm ,Bm = 0.2 So B = 0.1 T Now power in obtained by Po2 = (V0+ VD)I0((1-Dmin) /Dmin) =102.5 watts Now area product , 1 4D 41 D n 3 3 (Ap) = K w JBf s Substituting values in equation (17),we get Ap = 11.4×104 mm4 Choose a suitable area which has an Ap greater than calculated value E 65/32/13 is a proper choice The equation for calculating number of turns in primary is Fig 5: input current waveform We get N1 = 297 turns and N2 = 99 turns 2. SIMULATION RESULTS Simulation of S4 PFC regulator with circuit parameter having designed values was done in ORCAD software package. Simulation results were satisfies the design intends 1. open loop control when duty ratio is varied output voltage is varied linearly with change in duty ratio and input voltage is sinusoidal which achieve high input power factor fig. 3 shows the sinusoidal input current in open loop control. But any change in the output voltage will leads to the variation in Fig .6 harmonics spectrum of input current, Fig 7. Output voltage From Fig.4, it is cleared that during light load output voltage will increases. From Fig. 6 the fundamental frequency is dominant and other higher frequencies can be neglected. Fig 7 shows that, the output voltage which is a constant value so it gets the name regulator. 2. Closed loop control Closed loop simulation is done by varying the reference voltage, output vary linearly with the reference voltage and input current is almost sinusoidal and in phase with the input voltage which leads to high power factor. Fig 5 shows the input current which is almost sinusoidal 3. 
EXPERIMENTAL RESULTS

A 50W, 230V, 50 Hz AC/50V DC single stage single switch power factor correction regulator is built using an IRFPF 50 as the switch, with the following circuit parameters:

Parameter: Value
L1: 3.58 mH
C1: 22.6 µF
n: 0.333
L2: 636 µF
Fs: 20 kHz
D: 0.3
Load: 90 Ω

A D1N 1190 is used in the rectifier section, a TL 082 op-amp is used in the control section as differential amplifier and comparator, and an ICL 8038 as triangular wave generator.

Fig. 8: hardware configuration of S4 PFC regulator
Fig. 9: input current waveform
Fig. 10: input voltage waveform

From figures 9 and 10, the input current and voltage are sinusoidal and in phase, so the circuit achieves high power factor and satisfies all the conditions.

CONCLUSION

An S4 PFC regulator design, applying EAC to determine the value of the boost inductance and using closed loop control, is presented. The dc bus voltage stress at light load is found to be completely eliminated under closed loop operation. Output voltage regulation using duty ratio variation and a fixed switching period is the simplest method of control. For normal performance of the converter the duty ratio needs to be limited to 0.5. Experimental results demonstrate that it is possible for the proposed converter to have fast response and low line current harmonic content. A DCM boost converter is chosen here, which draws energy at line frequency and has inherently low line current harmonics. A flyback converter is used to eliminate the disadvantages of the boost rectifier. Thus the voltage stress in the capacitor is reduced and the output voltage is tightly regulated. EAC is used for the design of the boost inductance. Simulation results demonstrate that it is possible for the proposed converter to have fast response and low line current harmonic content. The proposed converter is expected to maintain a good source power factor and tight output voltage regulation without an excessively high DC bus voltage. Moreover, the efficiency of the overall power conversion is expected to be high.

REFERENCES

1. Bindu Prakash and S. Prakash, "Design of single stage single switch switch mode rectifier based on equal area criterion," in 3rd IEEE Conference on Industrial Electronics and Applications (ICIEA), 2008, pp. 2068-2073.
2. M. Madigan, R. Erickson and E. Ismail, "Integrated High Quality Rectifier Regulators," in IEEE Power Electronics Specialists Conf., 1992, pp. 1043-1051.
3. Jinrong Qian and Fred C. Lee, "Single-Stage Single-Switch PFC AC/DC converters with DC-Bus voltage feedback for universal line applications," IEEE Transactions on Power Electronics, vol. 13, no. 6, Nov. 1998, pp. 1079-1088.
4. Esam Hamid Ismail and Robert Erickson, "Single Switch 3 phase PWM low Harmonic Rectifiers," IEEE Transactions on Power Electronics, vol. 11, no. 2, March 1996, pp. 338-345.
5. Manjusha S. Dawande and Gopal K. Dubey, "Programmable Input PFC method for SMR," IEEE Transactions on Power Electronics, vol. 11, no. 4, July 1996, pp. 585-591.
6. A.R. Prasad, P.D. Ziogas and Stefanos Manias, "An active PFC Technique for 3 phase diode rectifiers," IEEE Transactions on Power Electronics, vol. 6, no. 1, January 1991, pp. 83-91.
{"url":"https://www.ijert.org/single-stage-power-regulator-with-high-power-factor-using-active-current-wave-shaping-techniques","timestamp":"2024-11-06T23:51:20Z","content_type":"text/html","content_length":"82187","record_id":"<urn:uuid:f501d47f-d0fd-4135-b4ac-15582be34cc2>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00356.warc.gz"}
Draw A Quadrilateral That Does Not Belong Draw A Quadrilateral That Does Not Belong - Web 3rd grade math 12.6, draw quadrilaterals. Draw A Quadrilateral That Does Not Belong - Rectangles have special properties that can be very useful in helping you solve a problem. There are two main types: Web [show solution.] special types of quadrilaterals some quadrilaterals go by special names like rectangle, rhombus, and square. There's less history, less convention, more space for student thinking rather than finding out what things are called. But kristin's latest post about a which one doesn't belong image (again. Web recognize rhombuses, rectangles, and squares as examples of quadrilaterals, and draw examples of quadrilaterals that do not belong to any of these subcategories. Web teaching 2d shape, i often go for hexagons rather than quadrilaterals. Parallelograms have other unique properties. There are four sides because it's a quadrilateral. In video 12.5, we learn to classify quadrilaterals by the number of pairs of opposite sides that are. Web recognize rhombuses, rectangles, and squares as examples of quadrilaterals, and draw examples of quadrilaterals that do not belong to any of these subcategories. Web since square is a rectangle and both square and rectangle are parallelogram, thus, if we draw a quadrilateral which is not a parallelogram then it won't be a square or rectangle How To Draw A Quadrilateral With A Protractor In construction of I would draw a trapezoid because it is with one parallel side and two pairs of opposing angles. In video 12.5, we learn to classify quadrilaterals by the number of pairs of opposite sides that are. I can use attributes to identify shapes. Web recognize rhombuses, rectangles, and squares as examples of quadrilaterals, and draw. 10 Best Images of What Shape Does Not Belong Worksheet What Does Not I can use attributes to identify shapes. Web since square is a rectangle and both square and rectangle are parallelogram, thus, if we draw a quadrilateral which is not a parallelogram then it won't be a square or rectangle either. Partition shapes into parts with equal areas. Draw a quadrilateral that does not belong to. 12.6 Draw Quadrilaterals 3rd Grade Dual Immersion Web recognize rhombuses, rectangles, and squares as examples of quadrilaterals, and draw examples of quadrilaterals that do not belong to any of these subcategories. I can use attributes to identify shapes. Parallelogram, trapezoid, rhombus, rectangle, and square. There are certain properties of trapezoids that identify them as trapezoids: 4.g.1 draw points, lines, line segments, rays,. Question 1 Draw a rough sketch of a quadrilateral PQRS. Draw Web recognize rhombuses, rectangles, and squares as examples of quadrilaterals, and draw examples of quadrilaterals that do not belong to any of these subcategories. Express the area of each part as a unit fraction of the whole. (show solution) question 5 (request help) victor drew a quadrilateral with no right angles and 4 sides of. Sketch a quadrilateral with no right angles and sides all the same Name the quadrilateral she could have drawn. I would draw a trapezoid because it is with one parallel side and two pairs of opposing angles. How to classify a regular polygon in order to classify a regular polygon: Properties of a parallelogram ★ two pairs of opposite congruent sides side ab ab and side dc. Palmer's Ponderings Which One Doesn't Belong and Quadrilaterals Which Web teaching 2d shape, i often go for hexagons rather than quadrilaterals. 
Partition shapes into parts with equal areas. There are four right angles. A quadrilateral whose opposite sides are not parallel is not a parallelogram. But kristin's latest post about a which one doesn't belong image (again. Parallelograms are quadrilaterals with opposite sides being. Following Learning Quadrilateral Sets Which One Doesn't Belong? When investigating special quadrilaterals and their properties, students find many ways to distinguish quadrilaterals. Study some examples here are some examples of rectangles: Web recognize rhombuses, rectangles, and squares as examples of quadrilaterals, and draw examples of quadrilaterals that do not belong to any of these subcategories. Partition shapes into parts with equal areas. Draw a quadrilateral that does not belong. Then explain why. Explain why this shape does not belong to the group. Partition shapes into parts with equal areas. Parallelogram, trapezoid, rhombus, rectangle, and square. Have a look at christopher danielson's video about using hexagons for proof. In video 12.5, we learn to classify quadrilaterals by the number of pairs of opposite sides that are. Then, try. quadrilateral attributes worksheet I can use attributes to identify shapes. This standard says that children need to understand that some shapes are subsets of other shapes, and it gives some specific examples of subsets children need to know. There are also various subcategories of convex quadrilaterals, such as trapezoids, parallelograms, rectangles, rhombi, and squares. Web recognize rhombuses, rectangles,. 12.6 Draw Quadrilaterals 3rd Grade Dual Immersion Question 4 (request help) layla drew a quadrilateral with 4 right angles and 2 pairs of opposite sides that are parallel. Web recognize rhombuses, rectangles, and squares as examples of quadrilaterals, and draw examples of quadrilaterals that do not belong to any of these subcategories. All quadrilaterals have 4 sides and 4 angles. Draw a. Draw A Quadrilateral That Does Not Belong Parallelogram, trapezoid, rhombus, rectangle, and square. Draw a quadrilateral that does not belong to any of these group: 2d shapes (various polygons) example/guidance shapes and figures. Web recognize rhombuses, rectangles, and squares as examples of quadrilaterals, and draw examples of quadrilaterals that do not belong to any of these subcategories. Some types are also included in the definition of other types! Draw A Quadrilateral That Does Not Belong To Any Of These Group: Draw a quadrilateral that does not belong to this group. Then, try some practice problems. But kristin's latest post about a which one doesn't belong image (again. Web recognize rhombuses, rectangles, and squares as examples of quadrilaterals, and draw examples of quadrilaterals that do not belong to any of these subcategories. Thus, The Drawn Quadrilateral Is Given Below. How to classify a regular polygon in order to classify a regular polygon: Name the quadrilateral she could have drawn. There are certain properties of trapezoids that identify them as trapezoids: The term quadrilateral is a really fancy sounding name for a certain kind of polygon. In Video 12.5, We Learn To Classify Quadrilaterals By The Number Of Pairs Of Opposite Sides That Are. Web recognize rhombuses, rectangles, and squares as examples of quadrilaterals, and draw examples of quadrilaterals that do not belong to any of these subcategories. Did you know that there are special types of quadrilaterals? 
Recognize rhombuses, rectangles, and squares as examples of quadrilaterals, and draw examples of quadrilaterals that do not belong to any of these subcategories. 4.G.1: draw points, lines, line segments, rays, angles (right, acute, obtuse), and perpendicular and parallel lines.

Express The Area Of Each Part As A Unit Fraction Of The Whole.

3rd grade math 12.6, draw quadrilaterals. This group of quadrilaterals has no special name of its own, but includes kites, rhombuses, and squares along with other quadrilaterals that have no particular name. Properties of a parallelogram: ★ two pairs of opposite congruent sides (side AB and side DC are congruent, and side AD and side BC are congruent) ★ two pairs of opposite congruent angles.
{"url":"https://www.masterlearnings.com/post/draw-a-quadrilateral-that-does-not-belong.html","timestamp":"2024-11-01T22:29:59Z","content_type":"application/xhtml+xml","content_length":"26676","record_id":"<urn:uuid:61ba8159-1188-46e6-83a9-34d937495304>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00352.warc.gz"}
Joint models for continuous outcomes

Objectives: learn how to implement a joint model for continuous PKPD data.

Projects: warfarinPK_project, warfarinPKPDlibrary_project, warfarinPKPDelibrary_project, warfarin_PKPDimmediate_project, warfarin_PKPDeffect_project, warfarin_PKPDturnover_project, warfarin_PKPDseq1_project, warfarin_PKPDseq2_project, warfarinPD_project

A "joint model" describes two or more types of observation that typically depend on each other. A PKPD model is a "joint model" because the PD depends on the PK. Here we demonstrate how several observations can be modeled simultaneously. We also discuss the special case of sequential PK and PD modelling, using either the population PK parameters or the individual PK parameters as an input for the PD model.

Fitting first a PK model to the PK data

• warfarinPK_project (data = 'warfarin_data.txt', model = 'lib:oral1_1cpt_TlagkaVCl.txt')

The column DV of the data file contains both the PK and the PD measurements: the keyword Y is used by Monolix for this column. The column DVID is a flag defining the type of observation: DVID=1 for PK data and DVID=2 for PD data; the keyword YTYPE is then used for this column. We will use the model oral1_1cpt_TlagkaVCl from the Monolix PK library:

input = {Tlag, ka, V, Cl}
Cc = pkmodel(Tlag, ka, V, Cl)
output = Cc

Only the predicted concentration Cc is defined as an output of this model. Then, this prediction will be automatically associated to the outcome of type 1 (DVID=1) while the other observations (DVID=2) will be ignored. Remark: any other ordered values could be used for YTYPE: the smallest one will always be associated to the first prediction defined in the model.

Simultaneous PKPD modeling using the PK and PD libraries

See project warfarinPKPDlibrary_project from demo folder 1.creating_and_using_models/libraries_of_models to see how to link the PK model to an immediate response model from the Monolix PD library for modelling simultaneously the PK and the PD warfarin data. See project warfarinPKPDelibrary_project from demo folder 1.creating_and_using_models/libraries_of_models to see how to add an effect compartment by using the PKe and PDe Monolix libraries.

Simultaneous PKPD modeling using a user defined model

• warfarin_PKPDimmediate_project (data = 'warfarin_data.txt', model = 'immediateResponse_model.txt')

It is also possible for the user to write his own PKPD model. The same PK model used previously and an immediate response model are defined in the model file immediateResponse_model.txt:

input = {Tlag, ka, V, Cl, Imax, IC50, S0}
Cc = pkmodel(Tlag, ka, V, Cl)
E = S0 * (1 - Imax*Cc/(Cc+IC50) )
output = {Cc, E}

Two predictions are now defined in the model: Cc for the PK (DVID=1) and E for the PD (DVID=2).

• warfarin_PKPDeffect_project (data = 'warfarin_data.txt', model = 'effectCompartment_model.txt')

An effect compartment is defined in the model file effectCompartment_model.txt:

input = {Tlag, ka, V, Cl, ke0, Imax, IC50, S0}
{Cc, Ce} = pkmodel(Tlag, ka, V, Cl, ke0)
E = S0 * (1 - Imax*Ce/(Ce+IC50) )
output = {Cc, E}

Ce is the concentration in the effect compartment.

• warfarin_PKPDturnover_project (data = 'warfarin_data.txt', model = 'turnover1_model.txt')

An indirect response (turnover) model is defined in the model file turnover1_model.txt:

input = {Tlag, ka, V, Cl, Imax, IC50, Rin, kout}
Cc = pkmodel(Tlag, ka, V, Cl)
E_0 = Rin/kout
ddt_E = Rin*(1-Imax*Cc/(Cc+IC50)) - kout*E
output = {Cc, E}

Several implementations of the same PK model are possible.
For instance, the predicted concentration Cc is defined using equations in turnover2_model.txt instead of using the pkmodel function. The residual error models for the PK and PD measurements are both defined in turnover3_model.txt. Either of these two models can be used instead of turnover1_model.txt by right clicking on the Model file button and selecting Change model.

Sequential PKPD modelling

In the sequential approach, a PK model is developed and its parameters estimated in the first step. For a given PD model, different strategies are then possible for the second step, i.e., for estimating the population PD parameters:

Using estimated population PK parameters

• warfarin_PKPDseq1_project (data = 'warfarin_data.txt', model = 'turnover1_model.txt')

Population PK parameters are set to their estimated values, but individual PK parameters are not assumed to be known and are sampled from their conditional distributions at each SAEM iteration. In Monolix, this simply means changing the status of the population PK parameter values so that they are no longer used as initial estimates for SAEM but considered fixed. The joint PKPD model defined in turnover1_model.txt is again used with this project.

Using estimated individual PK parameters

• warfarin_PKPDseq2_project (data = 'warfarinSeq_data.txt', model = 'turnoverSeq_model.txt')

Individual PK parameters are set to their estimated values and used as constants in the PKPD model for fitting the PD data. In this example, individual PK parameters $(\psi_i)$ were estimated as the modes of the conditional distributions $(p(\psi_i | y_i, \hat{\theta}))$. An additional column MDV is necessary in the datafile in order to ignore the PK data. The estimated individual PK parameters are defined as regression variables, using the reserved keyword X. We use the same turnover model for the PD data; here, the PK parameters are defined as regression variables (i.e. regressors):

input = {Imax, IC50, Rin, kout, Tlag, ka, V, Cl}
Tlag = {use = regressor}
ka = {use = regressor}
V = {use = regressor}
Cl = {use = regressor}
Cc = pkmodel(Tlag,ka,V,Cl)
E_0 = Rin/kout
ddt_E= Rin*(1-Imax*Cc/(Cc+IC50)) - kout*E
output = E

Fitting a PKPD model to the PD data only

• warfarinPD_project (data = 'warfarinPD_data.txt', model = 'turnoverPD_model.txt')

In this example, only PD data are available. Nevertheless, a PKPD model – where only the effect is defined as a prediction – can be used for fitting this data:

input = {Tlag, ka, V, Cl, Imax, IC50, Rin, kout}
Cc = pkmodel(Tlag, ka, V, Cl)
E_0 = Rin/kout
ddt_E = Rin*(1-Imax*Cc/(Cc+IC50)) - kout*E
output = E

Case studies

• 8.case_studies/PKVK_project (data = 'PKVK_data.txt', model = 'PKVK_model.txt')
• 8.case_studies/hiv_project (data = 'hiv_data.txt', model = 'hivLatent_model.txt')
{"url":"https://monolix2016.lixoft.com/data-and-models/jointmodel1/","timestamp":"2024-11-08T17:13:49Z","content_type":"text/html","content_length":"75686","record_id":"<urn:uuid:0f9ff654-1c2f-45e5-ac6a-37e52bc7d0db>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00152.warc.gz"}
D.C. and A.C. Bridges

Bridge circuits are used very commonly as a variable conversion element in measurement systems and produce an output in the form of a voltage level that changes as the measured physical quantity changes. They provide an accurate method of measuring resistance, inductance and capacitance values, and enable the detection of very small changes in these quantities about a nominal value. They are of immense importance in measurement system technology because so many transducers measuring physical quantities have an output that is expressed as a change in resistance, inductance or capacitance. The displacement-measuring strain gauge, which has a varying resistance output, is but one example of this class of transducers. Normally, excitation of the bridge is by a d.c. voltage for resistance measurement and by an a.c. voltage for inductance or capacitance measurement. Both null and deflection types of bridge exist, and, in a like manner to instruments in general, null types are mainly employed for calibration purposes and deflection types are used within closed-loop automatic control schemes.

Null-type, d.c. bridge (Wheatstone bridge)

A null-type bridge with d.c. excitation, commonly known as a Wheatstone bridge, has the form shown in Figure 7.1. The four arms of the bridge consist of the unknown resistance Ru, two equal-value resistors R2 and R3 and a variable resistor Rv (usually a decade resistance box). A d.c. voltage Vi is applied across the points AC and the resistance Rv is varied until the voltage measured across points BD is zero. This null point is usually measured with a high-sensitivity galvanometer. To analyse the Wheatstone bridge, define the current flowing in each arm to be I1 ... I4 as shown in Figure 7.1. Normally, if a high-impedance voltage-measuring instrument is used, the current Im drawn by the measuring instrument will be very small and can be approximated to zero. If this assumption is made, then, for Im = 0: I1 = I3 and I2 = I4.

Deflection-type d.c. bridge

A deflection-type bridge with d.c. excitation is shown in Figure 7.2. This differs from the Wheatstone bridge mainly in that the variable resistance Rv is replaced by a fixed resistance R1 of the same value as the nominal value of the unknown resistance Ru. As the resistance Ru changes, so the output voltage V0 varies, and this relationship between V0 and Ru must be calculated. This relationship is simplified if we again assume that a high-impedance voltage-measuring instrument is used and the current drawn by it, Im, can be approximated to zero. (The case when this assumption does not hold is covered later in this section.) The analysis is then exactly the same as for the preceding example of the Wheatstone bridge, except that Rv is replaced by R1. Thus, from equation (7.1), we have:

V0 = Vi * [ Ru/(Ru + R3) - R1/(R1 + R2) ]

When Ru is at its nominal value, i.e. for Ru = R1, it is clear that V0 = 0 (since R2 = R3). For other values of Ru, V0 has negative and positive values that vary in a non-linear way with Ru.
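To make these two d.c. bridge relationships concrete, here is a minimal Python sketch (not part of the original notes; the component values are illustrative only). It evaluates the deflection-bridge output formula of equation (7.1) quoted above, and the balance condition Ru = Rv*R3/R2 that follows from I1 = I3 and I2 = I4 in the null-type bridge.

```python
def wheatstone_unknown(rv, r2, r3):
    """Unknown resistance at the null point: Ru/(Ru+R3) = Rv/(Rv+R2)  =>  Ru = Rv*R3/R2."""
    return rv * r3 / r2

def deflection_output(vi, ru, r1, r2, r3):
    """Deflection-bridge output, equation (7.1): V0 = Vi*(Ru/(Ru+R3) - R1/(R1+R2))."""
    return vi * (ru / (ru + r3) - r1 / (r1 + r2))

# Illustrative values: 10 V excitation, 120-ohm nominal arms (e.g. a strain-gauge bridge)
vi, r1, r2, r3 = 10.0, 120.0, 120.0, 120.0
for ru in (118.0, 120.0, 122.0):
    v0 = deflection_output(vi, ru, r1, r2, r3)
    print(f"Ru = {ru:6.1f} ohm  ->  V0 = {v0 * 1e3:+7.2f} mV")

# Null-type bridge: if balance is found at Rv = 122 ohm, the unknown is
print("Ru at balance:", wheatstone_unknown(122.0, r2, r3), "ohm")
```

The output confirms the non-linear, sign-changing behaviour of V0 around the nominal value described in the text.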
A.C. bridges

Bridges with a.c. excitation are used to measure unknown impedances. As for d.c. bridges, both null and deflection types exist, with null types being generally reserved for calibration duties.

Null-type impedance bridge

A typical null-type impedance bridge is shown in Figure 7.7. The null point can be conveniently detected by monitoring the output with a pair of headphones connected via an operational amplifier across the points BD. This is a much cheaper method of null detection than the application of an expensive galvanometer that is required for a d.c. Wheatstone bridge. If Zu is capacitive, i.e. Zu = 1/jωCu, then Zv must consist of a variable capacitance box, which is readily available. If Zu is inductive, then Zu = Ru + jωLu. Notice that the expression for Zu as an inductive impedance has a resistive term in it because it is impossible to realize a pure inductor. An inductor coil always has a resistive component, though this is made as small as possible by designing the coil to have a high Q factor (the Q factor is the ratio inductance/resistance). Therefore, Zv must consist of a variable-resistance box and a variable-inductance box. However, the latter are not readily available because it is difficult, and hence expensive, to manufacture a set of fixed-value inductors to make up a variable-inductance box. For this reason, an alternative kind of null-type bridge circuit, known as the Maxwell bridge, is commonly used to measure unknown inductances.

Maxwell bridge

A Maxwell bridge (in long form, a Maxwell-Wien bridge) is a type of Wheatstone bridge used to measure an unknown inductance (usually of low Q value) in terms of calibrated resistance and capacitance. It is a real product bridge: the unknown inductance is obtained in terms of a calibrated resistance and capacitance. Calibration-grade inductors are more difficult to manufacture than capacitors of similar precision, and so the use of a simple "symmetrical" inductance bridge is not always practical.

Circuit diagram: with reference to the picture, in a typical application R1 and R4 are known fixed entities, and R2 and C2 are known variable entities. R2 and C2 are adjusted until the bridge is balanced; R3 and L3 can then be calculated based on the values of the other components. As shown in the figure, one arm of the Maxwell bridge consists of a capacitor in parallel with a resistor (C1 and R2) and another arm consists of an inductor in series with a resistor (L1 and R4). The other two arms just consist of a resistor each (R1 and R3). The values of R1 and R3 are known, and R2 and C1 are both adjustable. The unknown values are those of L1 and R4. Like other bridge circuits, the measuring ability of a Maxwell bridge depends on 'balancing' the circuit. Balancing the circuit in Figure 1 means adjusting C1 and R2 until the current through the bridge between points A and B becomes zero. This happens when the voltages at points A and B are equal. At balance, the products of opposite arms are equal: R1R3 = Z1Z2, where Z1 = R2/(1 + jωC1R2) is the impedance of the parallel C1-R2 arm and Z2 = R4 + jωL1 is the impedance of the unknown arm. Equating real and imaginary parts gives R4 = R1R3/R2 and L1 = R1R3C1.

To avoid the difficulties associated with determining the precise value of a variable capacitance, sometimes a fixed-value capacitor will be installed and more than one resistor will be made variable. The additional complexity of using a Maxwell bridge over simpler bridge types is warranted in circumstances where either the mutual inductance between the load and the known bridge entities, or stray electromagnetic interference, distorts the measurement results. The capacitive reactance in the bridge will exactly oppose the inductive reactance of the load when the bridge is balanced, allowing the load's resistance and reactance to be reliably determined.

Advantages: the frequency does not appear in the balance equations, and a wide range of inductances can be measured. Limitations: the measurement range is limited, and it requires a variable standard capacitor.
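The Maxwell balance relations quoted above reduce to two simple formulas, which the following Python sketch evaluates. The component values are invented for illustration; the labelling (pure resistive arms R1 and R3, parallel C1-R2 arm, unknown L1 in series with R4) follows the description above.

```python
def maxwell_balance(r1, r3, r2, c1):
    """Maxwell-Wien balance: equating real and imaginary parts of R1*R3 = Z1*Z2 gives
    R4 = R1*R3/R2 (resistive part of the unknown arm) and L1 = R1*R3*C1 (unknown inductance)."""
    r4 = r1 * r3 / r2
    l1 = r1 * r3 * c1
    return l1, r4

# Illustrative balance point: R1 = R3 = 1 kohm, R2 adjusted to 2 kohm, C1 adjusted to 0.1 uF
l1, r4 = maxwell_balance(1e3, 1e3, 2e3, 0.1e-6)
print(f"L1 = {l1 * 1e3:.1f} mH, R4 = {r4:.1f} ohm")   # -> L1 = 100.0 mH, R4 = 500.0 ohm
```

Note that the excitation frequency does not enter the calculation at all, which is the frequency-independence advantage listed above.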
A Schering Bridge is a bridge circuit used for measuring an unknown electrical capacitance and its dissipation factor. The dissipation factor of a capacitor is the ratio of its resistance to its capacitive reactance. The Schering Bridge is basically a four-arm alternating-current (AC) bridge circuit whose measurement depends on balancing the loads on its arms. Figure 1 below shows a diagram of the Schering Bridge.

In the Schering Bridge above, the resistance values of resistors R1 and R2 are known, while the resistance value of resistor R3 is unknown. The capacitance values of C1 and C2 are also known, while the capacitance of C3 is the value being measured. To measure R3 and C3, the values of C2 and R2 are fixed, while the values of R1 and C1 are adjusted until the current through the ammeter between points A and B becomes zero. This happens when the voltages at points A and B are equal, in which case the bridge is said to be 'balanced'. When the bridge is balanced, Z1/ZC2 = R2/Z3, where Z1 is the impedance of R1 in parallel with C1, ZC2 is the impedance of the standard capacitor C2, and Z3 is the impedance of R3 in series with C3. In an AC circuit that has a capacitor, the capacitor contributes a capacitive reactance to the impedance. When the bridge is balanced, the negative and positive reactive components are equal and cancel out, so C1/C2 = R3/R2, or R3 = R2C1/C2. Similarly, when the bridge is balanced, the purely resistive components are equal, so C2/C3 = R2/R1, or C3 = R1C2/R2. Note that the balancing of a Schering Bridge is independent of frequency.

Advantages: the balance equations are independent of frequency, and the bridge is used for measuring the insulating properties of electrical cables and equipment.

A Hay Bridge is an AC bridge circuit used for measuring an unknown inductance by balancing the loads of its four arms, one of which contains the unknown inductance. One of the arms of a Hay Bridge has a capacitor of known characteristics, which is the principal component used for determining the unknown inductance value. Figure 1 below shows a diagram of the Hay Bridge.

As shown in Figure 1, one arm of the Hay bridge consists of a capacitor in series with a resistor (C1 and R2) and another arm consists of an inductor in series with a resistor (L1 and R4). The other two arms simply contain a resistor each (R1 and R3). The values of R1 and R3 are known, and R2 and C1 are both adjustable. The unknown values are those of L1 and R4. Like other bridge circuits, the measuring ability of a Hay Bridge depends on 'balancing' the circuit. Balancing the circuit in Figure 1 means adjusting R2 and C1 until the current through the ammeter between points A and B becomes zero. This happens when the voltages at points A and B are equal. When the Hay Bridge is balanced, it follows that Z1/R1 = R3/Z2, wherein Z1 is the impedance of the arm containing C1 and R2 while Z2 is the impedance of the arm containing L1 and R4. Thus, Z1 = R2 + 1/(2πfC1) while Z2 = R4 + 2πfL1, so that

[R2 + 1/(2πfC1)] / R1 = R3 / [R4 + 2πfL1]; or
[R4 + 2πfL1] = R3R1 / [R2 + 1/(2πfC1)]; or
R3R1 = R2R4 + 2πfL1R2 + R4/(2πfC1) + L1/C1.

When the bridge is balanced, the reactive components are equal, so 2πfL1R2 = R4/(2πfC1), or R4 = (2πf)²L1R2C1.
Substituting R4, one comes up with the following equation:

R3R1 = [R2 + 1/(2πfC1)]·(2πf)²L1R2C1 + 2πfL1R2 + L1/C1; or
L1 = R3R1C1 / [(2πfR2C1)² + 4πfC1R2 + 1].

After dropping the reactive (middle) component of the equation, since the bridge is balanced, this reduces to L1 = R3R1C1 / [1 + (2πfR2C1)²]. Thus, the equations for L1 and R4 for the Hay Bridge in Figure 1 when it is balanced are:

L1 = R3R1C1 / [1 + (2πfR2C1)²]; and
R4 = (2πfC1)²R2R3R1 / [1 + (2πfR2C1)²]

Advantages: it gives simple expressions for the unknown inductance and resistance. Limitations: it is not suited for the measurement of low-Q coils.

A Wien bridge oscillator is a type of electronic oscillator that generates sine waves. It can generate a large range of frequencies. The circuit is based on an electrical network originally developed by Max Wien in 1891. Wien did not have a means of developing electronic gain, so a workable oscillator could not be realized. The modern circuit is derived from William Hewlett's 1939 Stanford University master's degree thesis. Hewlett, along with David Packard, co-founded Hewlett-Packard. Their first product was the HP 200A, a precision sine wave oscillator based on the Wien bridge. The 200A was one of the first instruments to produce such low distortion.

Amplitude stabilization: The key to Hewlett's low-distortion oscillator is effective amplitude stabilization. The amplitude of electronic oscillators tends to increase until clipping or other gain limitation is reached. This leads to high harmonic distortion, which is often undesirable. Hewlett used an incandescent bulb as a positive temperature coefficient (PTC) thermistor in the oscillator feedback path to limit the gain. The resistance of light bulbs and similar heating elements increases as their temperature increases. If the oscillation frequency is significantly higher than the reciprocal of the thermal time constant of the heating element, the radiated power is proportional to the oscillator power. Since heating elements are close to black-body radiators, they follow the Stefan-Boltzmann law. The radiated power is proportional to T^4, so resistance increases at a greater rate than amplitude. If the gain is inversely proportional to the oscillation amplitude, the oscillator gain stage reaches a steady state and operates as a near-ideal class A amplifier, achieving very low distortion at the frequency of interest. At lower frequencies the time period of the oscillator approaches the thermal time constant of the thermistor element and the output distortion starts to rise significantly.

Light bulbs have their disadvantages when used as gain control elements in Wien bridge oscillators, most notably a very high sensitivity to vibration due to the bulb's microphonic nature amplitude-modulating the oscillator output, and a limitation in high-frequency response due to the inductive nature of the coiled filament. In modern designs, distortion as low as 0.0008% (-100 dB) can be achieved with only modest improvements to Hewlett's original circuit. Wien bridge oscillators that use thermistors also exhibit "amplitude bounce" when the oscillator frequency is changed. This is due to the low damping factor and long time constant of the crude control loop, and disturbances cause the output amplitude to exhibit a decaying sinusoidal response. This can be used as a rough figure of merit, as the greater the amplitude bounce after a disturbance, the lower the output distortion under steady-state conditions.

Input admittance analysis: If Av is greater than 1, the input admittance is a negative resistance in parallel with an inductance.
If a resistor is placed in parallel with the amplifier input, it will cancel some of the negative resistance. If the net resistance is negative, amplitude will grow until clipping occurs. If a resistance is added in parallel with exactly the value of R, the net resistance will be infinite and the circuit can sustain stable oscillation at any amplitude allowed by the amplifier. Frequency sensitive Supply voltage is purely sinusoidal
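As a numerical cross-check of the Schering and Hay balance formulas given above, here is a short Python sketch. The component values are invented for illustration; the functions simply evaluate the balance relations quoted in the text (C3 = R1C2/R2 and R3 = R2C1/C2 for the Schering bridge, and the L1 and R4 expressions for the Hay bridge).

```python
import math

def schering_unknowns(r1, r2, c1, c2):
    """Schering bridge balance (labelling as above): C3 = R1*C2/R2, series R3 = R2*C1/C2."""
    return r1 * c2 / r2, r2 * c1 / c2   # (C3, R3)

def hay_unknowns(r1, r2, r3, c1, f):
    """Hay bridge balance (labelling as above):
    L1 = R3*R1*C1 / (1 + (2*pi*f*R2*C1)**2)
    R4 = (2*pi*f*C1)**2 * R2*R3*R1 / (1 + (2*pi*f*R2*C1)**2)
    """
    w = 2 * math.pi * f
    denom = 1 + (w * r2 * c1) ** 2
    l1 = r3 * r1 * c1 / denom
    r4 = (w * c1) ** 2 * r2 * r3 * r1 / denom
    return l1, r4

c3, r3 = schering_unknowns(r1=1e3, r2=2e3, c1=50e-12, c2=100e-12)
print(f"Schering: C3 = {c3 * 1e12:.0f} pF, series R3 = {r3:.0f} ohm")

l1, r4 = hay_unknowns(r1=1e3, r2=1e3, r3=2e3, c1=0.1e-6, f=1e3)
print(f"Hay: L1 = {l1 * 1e3:.1f} mH, R4 = {r4:.1f} ohm")
```

Note how the Hay results depend on the excitation frequency f, in contrast to the Maxwell and Schering bridges.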
{"url":"https://www.brainkart.com/article/D-C-and-A-C-Bridges_12734/","timestamp":"2024-11-05T19:32:52Z","content_type":"text/html","content_length":"114606","record_id":"<urn:uuid:b882e5bf-ba2b-4d79-88d6-cbaafba9967b>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00382.warc.gz"}
ManPag.es - stbrfs.f − subroutine STBRFS (UPLO, TRANS, DIAG, N, KD, NRHS, AB, LDAB, B, LDB, X, LDX, FERR, BERR, WORK, IWORK, INFO) Function/Subroutine Documentation subroutine STBRFS (characterUPLO, characterTRANS, characterDIAG, integerN, integerKD, integerNRHS, real, dimension( ldab, * )AB, integerLDAB, real, dimension( ldb, * )B, integerLDB, real, dimension( ldx, * )X, integerLDX, real, dimension( * )FERR, real, dimension( * )BERR, real, dimension( * )WORK, integer, dimension( * )IWORK, integerINFO) STBRFS provides error bounds and backward error estimates for the solution to a system of linear equations with a triangular band coefficient matrix. The solution matrix X must be computed by STBTRS or some other means before entering this routine. STBRFS does not do iterative refinement because doing so cannot improve the backward error. UPLO is CHARACTER*1 = ’U’: A is upper triangular; = ’L’: A is lower triangular. TRANS is CHARACTER*1 Specifies the form of the system of equations: = ’N’: A * X = B (No transpose) = ’T’: A**T * X = B (Transpose) = ’C’: A**H * X = B (Conjugate transpose = Transpose) DIAG is CHARACTER*1 = ’N’: A is non-unit triangular; = ’U’: A is unit triangular. N is INTEGER The order of the matrix A. N >= 0. KD is INTEGER The number of superdiagonals or subdiagonals of the triangular band matrix A. KD >= 0. NRHS is INTEGER The number of right hand sides, i.e., the number of columns of the matrices B and X. NRHS >= 0. AB is REAL array, dimension (LDAB,N) The upper or lower triangular band matrix A, stored in the first kd+1 rows of the array. The j-th column of A is stored in the j-th column of the array AB as follows: if UPLO = ’U’, AB(kd+1+i-j,j) = A(i,j) for max(1,j-kd)<=i<=j; if UPLO = ’L’, AB(1+i-j,j) = A(i,j) for j<=i<=min(n,j+kd). If DIAG = ’U’, the diagonal elements of A are not referenced and are assumed to be 1. LDAB is INTEGER The leading dimension of the array AB. LDAB >= KD+1. B is REAL array, dimension (LDB,NRHS) The right hand side matrix B. LDB is INTEGER The leading dimension of the array B. LDB >= max(1,N). X is REAL array, dimension (LDX,NRHS) The solution matrix X. LDX is INTEGER The leading dimension of the array X. LDX >= max(1,N). FERR is REAL array, dimension (NRHS) The estimated forward error bound for each solution vector X(j) (the j-th column of the solution matrix X). If XTRUE is the true solution corresponding to X(j), FERR(j) is an estimated upper bound for the magnitude of the largest element in (X(j) - XTRUE) divided by the magnitude of the largest element in X(j). The estimate is as reliable as the estimate for RCOND, and is almost always a slight overestimate of the true error. BERR is REAL array, dimension (NRHS) The componentwise relative backward error of each solution vector X(j) (i.e., the smallest relative change in any element of A or B that makes X(j) an exact solution). WORK is REAL array, dimension (3*N) IWORK is INTEGER array, dimension (N) INFO is INTEGER = 0: successful exit < 0: if INFO = -i, the i-th argument had an illegal value Univ. of Tennessee Univ. of California Berkeley Univ. of Colorado Denver NAG Ltd. November 2011 Definition at line 188 of file stbrfs.f. Generated automatically by Doxygen for LAPACK from the source code.
{"url":"https://manpag.es/SUSE131/3+STBRFS","timestamp":"2024-11-07T03:31:55Z","content_type":"text/html","content_length":"22483","record_id":"<urn:uuid:56a9d208-54fa-4a9e-9b67-e212da1e55b9>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00213.warc.gz"}
Accounting Cost Behavior

8.2. Scatter-graph method

The scatter graph method (also called scatter plot or scatter chart method) involves estimating the fixed and variable elements of a mixed cost visually on a graph. The scatter-graph method requires that all recent, normal data observations be plotted on a cost (Y-axis) versus activity (X-axis) graph. The vertical axis of the graph represents the total costs and the horizontal axis shows the volume of related activity. Let us again use the example of Friends Company and review their activities for the past six months (see Illustration 14 from the previous section). The first step is to plot the points on a graph. Then draw a line that most closely represents a straight line composed of all the data points. The graph using data points from Illustration 14 is shown in Illustration 15.

Illustration 15: Scatter graph

The point where this line intersects the vertical axis is the fixed costs, or $14,000 in our case. The angle (slope) of the line can be calculated to give a fairly accurate estimate of the variable cost per unit. We can see from the graph that production of 20,000 valves will cost Friends Company $75,000 and production of 25,000 valves will cost $90,000. Knowing this information we can calculate the variable cost per unit:

(Y2 - Y1) / (X2 - X1) = ($90,000 - $75,000) / (25,000 - 20,000) = $15,000 / 5,000 = $3

When the two variables become known, we can use them in the cost formula: Y = F + V x X, where F is the fixed cost, V is the variable cost per unit, and X is the production level. So, the cost formula looks like this: Y = $14,000 + $3 x X. Using this formula we can calculate the total costs of activity in the range of 10,000 to 28,000 valves per month and then separate them into fixed and variable components. For example, assume that production of 24,000 valves is planned for the next period. Using the formula we can determine that the total costs would be: Y = $14,000 + $3 x 24,000 = $86,000. $14,000 is fixed and $72,000 is variable, for a total of $86,000 ($14,000 + $72,000). This method is simple to use and provides clear representation of the correlation between costs and the volume of activity. However, the disadvantage of this method is the difficulty in determining the location of the best-fit line.

8.3. Method of least squares

The most robust method of separating mixed costs is the least-squares regression method. This method requires the use of 30 or more past data observations for both the activity level (in units) and the total costs. The method of least squares identifies the line that best fits the data points (the sum of the squared deviations is minimized). This method is the most sophisticated and provides the user with a measure of the goodness of fit, which can be used to assess the usefulness of the cost formula. Usually this method requires the use of software packages, and therefore, will not be discussed in this tutorial.
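Although the tutorial does not go into the least-squares method, it is easy to carry out with standard numerical software. The Python sketch below is illustrative only: the six monthly observations are made up (only the two graph points quoted above, 20,000 valves at $75,000 and 25,000 valves at $90,000, come from the text), and they are simply consistent with the cost formula Y = $14,000 + $3 x X plus a little noise.

```python
import numpy as np

# Activity (valves produced) and total cost ($) for six months (illustrative data)
volume = np.array([12_000, 16_000, 20_000, 22_000, 25_000, 28_000])
cost   = np.array([50_500, 61_800, 75_000, 79_600, 90_000, 97_900])

# Least-squares fit of a straight line: cost = fixed + variable_per_unit * volume
variable_per_unit, fixed = np.polyfit(volume, cost, deg=1)
print(f"Estimated fixed cost:         ${fixed:,.0f}")
print(f"Estimated variable cost/unit: ${variable_per_unit:.2f}")

# Predict total cost for a planned production level, e.g. 24,000 valves
planned = 24_000
print(f"Predicted cost at {planned:,} units: ${fixed + variable_per_unit * planned:,.0f}")
```

The fitted intercept plays the role of the fixed cost and the slope the variable cost per unit, exactly as in the scatter-graph method, but without the subjectivity of drawing the line by eye.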
{"url":"https://simplestudies.com/accounting-cost-behavior.html/page/7","timestamp":"2024-11-02T05:18:13Z","content_type":"application/xhtml+xml","content_length":"26768","record_id":"<urn:uuid:28c03163-e3ee-4428-8e2b-e4d80d0c362e>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00322.warc.gz"}
12.1 Flow Rate and Its Relation to Velocity Chapter 12 Fluid Dynamics and Its Biological and Medical Applications 12.1 Flow Rate and Its Relation to Velocity • Calculate flow rate. • Define units of volume. • Describe incompressible fluids. • Explain the consequences of the equation of continuity. Flow rate [latex]{Q}[/latex] is defined to be the volume of fluid passing by some location through an area during a period of time, as seen in Figure 1. In symbols, this can be written as [latex]{Q\:=}[/latex] [latex]{\frac{V}{t}},[/latex] where [latex]{V}[/latex] is the volume and [latex]{t}[/latex] is the elapsed time. The SI unit for flow rate is [latex]{\text{m}^3\text{/s}},[/latex] but a number of other units for [latex]{Q}[/latex] are in common use. For example, the heart of a resting adult pumps blood at a rate of 5.00 liters per minute (L/min). Note that a liter (L) is 1/1000 of a cubic meter or 1000 cubic centimeters ( [latex]{10^{-3}\text{ m}^3}[/latex] or [latex]{10^3\text{ cm}^3}[/latex] ). In this text we shall use whatever metric units are most convenient for a given situation. Figure 1. Flow rate is the volume of fluid per unit time flowing past a point through the area A. Here the shaded cylinder of fluid flows past point P in a uniform pipe in time t. The volume of the cylinder is Ad and the average velocity is v̄=d/t so that the flow rate is Q=Ad/t=Av̄. Example 1: Calculating Volume from Flow Rate: The Heart Pumps a Lot of Blood in a Lifetime How many cubic meters of blood does the heart pump in a 75-year lifetime, assuming the average flow rate is 5.00 L/min? Time and flow rate [latex]{Q}[/latex] are given, and so the volume [latex]{V}[/latex] can be calculated from the definition of flow rate. Solving [latex]{Q=V/t}[/latex] for volume gives Substituting known values yields [latex]\begin{array}{lcl} {V} & {=} & {(\frac{5.00\text{ L}}{1\text{ min}})(75\text{ y})(\frac{1\text{ m}^3}{10^3\text{L}})(5.26\times10^5\frac{\text{min}}{\text{y}})} \\ {} & {=} & {2.0\times10^5\ text{ m}^3.} \end{array}[/latex] This amount is about 200,000 tons of blood. For comparison, this value is equivalent to about 200 times the volume of water contained in a 6-lane 50-m lap pool. Flow rate and velocity are related, but quite different, physical quantities. To make the distinction clear, think about the flow rate of a river. The greater the velocity of the water, the greater the flow rate of the river. But flow rate also depends on the size of the river. A rapid mountain stream carries far less water than the Amazon River in Brazil, for example. The precise relationship between flow rate [latex]{Q}[/latex] and velocity [latex]{\bar{v}}[/latex] is where [latex]{A}[/latex] is the cross-sectional area and [latex]{\bar{v}}[/latex] is the average velocity. This equation seems logical enough. The relationship tells us that flow rate is directly proportional to both the magnitude of the average velocity (hereafter referred to as the speed) and the size of a river, pipe, or other conduit. The larger the conduit, the greater its cross-sectional area. Figure 1 illustrates how this relationship is obtained. 
The shaded cylinder has a volume which flows past the point [latex]\text{P}[/latex] in a time [latex]{t}.[/latex] Dividing both sides of this relationship by [latex]{t}[/latex] gives [latex]{\frac{V}{t}}[/latex] [latex]{=}[/latex] [latex]{\frac{Ad}{t}}.[/latex] We note that [latex]{Q=V/t}[/latex] and the average speed is [latex]{v\bar{v}=d/t}.[/latex] Thus the equation becomes [latex]{Q=A\bar{v}}.[/latex] Figure 2 shows an incompressible fluid flowing along a pipe of decreasing radius. Because the fluid is incompressible, the same amount of fluid must flow past any point in the tube in a given time to ensure continuity of flow. In this case, because the cross-sectional area of the pipe decreases, the velocity must necessarily increase. This logic can be extended to say that the flow rate must be the same at all points along the pipe. In particular, for points 1 and 2, [latex]\begin{array}{c} {Q_1=Q_2} \\ {A_1\bar{v}_1=A_2\bar{v}_2.} \end{array}[/latex] [latex]\rbrace[/latex] This is called the equation of continuity and is valid for any incompressible fluid. The consequences of the equation of continuity can be observed when water flows from a hose into a narrow spray nozzle: it emerges with a large speed—that is the purpose of the nozzle. Conversely, when a river empties into one end of a reservoir, the water slows considerably, perhaps picking up speed again when it leaves the other end of the reservoir. In other words, speed increases when cross-sectional area decreases, and speed decreases when cross-sectional area increases. Figure 2. When a tube narrows, the same volume occupies a greater length. For the same volume to pass points 1 and 2 in a given time, the speed must be greater at point 2. The process is exactly reversible. If the fluid flows in the opposite direction, its speed will decrease when the tube widens. (Note that the relative volumes of the two cylinders and the corresponding velocity vector arrows are not drawn to scale.) Since liquids are essentially incompressible, the equation of continuity is valid for all liquids. However, gases are compressible, and so the equation must be applied with caution to gases if they are subjected to compression or expansion. Example 2: Calculating Fluid Speed: Speed Increases When a Tube Narrows A nozzle with a radius of 0.250 cm is attached to a garden hose with a radius of 0.900 cm. The flow rate through hose and nozzle is 0.500 L/s. Calculate the speed of the water (a) in the hose and (b) in the nozzle. We can use the relationship between flow rate and speed to find both velocities. We will use the subscript 1 for the hose and 2 for the nozzle. Solution for (a) First, we solve [latex]{Q=A\bar{v}}[/latex] for [latex]{\bar{v}_1}[/latex] and note that the cross-sectional area is [latex]{A=\pi{r}^2},[/latex] yielding [latex]{\bar{v}_1\:=}[/latex] [latex]{\frac{Q}{A_1}}[/latex] [latex]{=}[/latex] [latex]{\frac{Q}{\pi{r}_1^2}}.[/latex] Substituting known values and making appropriate unit conversions yields [latex]{\bar{v}_1\:=}[/latex] [latex]{\frac{(0.500\text{ L/s})(10^{-3}\text{ m}^3\textbf{/L})}{\pi(9.00\times10^{-3}\text{m})^2}}[/latex] [latex]{=\:1.96\text{ m/s}}.[/latex] Solution for (b) We could repeat this calculation to find the speed in the nozzle [latex]{\bar{v}_2},[/latex] but we will use the equation of continuity to give a somewhat different insight. 
Using the equation which solving for [latex]{\bar{v}_2}[/latex] and substituting [latex]{\pi{r}^2}[/latex] for the cross-sectional area yields [latex]{\bar{v}_2\:=}[/latex] [latex]{\frac{A_1}{A_2}}[/latex] [latex]{\bar{v}_1\:=}[/latex] [latex]{\frac{\pi{r}_1^2}{\pi{r}_2^2}}[/latex] [latex]{\bar{v}_1\:=}[/latex] [latex]{\frac{r_1^2}{r_2^2}} [/latex] [latex]{\bar{v}_1}.[/latex] Substituting known values, [latex]{\bar{v}_2\:=}[/latex] [latex]{\frac{(0.900\text{ cm})^2}{(0.250\text{ cm})^2}}[/latex] [latex]{1.96\text{ m/s}=25.5\text{ m/s}}.[/latex] A speed of 1.96 m/s is about right for water emerging from a nozzleless hose. The nozzle produces a considerably faster stream merely by constricting the flow to a narrower tube. The solution to the last part of the example shows that speed is inversely proportional to the square of the radius of the tube, making for large effects when radius varies. We can blow out a candle at quite a distance, for example, by pursing our lips, whereas blowing on a candle with our mouth wide open is quite ineffective. In many situations, including in the cardiovascular system, branching of the flow occurs. The blood is pumped from the heart into arteries that subdivide into smaller arteries (arterioles) which branch into very fine vessels called capillaries. In this situation, continuity of flow is maintained but it is the sum of the flow rates in each of the branches in any portion along the tube that is maintained. The equation of continuity in a more general form becomes where [latex]{n_1}[/latex] and [latex]{n_2}[/latex] are the number of branches in each of the sections along the tube. Example 3: Calculating Flow Speed and Vessel Diameter: Branching in the Cardiovascular System The aorta is the principal blood vessel through which blood leaves the heart in order to circulate around the body. (a) Calculate the average speed of the blood in the aorta if the flow rate is 5.0 L /min. The aorta has a radius of 10 mm. (b) Blood also flows through smaller blood vessels known as capillaries. When the rate of blood flow in the aorta is 5.0 L/min, the speed of blood in the capillaries is about 0.33 mm/s. Given that the average diameter of a capillary is [latex]{8.0\:\mu},[/latex] calculate the number of capillaries in the blood circulatory system. We can use [latex]{Q=A\bar{v}}[/latex] to calculate the speed of flow in the aorta and then use the general form of the equation of continuity to calculate the number of capillaries as all of the other variables are known. Solution for (a) The flow rate is given by [latex]{Q=A\bar{v}}[/latex] or [latex]{\bar{v}=\frac{Q}{\pi{r}^2}}[/latex] for a cylindrical vessel. 
Substituting the known values (converted to units of meters and seconds) gives [latex]{\bar{v}\:=}[/latex] [latex]{\frac{(5.0\text{ L/min})(10^{-3}\text{ m}^3\textbf{/L})(1\text{ min/}60\text{ s})}{\pi(0.010\text{ m})^2}}[/latex] [latex]{=\:0.27\text{ m/s.}}[/latex] Solution for (b) Using [latex]{n_1A_1\bar{v}_1=n_2A_2\bar{v}_1},[/latex] assigning the subscript 1 to the aorta and 2 to the capillaries, and solving for [latex]{n_2}[/latex] (the number of capillaries) gives [latex] {n_2=\frac{n_1A_1\bar{v}_1}{A_2\bar{v}_2}}.[/latex] Converting all quantities to units of meters and seconds and substituting into the equation above gives [latex]{n_2\:=}[/latex] [latex]{\frac{(1)(\pi)(10\times10^{-3}\text{ m})^2(0.27\text{ m/s})}{(\pi)(4.0\times10^{-6}\text{ m})^2(0.33\times10^{-3}\text{ m/s})}}[/latex] [latex]{=\:5.0\times10^9\text{ Note that the speed of flow in the capillaries is considerably reduced relative to the speed in the aorta due to the significant increase in the total cross-sectional area at the capillaries. This low speed is to allow sufficient time for effective exchange to occur although it is equally important for the flow not to become stationary in order to avoid the possibility of clotting. Does this large number of capillaries in the body seem reasonable? In active muscle, one finds about 200 capillaries per [latex]{\text{mm}^3},[/latex] or about [latex]{200\times10^6}[/latex] per 1 kg of muscle. For 20 kg of muscle, this amounts to about [latex]{4\times10^9}[/latex] capillaries. Section Summary • Flow rate [latex]{Q}[/latex] is defined to be the volume [latex]{V}[/latex] flowing past a point in time [latex]{t},[/latex] or [latex]{Q=\frac{V}{t}}[/latex] where [latex]{V}[/latex] is volume and [latex]{t}[/latex] is time. • The SI unit of volume is [latex]{\text{m}^3}.[/latex] • Another common unit is the liter (L), which is [latex]{10^{-3}\text{ m}^3}.[/latex] • Flow rate and velocity are related by [latex]{Q=A\bar{v}}[/latex] where [latex]{A}[/latex] is the cross-sectional area of the flow and [latex]{\bar{v}}[/latex] is its average velocity. • For incompressible fluids, flow rate at various points is constant. That is, [latex]\begin{array}{c} {Q_1=Q_2} \\ {A_1\bar{v}_1=A_2\bar{v}_2} \\ {n_1A_1\bar{v}_1=n_2A_2\bar{v}_2.} \end{array}[/latex] [latex]\rbrace[/latex] Conceptual Questions 1: What is the difference between flow rate and fluid velocity? How are they related? 2: Many figures in the text show streamlines. Explain why fluid velocity is greatest where streamlines are closest together. (Hint: Consider the relationship between fluid velocity and the cross-sectional area through which it flows.) 3: Identify some substances that are incompressible and some that are not. Problems & Exercises 1: What is the average flow rate in [latex]{\text{cm}^3\text{/s}}[/latex] of gasoline to the engine of a car traveling at 100 km/h if it averages 10.0 km/L? 2: The heart of a resting adult pumps blood at a rate of 5.00 L/min. (a) Convert this to [latex]{\text{cm}^3\text{/s}}.[/latex] (b) What is this rate in [latex]{\text{m}^3\text{/s}}?[/latex] 3: Blood is pumped from the heart at a rate of 5.0 L/min into the aorta (of radius 1.0 cm). Determine the speed of blood through the aorta. 4: Blood is flowing through an artery of radius 2 mm at a rate of 40 cm/s. Determine the flow rate and the volume that passes through the artery in a period of 30 s. 5: The Huka Falls on the Waikato River is one of New Zealand’s most visited natural tourist attractions (see Figure 3). 
On average the river has a flow rate of about 300,000 L/s. At the gorge, the river narrows to 20 m wide and averages 20 m deep. (a) What is the average speed of the river in the gorge? (b) What is the average speed of the water in the river downstream of the falls when it widens to 60 m and its depth increases to an average of 40 m? Figure 3. The Huka Falls in Taupo, New Zealand, demonstrate flow rate. (credit: RaviGogna, Flickr) 6: A major artery with a cross-sectional area of [latex]{1.00\text{ cm}^2}[/latex] branches into 18 smaller arteries, each with an average cross-sectional area of [latex]{0.400\text{ cm}^2}.[/latex] By what factor is the average velocity of the blood reduced when it passes into these branches? 7: (a) As blood passes through the capillary bed in an organ, the capillaries join to form venules (small veins). If the blood speed increases by a factor of 4.00 and the total cross-sectional area of the venules is [latex]{10.0\text{ cm}^2},[/latex] what is the total cross-sectional area of the capillaries feeding these venules? (b) How many capillaries are involved if their average diameter is [latex]{10.0\:\mu\text{m}}?[/latex] 8: The human circulation system has approximately [latex]{1\times10^9}[/latex] capillary vessels. Each vessel has a diameter of about [latex]{8\:\mu\text{m}}.[/latex] Assuming cardiac output is 5 L/ min, determine the average velocity of blood flow through each capillary vessel. 9: (a) Estimate the time it would take to fill a private swimming pool with a capacity of 80,000 L using a garden hose delivering 60 L/min. (b) How long would it take to fill if you could divert a moderate size river, flowing at [latex]{5000\text{ m}^3\text{/s}},[/latex] into it? 10: The flow rate of blood through a [latex]{2.00\times10^{-6}\text{ -m}}[/latex] -radius capillary is [latex]{3.80\times10^9\text{ cm}^3\text{/s}}.[/latex] (a) What is the speed of the blood flow? (This small speed allows time for diffusion of materials to and from the blood.) (b) Assuming all the blood in the body passes through capillaries, how many of them must there be to carry a total flow of [latex]{90.0\text{ cm}^3\text{/s}}?[/latex] (The large number obtained is an overestimate, but it is still reasonable.) 11: (a) What is the fluid speed in a fire hose with a 9.00-cm diameter carrying 80.0 L of water per second? (b) What is the flow rate in cubic meters per second? (c) Would your answers be different if salt water replaced the fresh water in the fire hose? 12: The main uptake air duct of a forced air gas heater is 0.300 m in diameter. What is the average speed of air in the duct if it carries a volume equal to that of the house’s interior every 15 min? The inside volume of the house is equivalent to a rectangular solid 13.0 m wide by 20.0 m long by 2.75 m high. 13: Water is moving at a velocity of 2.00 m/s through a hose with an internal diameter of 1.60 cm. (a) What is the flow rate in liters per second? (b) The fluid velocity in this hose’s nozzle is 15.0 m/s. What is the nozzle’s inside diameter? 14: Prove that the speed of an incompressible fluid through a constriction, such as in a Venturi tube, increases by a factor equal to the square of the factor by which the diameter decreases. (The converse applies for flow out of a constriction into a larger-diameter region.) 15: Water emerges straight down from a faucet with a 1.80-cm diameter at a speed of 0.500 m/s. (Because of the construction of the faucet, there is no variation in speed across the stream.) 
(a) What is the flow rate in [latex]{\text{ cm}^3\text{/s}}?[/latex] (b) What is the diameter of the stream 0.200 m below the faucet? Neglect any effects due to surface tension.

16: Unreasonable Results A mountain stream is 10.0 m wide and averages 2.00 m in depth. During the spring runoff, the flow in the stream reaches [latex]{100,000\text{ m}^3\text{/s}}.[/latex] (a) What is the average velocity of the stream under these conditions? (b) What is unreasonable about this velocity? (c) What is unreasonable or inconsistent about the premises?

Glossary

flow rate: abbreviated Q, it is the volume V that flows past a particular point during a time t, or Q = V/t

liter: a unit of volume, equal to 10^−3 m^3

Problems & Exercises (selected answers)

1: [latex]{2.78\text{ cm}^3\text{/s}}[/latex]

3: 27 cm/s

5: (a) 0.75 m/s (b) 0.13 m/s

7: (a) [latex]{40.0\text{ cm}^2}[/latex] (b) [latex]{5.09\times10^7}[/latex]

11: (a) 12.6 m/s (b) [latex]{0.0800\text{ m}^3\text{/s}}[/latex] (c) No, independent of density.

13: (a) 0.402 L/s (b) 0.584 cm

15: (a) [latex]{127\text{ cm}^3\text{/s}}[/latex] (b) 0.890 cm
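As a quick numerical companion to Example 2 above, here is a small Python sketch that reproduces the hose-and-nozzle calculation from the flow rate Q = A·v̄ and the equation of continuity; the figures are the ones given in that example.

```python
import math

Q = 0.500e-3         # flow rate, m^3/s (0.500 L/s)
r_hose = 0.900e-2    # hose radius, m
r_nozzle = 0.250e-2  # nozzle radius, m

v_hose = Q / (math.pi * r_hose ** 2)          # Q = A * v  =>  v = Q / (pi r^2)
v_nozzle = v_hose * (r_hose / r_nozzle) ** 2  # continuity: A1 v1 = A2 v2

print(f"Speed in hose:   {v_hose:.2f} m/s")   # ~1.96 m/s
print(f"Speed in nozzle: {v_nozzle:.1f} m/s") # ~25.5 m/s
```

The squared radius ratio is why even a modest constriction produces a dramatically faster stream, as noted in the discussion of the example.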
{"url":"https://pressbooks.online.ucf.edu/algphysics/chapter/flow-rate-and-its-relation-to-velocity/","timestamp":"2024-11-03T09:40:20Z","content_type":"text/html","content_length":"285436","record_id":"<urn:uuid:7e7c4ece-3580-4f67-922d-42b23192027b>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00625.warc.gz"}
Flow with convective acceleration through a porous medium

The flow streaming into a porous and permeable medium with arbitrary but smooth wall surface is considered on the basis of the Euler equation (in the pure fluid region) and a generalized Darcy's law in which the convective acceleration is taken into account. The asymptotic behavior of the flow for small permeability of the medium is investigated. It is shown that the flow in the porous medium is irrotational except in the boundary layer next to the surface. The velocity distribution in the boundary layer is given in a universal form. Proper boundary conditions connecting the potential flow in the pure fluid region and the potential flow in the porous medium are obtained when the boundary layer is neglected.

Journal of Engineering Mathematics, Pub Date: January 1976

Keywords: Acceleration (Physics); Convective Flow; Porous Walls; Potential Flow; Boundary Conditions; Boundary Layer Equations; Flow Velocity; Reynolds Number; Velocity Distribution; Vortices; Fluid Mechanics and Heat Transfer; Boundary Condition; Permeability; Boundary Layer; Porous Medium; Asymptotic Behavior
{"url":"https://ui.adsabs.harvard.edu/abs/1976JEnMa..10...41Y/abstract","timestamp":"2024-11-05T23:46:01Z","content_type":"text/html","content_length":"37714","record_id":"<urn:uuid:6bd1a80f-b3af-4745-961a-a1525fe7ce8a>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00696.warc.gz"}
Employer National Insurance Calculator 2023/24 | iCalculator™

The Employer National Insurance Contributions Calculator is updated for the 2024/25 tax year so that you can calculate your employer NICs due to HMRC in addition to standard payroll costs. This is a simple tool that provides employee NI and employer NI calculations without the Employment Allowance factored in. If you are an employer looking to calculate the employer NI on several employees and/or include the Employment Allowance in 2024 you will find this calculator useful.

1. Enter annual salary. [iCalculator generates a National Insurance Calculation Link] 2. Click on the link to see a full National Insurance Calculation.

Employer NIC Calculator: Enter your employees' annual salary to view a full National Insurance Contributions calculation and graphical analysis for Employee and Employer NICs.

More great Salary and Tax Calculators by iCalculator: The following calculators by iCalculator were used and recommended by others who used the Employer NICs Calculator: Employer and Director Calculators, Payroll and Salary Calculators.

About the National Insurance Calculator: The Employers National Insurance Contributions Calculator is configured to calculate National Insurance Contributions for the 2023/24 tax year. Each National Insurance Contributions calculation provides a full breakdown of Employee and Employer NICs, so that you have a full picture of exactly what your employees cost. Staff costs are not just salaries: Employers National Insurance Contributions are applied in addition to an employee's salary and are payable each month. The Employers National Insurance Contributions calculation includes a graphical overview detailing the percentage breakdown so you can view the breakdown of each staff overhead and see the true cost of maintaining your Permanent Employees.

At iCalculator, we love making tax calculators simple. As an employer we understand that it can be tedious, confusing and often frustrating to calculate National Insurance Contributions for yourself as an employer and also for your employees. HMRC's new payroll software has made calculating Employer National Insurance Contributions easier in recent years, but it can be hard work and doesn't help if you are looking for the true cost of a new permanent Full Time Employee (FTE) for your business. Employer's National Insurance Contributions are paid on top of an employee's annual gross salary, so the true cost of a new staff member includes much more than just their salary (we won't go into recruitment fees here, as how you recruit and the costs of recruiting will vary depending on your business model and how you employ, Head Hunter or friends from the pub!).

Employers National Insurance Contributions Calculations: Click on one of the following pre-defined Employers National Insurance Contributions examples or generate your own bespoke graphical overview using the Employers National Insurance Contribution Generator above. Employer NIC Calculations
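For readers who prefer to see the arithmetic behind such a calculator, here is a minimal Python sketch of an employer NIC calculation. The threshold and rate used as defaults below are illustrative placeholders rather than figures taken from the page; always substitute the current HMRC secondary threshold and rate for the tax year you are calculating.

```python
def employer_nic(annual_salary, secondary_threshold=9_100.0, rate=0.138):
    """Employer National Insurance on one employee's annual salary.

    secondary_threshold and rate are illustrative defaults only;
    substitute the official HMRC figures for the relevant tax year.
    """
    return max(0.0, annual_salary - secondary_threshold) * rate

salary = 30_000.0
nic = employer_nic(salary)
print(f"Salary: £{salary:,.0f}  Employer NIC: £{nic:,.2f}  True cost: £{salary + nic:,.2f}")
```

This mirrors the point made above: the true cost of an employee is the gross salary plus the employer NIC charged on earnings above the threshold (before any Employment Allowance is applied).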
{"url":"https://www.icalculator.com/employer-NIC-calculator.html","timestamp":"2024-11-13T16:50:39Z","content_type":"text/html","content_length":"57838","record_id":"<urn:uuid:c58889f4-479e-4ec7-a5ff-c158c09d1bd4>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00336.warc.gz"}
Problem Solving
McCree, Marilyn, District 12

The student will be able to solve word problems using addition, subtraction, multiplication or division.

Equipment and Materials: a play grocery store, pointer, pencils, money, paper, ruler, magazines, cash register, scissors, newspaper ads.

Recommended Strategy: Discuss with your class how we solve problems in real life. We must decide what information we need in order to answer the question that is asked. Frequently we have a lot of information that we do not need, and sometimes we do not have the information that we do need. Discuss different things we see and buy when we go to the grocery store. Set up a grocery store in your classroom and attach prices to the items. Have a student select three or four items for purchase. Have the student give the total price of all items. Ask the student how much change they would receive from $5. Ask the student how much change they would receive from $10. Extend the activity described above by asking students what they would buy for $5. Could they buy one of each item? Two or more items? Have the students use grocery ads from the newspapers. Specify an amount of money they might have, say $20. Check several items and have them compute the total price and determine how much money they would have left. The students should make up word problems to exchange with other students. Give students cards with number "headlines" on them and have students make up word problems to go with them. Ex. (75 + 65 + 35 = _____).

1. Read the problem carefully. 2. Picture in your mind what is happening. 3. What information is given? 4. What is the question? 5. Decide what arithmetic you will use. Do the arithmetic. 6. Answer the question.
{"url":"https://smileprogram.info/ma8913.html","timestamp":"2024-11-15T00:23:58Z","content_type":"text/html","content_length":"2607","record_id":"<urn:uuid:a36cd0f0-7beb-4400-84e2-dbfcb61e9f71>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00131.warc.gz"}
Collection of Solved Problems Cold Sheet Task number: 1276 Estimate how many degrees Celsius can we reduce from the temperature of a sick child by wrapping him or her in a wet sheet. Note: Given that human body consists mainly of water, consider that its specific heat capacity has the same value as water. Assume that 30 % of the heat the child “gives off” is lost to the surroundings. Also neglect the metabolism, i.e. the chemical reactions during which the heat is released, and which maintains the body temperature. Estimate or measure the necessary data. • Solution It is a relatively simple task which uses the calorimetric formula. We know that the water in the sheet absorbs only 70 % of the released heat. We include the specific heat capacity of the sheet in the losses to the surroundings, i.e. we consider only the heat exchange between the child and water. We get an equation: \[0.7 Q_{\mathrm{released}} = Q_{\mathrm{absorbed}}\] \[0.7 c m_c (t_c - t) = c m_w (t - t_w),\] where m[c] and t[c] are the mass and the temperature of the child, m[w] and t[w] are the mass and the temperature of water, and t is the resulting temperature. We consider the specific heat capacity of water and the child to be the same. Thus we can reduce: \[0.7 m_c (t_c - t) = m_w (t - t_w).\] We can evaluate the unknown t from this equation. First we multiply out the parentheses. : \[0.7 m_c t_c - 0.7 m_c t = m_w t - m_w t_w,\] Then we transfer the terms containing the unknown t to the right side of the equation: \[ 0.7 m_c t + m_w t = 0.7 m_c t_c + m_w t_w\] Now we factor out and evaluate t: \[ (0.7 m_c + m_w) t = 0.7 m_c t_c + m_w t_w\] \[ t =\frac{ 0.7 m_c t_c + m_w t_w}{0.7 m_c + m_w}\] We substitute the following values to the resulting relation: – the mass of the child m[c] = 30 kg and his or her temperature t[c] = 40 °C – the temperature of the sheet soaked with water t[w] = 15 °C – the amount of the water kept in the sheet m[w] = 1.5 kg \[ t =\frac{ 0.7 \cdot{30} \cdot{40} + 1.5 \cdot{15}}{0.7 \cdot{30} + 1.5}\,\mathrm{^\circ C} = 38\,\mathrm{^\circ C}\] • Answer This simple task shows that in terms of heat exchange a wet wrap may actually significantly reduce the body temperature of the patient.
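A quick numerical check of this result, using the derived formula and the estimated values from the task (a 30 kg child at 40 °C, 1.5 kg of water in the sheet at 15 °C, and 70 % of the released heat absorbed), can be done in a few lines of Python:

```python
def equilibrium_temperature(m_child, t_child, m_water, t_water, efficiency=0.7):
    """Final temperature from 0.7 * m_c * (t_c - t) = m_w * (t - t_w),
    assuming equal specific heat capacities for the child and the water."""
    return (efficiency * m_child * t_child + m_water * t_water) / (efficiency * m_child + m_water)

t = equilibrium_temperature(m_child=30.0, t_child=40.0, m_water=1.5, t_water=15.0)
print(f"Resulting temperature: {t:.1f} °C")   # ~38.3 °C, i.e. roughly a 2 °C drop
```

Changing the estimates (more water in the sheet, a colder sheet, a smaller child) shows how sensitive the achievable temperature drop is to those assumptions.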
{"url":"https://physicstasks.eu/1276/cold-sheet","timestamp":"2024-11-11T17:59:57Z","content_type":"text/html","content_length":"27146","record_id":"<urn:uuid:de5e2f6d-c792-4212-b6d4-adf2a354dd7b>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00037.warc.gz"}
How to Find Mode: The Complete Guide

Hello Challenger, welcome to our complete guide to finding mode. Whether you are a student, researcher, or data analyst, understanding how to find mode is fundamental in statistical analysis. This article will provide you with a step-by-step guide on how to find mode and answer some of the most frequently asked questions on this topic.

Mode is a statistical measure that represents the most common value or values in a dataset. In other words, it is the value that occurs most frequently. Finding the mode of a set of data can help you describe the central tendency of the dataset and can be useful in making decisions such as determining the average income of a population or identifying trends in consumer behavior. In this guide, we will present a comprehensive explanation of how to find mode. We will explain the concept of mode, how to calculate mode manually and using statistical software, and how to interpret mode-based decisions.

What is Mode? Mode is simply the value that occurs most frequently in a dataset.

How to Calculate Mode Manually? To calculate mode manually, you must first organize your data in ascending or descending order. You will then count the frequency of each value and identify the value with the highest frequency. In cases where two or more values have the same frequency, there are multiple modes.

How to Find Mode in Statistical Software? Statistical software such as Excel, SPSS, and R have built-in functions that can automatically calculate mode. For example, in Excel, you can use the MODE function to find mode.

When is Mode Useful? Mode is useful when identifying the most common value in a dataset. It can help understand the central tendency of the data and identify potential trends.

How to Interpret Mode-based Decisions? Mode-based decisions should be made with caution. Although mode is a useful measure of central tendency, it does not provide a complete picture of the dataset. Other measures such as mean and median should also be considered before making any decisions.

How to Find Mode for a Frequency Distribution? To find the mode for a frequency distribution, you must identify the class interval with the highest frequency. The midpoint of that interval is considered the mode.

How to Interpret Unimodal and Bimodal Datasets? Unimodal datasets have one mode while bimodal datasets have two modes. Bimodal datasets are rare in data analysis, but when they occur, they can indicate the presence of two distinct sub-groups within the dataset.

Methods to Find Mode in Numerical Data

There are different methods to find mode in numerical data. Here are four methods:

Method 1: Manual Calculation. This method involves sorting the data and then counting the number of occurrences of each value. The value with the highest frequency is the mode.

Method 2: Statistical Software. Statistical software such as Excel, SPSS, and R have built-in functions that can automatically calculate mode.

Method 3: By Grouping or Class Intervals. When dealing with large datasets, you may group or bin the data into intervals. In such cases, you can identify the interval with the highest frequency and use the midpoint of the interval as the mode.

Method 4: Using Graphical Representations. Histograms and frequency polygons are graphical representations that can help identify the mode in a dataset. The peak of the histogram or frequency polygon indicates the mode.

Methods to Find Mode in Categorical Data

In categorical data, the mode represents the most frequent category.
Here are three methods to find mode in categorical data:

Method 1: Manual Calculation. This method involves sorting the data and then counting the occurrences of each category. The category with the highest frequency is the mode.

Method 2: Statistical Software. Statistical software such as Excel, SPSS, and R have built-in functions that can automatically calculate mode for categorical data.

Method 3: Using Bar Charts or Pie Charts. Bar charts and pie charts are graphical representations that can help identify the mode in categorical data. The category with the largest bar or the largest slice is the mode.

Table of How to Find Mode

Numerical data:
- Method 1: Manual Calculation: sort data & count frequency
- Method 2: Statistical Software: use built-in functions
- Method 3: By Grouping or Class Intervals: identify interval with highest frequency
- Method 4: Using Graphical Representations: create histograms or frequency polygons

Categorical data:
- Method 1: Manual Calculation: sort data & count frequency
- Method 2: Statistical Software: use built-in functions
- Method 3: Using Bar Charts or Pie Charts: create bar charts or pie charts

1. What is the difference between mode and median? Mode is the most commonly occurring value in a dataset while the median is the middle value of a dataset when arranged in ascending or descending order.

2. Can datasets have more than one mode? Yes, datasets can have multiple modes when two or more values have the same frequency.

3. What is the difference between mode and mean? Mode is the most commonly occurring value in a dataset while the mean is the average value of all the data points in a dataset.

4. Can mode be used for all types of data? Yes, mode can be used for both numerical and categorical data.

5. What is modal class? Modal class is the class interval with the highest frequency in a frequency distribution.

6. How is mode useful in decision-making? Mode can be useful in identifying the most common value in a dataset, which can help make informed decisions.

7. Is mode affected by outliers? No, mode is not affected by outliers.

8. Can mode be used as a measure of central tendency in all cases? No, mode should be used with caution as it does not provide a complete picture of the dataset.

9. What is the modal value of 2, 2, 3, 4, 5, 5, 5, 6? The modal value of this dataset is 5 since it has the highest frequency.

10. What is the mode of a discrete probability distribution? The mode of a discrete probability distribution is the value with the highest probability.

11. Can mode be used to determine the spread of a dataset? No, mode does not provide information about the spread of a dataset.

12. How is mode useful in identifying trends? Mode can indicate the most common value in a dataset, which can help identify trends.

13. What is a bimodal dataset? A bimodal dataset is a dataset with two distinct modes.

In conclusion, finding mode is an essential skill in statistical analysis. It can provide valuable insights into a dataset's central tendency and help identify trends. In this guide, we have covered how to find mode manually, using statistical software, and through graphical representations for both numerical and categorical data. Although it is a useful measure of central tendency, mode should be used with caution as it does not provide a comprehensive picture of the dataset. We hope this guide has helped you understand how to find mode better and apply it to your data analysis.
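Before the closing statement, here is a small Python sketch (not part of the original guide) showing the manual-count approach from Method 1 and the standard-library helpers, for both numerical and categorical data. It assumes Python 3.8+ for statistics.multimode.

```python
from collections import Counter
from statistics import multimode

# Numerical data: manual count, as in Method 1
values = [2, 2, 3, 4, 5, 5, 5, 6]
counts = Counter(values)
highest = max(counts.values())
modes = [v for v, c in counts.items() if c == highest]
print("Manual count modes:", modes)               # [5]

# The statistics module handles multiple modes directly
print("statistics.multimode:", multimode(values))

# Categorical data works the same way
colours = ["red", "blue", "red", "green", "blue", "red"]
print("Most common category:", Counter(colours).most_common(1)[0][0])   # 'red'
```

Collecting every value that ties for the highest count is what makes the manual approach handle bimodal and multimodal datasets correctly.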
Closing Statement with Disclaimer This article is for informational purposes only and should not be used as a substitute for professional advice. The author and publisher do not warrant the information presented herein to be accurate or complete, and they will not be responsible for any errors or omissions or for the results obtained from the use of such information. You should consult with a professional advisor for individual advice concerning your specific situation.
{"url":"https://www.iykoongchallenge.com/how-to-find-mode","timestamp":"2024-11-14T21:42:25Z","content_type":"text/html","content_length":"67476","record_id":"<urn:uuid:b6b08875-f42c-4234-bdf1-709e579b94d5>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00480.warc.gz"}
TOBIT procedure • Genstat Knowledge Base 2024 Performs a Tobit linear mixed model analysis on data with fixed-threshold censoring (M.C. Hannah & V.M. Cave). PRINT = string Controls printed output (summary); default summ VPRINT = string Controls printed output from the REML analysis of the data with censored observations replaced by their estimates (model, components, effects, means, stratumvariances, monitoring, tokens vcovariance, deviance, Waldtests, missingvalues); default mode, comp, Wald PSE = string Standard errors to be printed with tables of effects and means from the REML analysis (differences, estimates, alldifferences, allestimates, none); default diff PLOT = string To display a scatter plot of the data with censored observations replaced by their estimates against the observed data(scatterplot); default * MAXCYCLE = scalar Sets a limit on the number of iterations performed by the E-M algorithm; default 30 TOLERANCE = Sets tolerance limits for convergence of the E-M algorithm on the treatment means and the variance components; default 0.1 and 0.05 for the treatment means and variance components, variate respectively RMETHOD = string Which random terms to use when calculating the residuals during the E-step of the E-M algorithm (final, all); default final DIRECTION = The direction of the censoring (left, right); default left (i.e., the true values for the censored observations are less than or equal to the BOUND) string token Y = variate Response variate to be analysed; no default, must be set BOUND = scalar Censoring threshold; no default, must be set CENSORED = variate Indicator variable for censored observations, with values of one where the response values are censored and zero otherwise INITIAL = scalar or variate Scalar or a variate providing starting values for the censored observations in the E-M algorithm NEWY = variate Saves a copy of the response variate with the censored observations replaced by their estimates YCENSORED = variate Saves a logical variate indicating which Y values are censored SAVE = REML save structure REML save structure from the analysis of the data with censored observations replaced by their estimates The TOBIT procedure performs a linear mixed model analysis on data values that are subject to fixed threshold censoring. Such censoring occurs when a measurement cannot be taken above or below a bound. For example, chemical concentrations may be censored when they fall below a minimum level of quantification. The procedure uses an E-M algorithm to estimate values for the censored observations, and once converged, uses REML to analyse the response variate with the censored observations replaced by their estimates. TOBIT must be preceded by a VCOMPONENTS command to define the fixed and random models. (Note, however, that TOBIT does not accommodate spline terms in VCOMPONENTS, nor linear mixed models with complex covariance structures defined by VSTRUCTURE.) The response variate must be supplied using the Y parameter, and a scalar defining the fixed censoring threshold must be supplied using the BOUND parameter. By default, the data values are assumed to be left-censored (i.e., measurements less than or equal to the value specified by the BOUND parameter are censored). However, right-censoring (i.e., when measurements greater than or equal to the BOUND are censored) can be specified by setting the DIRECTION option to right. 
Censored observations in Y may be represented either by missing values or by values at or outside the BOUND (i.e., for left-censoring, y-values ≤ BOUND, or, for right-censoring, y-values ≥ BOUND). If missing values are used, an indicator variate, with values of one corresponding to censored observations and values of zero to the non-censored observations, must be supplied using the CENSORED parameter.
The MAXCYCLE, TOLERANCE and RMETHOD options, the INITIAL parameter and the VAOPTIONS procedure can be used to control various aspects of the E-M algorithm performed by TOBIT. The INITIAL parameter provides starting values for the estimates of the censored observations. If available, these may speed up convergence of the E-M algorithm. The values should be below the value specified by the BOUND parameter when DIRECTION=left, or above that value when DIRECTION=right. INITIAL can supply a scalar if a common starting value is to be used for all the censored observations. Alternatively, if different values are required, INITIAL should supply a variate of the same length as Y. Only the values corresponding to censored observations are used; the others are ignored. If INITIAL is not specified, the default is to use the value specified by the BOUND parameter.
The MAXCYCLE option specifies the maximum number of iterations performed by the E-M algorithm (default 30). By default, the E-M algorithm is deemed to have converged if the percentage change in each estimated treatment mean is less than 0.1%, and the percentage change in each estimated variance component is less than 0.05%. However, you can change these tolerance limits by setting the TOLERANCE option to a variate of length two. Its first value specifies the maximum acceptable percentage change for the treatment means, and its second value specifies the maximum acceptable percentage change for the variance components. The RMETHOD option specifies which random terms are used when estimating values for the censored observations during the E-step of the E-M algorithm. With RMETHOD=all, the censored observations are estimated from the fixed effects only, whereas with RMETHOD=final, the censored observations are estimated from the fixed and random effects; the default is final. Finally, the VAOPTIONS procedure can be used to specify the MAXCYCLE and WORKSPACE options of the REML commands used during the M-step of the E-M algorithm.
Printed output is controlled by the PRINT, VPRINT and PSE options. The PRINT option has one setting, summary, which prints information on the number of E-M algorithm iterations performed, the percentage of observations censored and the censoring threshold. This is the default, but you can suppress this output by setting option PRINT=*. The VPRINT and PSE options control the printed output from the REML analysis when the censored observations have been replaced by their estimates. The VPRINT option has the same settings as the PRINT option of the REML directive, other than that covariancemodels is excluded; the default is VPRINT=model,comp,Wald. Similarly, the settings of PSE are the same as those of the PSE option of the REML directive; the default is PSE=diff. You can set option PLOT=scatterplot to display a scatter plot of the data, plotting the new y-variate, with censored observations replaced by their estimates, against the observed response variate. When censored observations in Y are entered as missing values, they are plotted at the value specified by the BOUND parameter; otherwise, they are plotted at the values given in Y.
Superimposed onto this plot are a 1-1 line and a horizontal reference line at the censoring threshold defined by the BOUND parameter. By default, no plot is produced.
The NEWY parameter allows you to save a copy of the response variate with the censored observations replaced by their estimates. An indicator variable with values of one corresponding to censored observations in Y and values of zero to non-censored observations can be saved using the YCENSORED parameter. Note, this will be equivalent to any variate supplied by CENSORED. The SAVE parameter can be used to save the save structure from the REML analysis of the data with censored observations replaced by their estimates, for later use by other REML directives and procedures, such as VDISPLAY and VGRAPH.
Options: PRINT, VPRINT, PSE, PLOT, MAXCYCLE, TOLERANCE, RMETHOD, DIRECTION
Parameters: Y, BOUND, CENSORED, INITIAL, NEWY, YCENSORED, SAVE
The E-M (expectation-maximization) algorithm is an iterative two-step method for optimizing a model. The initial expectation step uses the initial values (either INITIAL, if given, or BOUND) for the censored observations. In the maximization step, the current estimates of the censored values are used in the y-variate in a standard REML analysis to estimate the fitted values and their variances. In subsequent expectation steps, each censored value is estimated as the expected value of the tail of a Normal distribution, using the mean and variance for that observation from the previous M-step model. The expected deviate in the lower tail of a Normal distribution (x < BOUND) is m - SQRT(v)*PRNORMAL(BOUND;m;v)/CLNORMAL(BOUND;m;v), i.e. the mean of a Normal distribution with mean m and variance v truncated above at BOUND.
Restrictions are not allowed.
Amemiya, T. (1984). Tobit models: a survey. Journal of Econometrics, 24, 3-61.
Dempster, A.P., Laird, N.M. & Rubin, D.B. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39, 1-38.
Taylor, J. (1973). The analysis of designed experiments with censored observations. Biometrics, 29, 35-43.
Tobin, J. (1958). Estimation of relationships for limited dependent variables. Econometrica, 26, 24-36.
See also
Directives: REML, VCOMPONENTS, VDISPLAY, VKEEP.
Procedures: CENSOR, GLTOBITPOISSON, HGTOBITPOISSON, RTOBITPOISSON, VAOPTIONS.
Commands for: REML analysis of linear mixed models.
CAPTION 'TOBIT example',\
  !T('Oats.gsh contains yield data from a split-plot experiment.',\
  'For details, see section 5.1 in',\
  'A Guide to Anova and Design in Genstat')
SPLOAD [PRINT=summary] '%Data%/Oats.gsh'
CAPTION 'Example 1: Yield left-censored data at 70.'; STYLE=meta
CALCULATE yield_lc = yield*(yield .GT. 70) + 70*(yield .LE. 70)
VCOMPONENTS [FIXED=nitrogen*variety] RANDOM=blocks/wplots/subplots
TOBIT [PLOT=scatterplot] Y=yield_lc; BOUND=70
CAPTION 'Example 2: Yield right-censored data at 130.'; STYLE=meta
CALCULATE yield_rc = yield*(yield .LT. 130) + 130*(yield .GE. 130)
VCOMPONENTS [FIXED=nitrogen*variety] RANDOM=blocks/wplots/subplots
TOBIT [DIRECTION=right; PLOT=scatterplot] Y=yield_rc; BOUND=130
CAPTION !T('Example 3: Yield right-censored data at 130,',\
  'recorded as missing values.'); STYLE=meta
CALCULATE yield_rc0 = REPLACE(yield_rc; 130; !s(*))
CALCULATE censored = yield_rc0 == !s(*)
TOBIT [DIRECTION=right; PLOT=scatterplot] Y=yield_rc0; BOUND=130;\
  CENSORED=censored
CAPTION 'Example 4: Saving structures to produce extra output.'; STYLE=meta
TOBIT [VPRINT=*] Y=yield_lc; BOUND=70; NEWY=newy; YCENSORED=ycen; SAVE=vsave
VPLOT [SAVE=vsave; RMETHOD=final] METHOD=fitted; PEN=ycen+1
VPLOT [SAVE=vsave; RMETHOD=all] METHOD=fitted; PEN=ycen+1
FACTOR [LEVELS=!(0,1); LABELS=!T(uncensored,censored)] ycenF; VALUES=ycen
DOTHISTOGRAM [KEYDESCRIPTION=''] newy; PENS=ycenF
VKEEP [FITTED=fit; SAVE=vsave]
DOTHISTOGRAM [KEYDESCRIPTION=''] fit; PENS=ycenF
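For readers who want to experiment with the E-M scheme outside Genstat, here is a minimal Python/NumPy sketch of the same idea under simplifying assumptions: an ordinary least-squares fit of fixed effects stands in for the REML M-step, and left-censoring at a known bound is assumed. The function names and the toy data are illustrative and are not part of the Genstat procedure.

import numpy as np
from scipy.stats import norm

def lower_tail_mean(bound, m, v):
    # E[X | X <= bound] for X ~ N(m, v): m - sqrt(v) * phi(z) / Phi(z), with z = (bound - m) / sqrt(v)
    s = np.sqrt(v)
    z = (bound - m) / s
    return m - s * norm.pdf(z) / norm.cdf(z)

def em_tobit(X, y, censored, bound, max_cycle=30, tol=1e-3):
    # Toy E-M for left-censored data: OLS M-step, tail-expectation E-step.
    y_work = y.astype(float).copy()
    y_work[censored] = bound                      # starting values (analogous to INITIAL/BOUND)
    beta = np.zeros(X.shape[1])
    for _ in range(max_cycle):
        beta_new, *_ = np.linalg.lstsq(X, y_work, rcond=None)                  # M-step
        fitted = X @ beta_new
        resid_var = np.mean((y_work - fitted) ** 2)
        y_work[censored] = lower_tail_mean(bound, fitted[censored], resid_var)  # E-step
        if np.max(np.abs(beta_new - beta)) < tol:
            beta = beta_new
            break
        beta = beta_new
    return beta, y_work

# Example: intercept-only model, left-censored at 70 (compare Example 1 above).
rng = np.random.default_rng(1)
y_true = rng.normal(100, 20, size=200)
censored = y_true <= 70
y_obs = np.where(censored, 70.0, y_true)
X = np.ones((200, 1))
beta, y_est = em_tobit(X, y_obs, censored, bound=70.0)
print("estimated mean:", beta[0])

The actual procedure replaces the least-squares step with a full REML fit of the specified mixed model and judges convergence from the percentage changes in the treatment means and variance components, as described above.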
{"url":"https://genstat.kb.vsni.co.uk/knowledge-base/tobit/","timestamp":"2024-11-06T08:22:05Z","content_type":"text/html","content_length":"52971","record_id":"<urn:uuid:8ff865a2-d366-408f-9351-c556bb3a900e>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00136.warc.gz"}
Connect the Dots - Galileo Educational Network
Connect the Dots
Category: Variables & Equations, Relations & Functions, Patterns, and 3D Objects & 2D Shapes
Suitable for Grade Level: Secondary and Elementary
The Math in this Problem: In this puzzle, students will gain an understanding of circuit boards and their well-designed structure, which avoids the use of wires and their messy arrangement. Given certain constraints, puzzle-solvers will need to figure out a way to structure a circuit board so that connections can be made successfully.
Circuit boards are made by imprinting a conducting metal like copper on a non-conducting material like fibreglass. If it were not for circuit boards, connections could be made by wires, but this can get very messy! Design a circuit board that connects opposite pairs of points on this circular disk. The problem is that no connectors may overlap and no more than two connectors may go “around the back” of the same point. The circuit board on the right successfully connects opposite points, but 3 connectors go around the back (red) of the lowest point, so this circuit board will melt.
• How many different solutions can you find?
• Prove that it is impossible for a pair of points to be connected directly across the circular disk (as is shown in the melting example above).
• Solve for 18 points where only 3 connectors may go “around the back” of a point.
• Circuit boards may be printed on both sides with the dots going through from one side of the board to the other. Create your own problem using two-sided printing.
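A quick way to experiment with the "no overlap" rule is to note that two connectors routed inside the disk can avoid each other exactly when their endpoints do not interleave around the boundary. The sketch below is a rough checking aid, not part of the original activity; the point labelling and the interleaving test are my own framing.

def must_cross(p, q, n):
    # Chords p=(a,b) and q=(c,d) on points 0..n-1 around a circle are forced to
    # cross inside the disk exactly when one endpoint of q lies strictly between
    # a and b (clockwise) and the other does not.
    a, b = p
    c, d = q
    def strictly_between(x, lo, hi):
        return 0 < (x - lo) % n < (hi - lo) % n
    return strictly_between(c, a, b) != strictly_between(d, a, b)

def forced_crossings(pairs, n):
    # List every pair of proposed inside-connectors that would be forced to overlap.
    bad = []
    for i in range(len(pairs)):
        for j in range(i + 1, len(pairs)):
            if must_cross(pairs[i], pairs[j], n):
                bad.append((pairs[i], pairs[j]))
    return bad

# 12 points, opposite pairs (k, k+6): routing every connector inside forces crossings,
# which is why some connectors have to go "around the back".
pairs = [(k, k + 6) for k in range(6)]
print(len(forced_crossings(pairs, 12)), "forced crossings")

Solvers can use such a check on the connectors they decide to route inside, and count separately how many connectors pass around the back of each point.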
{"url":"https://galileo.org/math-fair-problem/connect-the-dots/","timestamp":"2024-11-05T00:41:06Z","content_type":"text/html","content_length":"75289","record_id":"<urn:uuid:465fe250-a35b-4466-af03-852ab46b6278>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00212.warc.gz"}
CN107153149B - Power distribution network single-phase disconnection fault recognition method based on negative sequence voltage current characteristic - Google Patents
Application CN201710331464.4A, filed 2017-05-11 by Xi'an Jiaotong University; current assignee: XI'AN XIRUI CONTROL TECHNOLOGY Co., Ltd. Status: granted, active.
Abstract: The invention discloses a method for recognising single-phase disconnection (broken-conductor) faults in a power distribution network based on negative-sequence voltage and current characteristics, comprising: Step 1: acquire, at the substation, the three-phase voltages of the bus and the three-phase currents of each feeder; Step 2: extract the bus negative-sequence voltage and each feeder's negative-sequence current; Step 3: compute the derivative of each feeder's negative-sequence current; Step 4: compute the correlation coefficient between the bus negative-sequence voltage and the derivative of each feeder's negative-sequence current; Step 5: compare the correlation coefficients of the feeders: if the coefficient is greater than 0 the feeder is healthy, and if it is less than 0 the feeder is faulted. The method, based on the correlation between the negative-sequence voltage and the derivative of the negative-sequence current, has a bootstrapping (self-starting) property, is not affected by the neutral-grounding mode of the distribution network, and can reliably recognise all kinds of disconnection faults.
Power distribution network single-phase disconnection fault recognition method based on negative sequence voltage current characteristic
Technical field
The invention relates to the field of distribution network relay protection, and in particular to a method for identifying single-phase disconnection faults in a power distribution network.
Background technique
With the advance of distribution network construction, insulated overhead lines are used more and more widely. As the insulation rate of lines rises, disconnection faults caused by lightning strikes become more and more frequent, and construction work and other causes also produce such faults. After a single-phase disconnection the normal operation of the distribution network is not disturbed, so this kind of fault is often difficult to detect. However, if a disconnection fault is not dealt with promptly, it can lead to accidents such as electric shock of people and animals nearby, so it is necessary to study disconnection identification techniques for distribution networks.
Summary of the invention
The purpose of the invention is to provide a method for identifying single-phase disconnection faults in a power distribution network based on negative-sequence voltage and current characteristics, so as to solve the above technical problems.
To achieve the goals above, the invention adopts the following technical scheme.
A power distribution network single-phase disconnection fault recognition method based on negative-sequence voltage and current characteristics comprises the following steps:
Step 1: acquire, at the substation, the three-phase voltages of the bus and the three-phase currents of each feeder;
Step 2: extract the bus negative-sequence voltage and each feeder's negative-sequence current;
Step 3: compute the derivative of each feeder's negative-sequence current;
Step 4: compute the correlation coefficient between the bus negative-sequence voltage and the derivative of each feeder's negative-sequence current;
Step 5: compare the correlation coefficients of the feeders: if the coefficient is greater than 0 the feeder is healthy; if it is less than 0 the feeder is faulted.
Further, in Step 2 the bus negative-sequence voltage and each feeder's negative-sequence current are extracted by formula (3), where u_0(k) and i_0(k) are the sampled values of the residual (zero-sequence) voltage and the zero-sequence current, synthesised from the three-phase voltages and currents, and N is the number of samples per power-frequency cycle.
Further, Step 3 computes the derivative of each feeder's negative-sequence current by formula (4).
Further, Step 4 computes the correlation coefficient between the bus negative-sequence voltage and the derivative of each feeder's negative-sequence current by formula (5).
Compared with the prior art, the invention has the following advantages: the invention first establishes a model of the distribution network after a single-phase disconnection fault, then analyses the negative-sequence voltage and current characteristics of healthy and faulted feeders on the basis of this fault model, and proposes a method for recognising the faulted feeder. The method, based on the correlation between the negative-sequence voltage and the derivative of the negative-sequence current, has a bootstrapping (self-starting) property, is not affected by the neutral-grounding mode of the distribution network, and can reliably recognise all kinds of disconnection faults.
Brief description of the drawings
Fig. 1 is a schematic diagram of the negative-sequence equivalent network after a single-phase disconnection occurs in a distribution network;
Fig. 2 is a schematic diagram of a 10 kV distribution network simulation model;
Fig. 3 shows the correlation coefficients between the negative-sequence voltage and the negative-sequence current derivative after a single-phase disconnection fault in an isolated-neutral system;
Fig. 4 shows the correlation coefficients between the negative-sequence voltage and the negative-sequence current derivative after a single-phase disconnection fault in an arc-suppression-coil-grounded system.
Specific embodiment
The invention aims to solve the problem of identifying single-phase disconnection faults in a power distribution network. After a single-phase disconnection occurs, the negative-sequence equivalent network is as shown in Fig. 1, where Z_in denotes the equivalent negative-sequence reactance of the i-th feeder together with its load transformers and loads, and i_in denotes the negative-sequence current of the i-th feeder; Z_mdn and i_mdn denote the equivalent negative-sequence reactance and negative-sequence current downstream of the faulted feeder; Z_mun and i_mun denote the equivalent negative-sequence reactance and negative-sequence current upstream of the faulted feeder; Z_Gn and i_Gn denote the equivalent negative-sequence reactance and negative-sequence current of the system source; Z_Ln and i_Ln denote the equivalent negative-sequence reactance and negative-sequence current of the arc-suppression coil; and i_n denotes the equivalent negative-sequence current source injected into the network at the break point.
It can be seen from Fig. 1 that for the faulted feeder the negative-sequence current flows from the feeder towards the bus, whereas for a healthy feeder it flows from the bus into the feeder, because the feeder, its load transformers and its loads are equivalent to a single negative-sequence reactance. A healthy feeder satisfies relation (1) and the faulted feeder satisfies relation (2). From formulas (1) and (2) it can be seen that for a healthy feeder the negative-sequence voltage and the derivative of the negative-sequence current are positively correlated, and for the faulted feeder they are negatively correlated. The faulted feeder can therefore be distinguished by the following steps:
Step 1: acquire, at the substation, the three-phase bus voltages (u_a(k), u_b(k), u_c(k)) and the three-phase currents of each feeder (i_a(k), i_b(k), i_c(k));
Step 2: extract, by formula (3), the bus negative-sequence voltage u_n(k) and each feeder's negative-sequence current i_n(k), where u_0(k) and i_0(k) are the sampled values of the residual (zero-sequence) voltage and the zero-sequence current, synthesised from the three-phase voltages and currents, and N is the number of samples per power-frequency cycle;
Step 3: compute the derivative of each feeder's negative-sequence current by formula (4), where ΔT is the sampling step;
Step 4: compute the correlation coefficient between the bus negative-sequence voltage and the derivative of each feeder's negative-sequence current by formula (5);
Step 5: compare the correlation coefficients of the feeders: if the coefficient is greater than 0 the feeder is healthy; if it is less than 0 the feeder is faulted.
Simulation verification: Fig. 2 is a schematic diagram of a 10 kV distribution network simulation model built in PSCAD. In this model a 35 kV substation supplies the 10 kV system, in single-busbar form, through two main transformers. The bus has 4 main feeders, and the numbering of the sections on each feeder is shown in the figure. Sections 1, 3, 5 and 10 are cables; sections 2, 9, 11, 12 and 13 are overhead insulated lines; sections 4, 6, 7, 8 and 14 are overhead bare conductors. An arc-suppression coil is installed at the main-transformer neutral point. When switch K is open the system has an isolated neutral; when switch K is closed the system is grounded through the arc-suppression coil, with an over-compensation degree of 10%.
The section lengths are: L1 = 5.1 km, L2 = 4 km, L3 = 3.8 km, L4 = 7.5 km, L5 = 4 km, L6 = 10 km, L7 = 0.1 km, L8 = 3 km, L9 = 4 km, L10 = 3.2 km, L11 = 10 km, L12 = 5 km, L13 = 3 km.
Cable data: positive-sequence resistance r1 = 0.157 Ω/km, positive-sequence reactance x1 = 0.076 Ω/km, positive-sequence susceptance b1 = 132×10^-6 S/km; zero-sequence resistance r0 = 0.307 Ω/km, zero-sequence reactance x0 = 0.304 Ω/km, zero-sequence susceptance b0 = 110×10^-6 S/km.
Overhead insulated line parameters: positive-sequence resistance r1 = 0.27 Ω/km, positive-sequence reactance x1 = 0.352 Ω/km, positive-sequence susceptance b1 = 3.178×10^-6 S/km; zero-sequence resistance r0 = 0.42 Ω/km, zero-sequence reactance x0 = 3.618 Ω/km, zero-sequence susceptance b0 = 0.676×10^-6 S/km.
Bare conductor parameters for sections 7 and 8: positive-sequence resistance r1 = 0.91 Ω/km, positive-sequence reactance x1 = 0.403 Ω/km, positive-sequence susceptance b1 = 2.729×10^-6 S/km; zero-sequence resistance r0 = 1.06 Ω/km, zero-sequence reactance x0 = 3.618 Ω/km, zero-sequence susceptance b0 = 0.672×10^-6 S/km.
Bare conductor parameters for the other sections: positive-sequence resistance r1 = 0.63 Ω/km, positive-sequence reactance x1 = 0.392 Ω/km, positive-sequence susceptance b1 = 2.807×10^-6 S/km; zero-sequence resistance r0 = 0.78 Ω/km, zero-sequence reactance x0 = 3.593 Ω/km, zero-sequence susceptance b0 = 0.683×10^-6 S/km.
The two main transformers have the following parameters: capacity S_N = 2 MVA, short-circuit loss P_k = 20.586 kW, short-circuit voltage percentage U_k% = 6.37%, no-load loss P_0 = 2.88 kW, no-load current percentage I_0% = 0.61%; and capacity S_N = 2 MVA, short-circuit loss P_k = 20.591 kW, short-circuit voltage percentage U_k% = 6.35%, no-load loss P_0 = 2.83 kW, no-load current percentage I_0% = …
The distribution transformers are numbered consistently with the nodes to which they are connected; their capacities are S_5N = 50 kVA, S_7N = 500 kVA, S_8N = 200 kVA, S_9N = 1 MVA, S_10N = 100 kVA, S_12N = 1 MVA, S_13N = 400 kVA, S_14N = 630 kVA. For simplicity, the load carried by each distribution transformer is set to 80% of its capacity, with a power factor of 0.85.
Fig. 3 shows the simulated waveforms for a single-phase disconnection fault applied at the end of section 1 in the isolated-neutral system. It can be seen that the correlation coefficient of feeder 1 is negative while those of the other feeders are positive, so it can be determined that a disconnection fault has occurred on feeder 1.
Fig. 4 shows the simulated waveforms for a single-phase disconnection with a load-side earth fault applied at the head end of section 4 in the arc-suppression-coil-grounded system. The correlation coefficient of feeder 4 is negative while those of the other feeders are positive, so it can be determined that a disconnection fault has occurred on feeder 4.
In summary, the single-phase disconnection identification method based on the correlation between the negative-sequence voltage and the derivative of the negative-sequence current has a bootstrapping (self-starting) property, is not affected by the neutral-grounding mode of the distribution network, and can reliably identify all kinds of disconnection faults.
Claims (2)
1. A power distribution network single-phase disconnection fault recognition method based on negative-sequence voltage and current characteristics, characterised by comprising the following steps:
Step 1: acquire, at the substation, the three-phase voltages of the bus and the three-phase currents of each feeder;
Step 2: extract the bus negative-sequence voltage and each feeder's negative-sequence current;
Step 3: compute the derivative of each feeder's negative-sequence current;
Step 4: compute the correlation coefficient between the bus negative-sequence voltage and the derivative of each feeder's negative-sequence current;
Step 5: compare the correlation coefficients of the feeders: if the coefficient is greater than 0 the feeder is healthy; if it is less than 0 the feeder is faulted;
wherein in Step 2 the bus negative-sequence voltage and each feeder's negative-sequence current are extracted by formula (3), in which u_0(k) and i_0(k) are the sampled values of the residual (zero-sequence) voltage and the zero-sequence current, synthesised from the three-phase voltages and currents, and N is the number of samples per power-frequency cycle; and Step 4 computes the correlation coefficient by formula (5).
2. The power distribution network single-phase disconnection fault recognition method according to claim 1, characterised in that Step 3 computes the derivative of each feeder's negative-sequence current by formula (4), where ΔT is the sampling step.
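The scraped page does not reproduce formulas (3)-(5), so the following Python/NumPy sketch stands in for steps 3-5 with textbook substitutes: a forward-difference derivative in place of formula (4) and a Pearson correlation coefficient in place of formula (5). It assumes the bus negative-sequence voltage and the feeder negative-sequence currents have already been extracted (step 2); the function names and toy waveforms are illustrative and are not taken from the patent.

import numpy as np

def classify_feeders(u_neg, i_neg_by_feeder, dt):
    # Steps 3-5 of the method: differentiate each feeder's negative-sequence
    # current, correlate it with the bus negative-sequence voltage, and flag
    # feeders with a negative correlation coefficient as faulted.
    results = {}
    for name, i_neg in i_neg_by_feeder.items():
        di = np.diff(i_neg) / dt                 # forward difference, stand-in for formula (4)
        rho = np.corrcoef(u_neg[:-1], di)[0, 1]  # Pearson correlation, stand-in for formula (5)
        results[name] = ("faulted" if rho < 0 else "healthy", rho)
    return results

# Toy demo: a healthy feeder behaves like a reactance fed from the bus, so the
# derivative of its negative-sequence current tracks +u_neg; the faulted feeder
# feeds negative-sequence current back towards the bus, so its derivative tracks -u_neg.
fs, f = 4000, 50
t = np.arange(0, 0.1, 1 / fs)
u_neg = np.sin(2 * np.pi * f * t)
i_healthy = -np.cos(2 * np.pi * f * t) / (2 * np.pi * f)  # d/dt gives +sin(...)
i_faulted = np.cos(2 * np.pi * f * t) / (2 * np.pi * f)   # d/dt gives -sin(...)
feeders = {"feeder 1": i_faulted, "feeder 2": i_healthy, "feeder 3": i_healthy}
print(classify_feeders(u_neg, feeders, dt=1 / fs))

With these toy signals only feeder 1 receives a negative coefficient, matching the selection rule of step 5.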
{"url":"https://patents.google.com/patent/CN107153149B/en","timestamp":"2024-11-14T08:00:34Z","content_type":"text/html","content_length":"90992","record_id":"<urn:uuid:3b329292-a19f-43bc-9e48-77ec8c84d5a7>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00287.warc.gz"}
Robert Solovay
According to our database, Robert Solovay authored at least 16 papers between 1975 and 2017.
Strong measure zero and infinite games. Arch. Math. Log., 2017
Some new results on decidability for elementary algebra and geometry. Ann. Pure Appl. Log., 2012
A Version of Ω for which ZFC Cannot Predict a Single Bit. Proceedings of the Finite Versus Infinite, 2000
Extremes in the Degrees of Inferability. Ann. Pure Appl. Log., 1994
Learning via Queries in [+, <]. J. Symb. Log., 1992
When Oracles Do Not Help. Proceedings of the Fourth Annual Workshop on Computational Learning Theory, 1991
Learning Via Queries in [+, <]. Proceedings of the Third Annual Workshop on Computational Learning Theory, 1990
Recursively Enumerable Sets Modulo Iterated Jumps and Extensions of Arslanov's Completeness Criterion. J. Symb. Log., 1989
Injecting Inconsistencies into Models of PA. Ann. Pure Appl. Log., 1989
Explicit Henkin Sentences. J. Symb. Log., 1985
Erratum: A Fast Monte-Carlo Test for Primality. SIAM J. Comput., 1978
A Fast Monte-Carlo Test for Primality. SIAM J. Comput., 1977
Definability of Measures and Ultrafilters. J. Symb. Log., 1977
On Sets Cook-Reducible to Sparse Sets. SIAM J. Comput., 1976
Relativizations of the P =? NP Question. SIAM J. Comput., 1975
On Partitions into Stationary Sets. J. Symb. Log., 1975
{"url":"https://www.csauthors.net/robert-solovay/","timestamp":"2024-11-12T23:07:15Z","content_type":"text/html","content_length":"30328","record_id":"<urn:uuid:a4659236-8f64-46fc-aae8-360e5cfca25d>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00340.warc.gz"}
Unified inference for nonlinear factor models from panels with fixed and large time span
We provide unifying inference theory for parametric nonlinear factor models based on a panel of noisy observations. The panel has a large cross-section and a time span that may be either small or large. Moreover, we incorporate an additional source of information, provided by noisy observations on some known functions of the factor realizations. The estimation is carried out via penalized least squares, i.e., by minimizing the L2 distance between observations from the panel and their model-implied counterparts, augmented by a penalty for the deviation of the extracted factors from the noisy signals of them. When the time dimension is fixed, the limit distribution of the parameter vector is mixed Gaussian with conditional variance depending on the path of the factor realizations. On the other hand, when the time span is large, the convergence rate is faster and the limit distribution is Gaussian with a constant variance. In this case, however, we incur an incidental parameter problem since, at each point in time, we need to recover the concurrent factor realizations. This leads to an asymptotic bias that is absent in the setting with a fixed time span. In either scenario, the limit distribution of the estimates for the factor realizations is mixed Gaussian, but is related to the limiting distribution of the parameter vector only in the scenario with a fixed time horizon. Although the limit behavior is very different for the small versus large time span, we develop a feasible inference theory that applies, without modification, in either case. Hence, the user need not take a stand on the relative size of the time dimension of the panel. Similarly, we propose a time-varying data-driven weighting of the penalty in the objective function, which enhances efficiency by adapting to the relative quality of the signal for the factor realizations.
• Asymptotic bias
• Incidental parameter problem
• Inference
• Large data sets
• Nonlinear factor model
• Options
• Panel data
• Stable convergence
• Stochastic volatility
ASJC Scopus subject areas: Economics and Econometrics
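As a rough illustration of the penalized least-squares idea in this abstract (not the authors' actual estimator), the sketch below recovers a single factor realization f_t from a large cross-section at one time point by trading off the fit of a parametric function g(x; θ, f) against the deviation of f from a noisy external signal z_t. The functional form, the variable names and the penalty weight are assumptions made purely for illustration.

import numpy as np
from scipy.optimize import minimize

def penalized_ls_factor(y, x, z_t, lam, g, theta):
    # Estimate the time-t factor realization f_t by minimizing
    #   sum_i (y_i - g(x_i; theta, f))^2 + lam * (f - z_t)^2
    def objective(f):
        resid = y - g(x, theta, f[0])
        return np.sum(resid ** 2) + lam * (f[0] - z_t) ** 2
    res = minimize(objective, x0=np.array([z_t]), method="Nelder-Mead")
    return res.x[0]

# Toy model: the observations are a nonlinear (exponential) transform of the factor.
g = lambda x, theta, f: theta[0] + theta[1] * x * np.exp(f)
rng = np.random.default_rng(0)
theta_true, f_true = (0.5, 2.0), 0.3
x = rng.uniform(0.5, 1.5, size=500)                      # large cross-section at time t
y = g(x, theta_true, f_true) + rng.normal(0, 0.05, 500)  # noisy panel observations
z_t = f_true + rng.normal(0, 0.1)                        # noisy signal for the factor
print(penalized_ls_factor(y, x, z_t, lam=5.0, g=g, theta=theta_true))

In the paper's setting the parameter vector and the whole path of factor realizations are estimated jointly, and the penalty weight varies over time with the quality of the signal; the sketch only shows the inner factor-recovery step for a known parameter vector.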
{"url":"https://www.scholars.northwestern.edu/en/publications/unified-inference-for-nonlinear-factor-models-from-panels-with-fi","timestamp":"2024-11-11T17:27:30Z","content_type":"text/html","content_length":"56910","record_id":"<urn:uuid:09515626-0561-448c-80e1-b2f5cc3ea921>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00174.warc.gz"}
What is the use of activation function in neural network?
It is used to determine the output of a neural network, like yes or no. It maps the resulting values into a range such as 0 to 1 or -1 to 1 (depending upon the function). Activation functions can basically be divided into two types.
What is the input data in neural networks?
The input data is just your dataset, where each observation is run through sequentially from $x=1,…,x=i$. Each neuron has some activation, a value between 0 and 1, where 1 is the maximum activation and 0 is the minimum activation a neuron can have.
What is the function of the feedforward neural network?
The procedure is the same moving forward through the network of neurons, hence the name feedforward neural network. But things are not that simple: we also have an activation function, most commonly a sigmoid function, which scales the output to be between 0 and 1 again, so it is a logistic function.
What is a forward pass in neural networks?
To move forward through the network, called a forward pass, we iteratively use a formula to calculate each neuron in the next layer. Don't worry too much about the notation here: we call the neurons' activations $a$, the weights $w$ and the biases $b$, and these are collected into vectors.
Activation Functions
An activation function in a neural network defines how the weighted sum of the input is transformed into an output from a node or nodes in a layer of the network.
What is sign function in Perceptron?
Activation functions of the perceptron: the step function gets triggered above a certain value of the neuron output, else it outputs zero. The sign function outputs +1 or -1 depending on whether the neuron output is greater than zero or not. The sigmoid is the S-curve and outputs a value between 0 and 1.
Which algorithm is used for learning in neural network?
We use the gradient descent algorithm to find a local minimum of a function. The neural network's training converges to a local minimum by taking steps proportional to the negative of the gradient of the function. To find a local maximum, take steps proportional to the positive gradient of the function instead.
What is an activation function in machine learning?
Simply put, an activation function is a function that is added into an artificial neural network in order to help the network learn complex patterns in the data. When compared with a neuron-based model like the one in our brains, the activation function decides, at the end, what is to be fired to the next neuron.
What are the commonly used activation functions?
Popular types of activation functions and when to use them:
• Binary Step Function.
• Linear Function.
• Sigmoid.
• Tanh.
• ReLU.
• Leaky ReLU.
• Parameterised ReLU.
• Exponential Linear Unit.
What is leaky RELU function?
Leaky Rectified Linear Unit, or Leaky ReLU, is a type of activation function based on a ReLU, but it has a small slope for negative values instead of a flat slope. This type of activation function is popular in tasks where we may suffer from sparse gradients, for example training generative adversarial networks.
What is perceptron ML?
In machine learning, the perceptron is an algorithm for supervised learning of binary classifiers. It is a type of linear classifier, i.e. a classification algorithm that makes its predictions based on a linear predictor function combining a set of weights with the feature vector.
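To make the definitions above concrete, here is a small NumPy sketch (my own illustration, not taken from the quoted Q&A) of the activation functions mentioned, applied to the weighted sum of a single neuron's inputs.

import numpy as np

def step(z):        return np.where(z >= 0, 1.0, 0.0)          # binary step
def sign_fn(z):     return np.where(z > 0, 1.0, -1.0)          # sign, as in the perceptron
def sigmoid(z):     return 1.0 / (1.0 + np.exp(-z))            # logistic S-curve, output in (0, 1)
def tanh_fn(z):     return np.tanh(z)                          # output in (-1, 1)
def relu(z):        return np.maximum(0.0, z)                  # rectified linear unit
def leaky_relu(z, slope=0.01): return np.where(z > 0, z, slope * z)  # small slope for negative values

# A single neuron: activation( w . x + b )
x = np.array([0.5, -1.2, 3.0])   # inputs
w = np.array([0.4, 0.1, -0.6])   # weights
b = 0.2                          # bias
z = w @ x + b                    # weighted sum
for f in (step, sign_fn, sigmoid, tanh_fn, relu, leaky_relu):
    print(f.__name__, f(z))

In a forward pass these activations are applied layer by layer, with each layer's outputs becoming the next layer's inputs.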
{"url":"https://profoundadvices.com/what-is-the-use-of-activation-function-in-neural-network/","timestamp":"2024-11-03T01:28:10Z","content_type":"text/html","content_length":"57822","record_id":"<urn:uuid:b34a48e9-0a18-45c0-9b2a-690fe0786eeb>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00665.warc.gz"}
How to correctly store your wedding dress - Infinity Storage
How to correctly store your wedding dress
by Jose de Olim | Jul 28, 2021 | Blog
A wedding dress is a beautiful and sentimental piece that you would want to keep safe and treasured for the rest of your life. We often keep our wedding dresses to pass on to our kids or grandkids in the future. However, keeping such a bulky item in your closet or home storeroom can take up a lot of space. In this short blog we give you a few tips on preserving your wedding dress and storing it for long-term purposes.
Send the dress to the dry cleaners: When you are planning to store your dress, have a fresh dry-cleaning service done on the garment. This will get the dress ready for storage.
Sealing or vacuuming: Some companies choose to vacuum-seal the wedding dress before storing it in an acid-free box. However, this method is discouraged, since sealing promotes mold and mildew, gives the fabric permanent creases, and eliminates your ability to regularly inspect your gown.
Boxing: With this method, your dress is folded and placed in an acid-free box, along with acid-free tissue to protect the garment from permanent creasing (tissue should always be white in color to avoid bleeding into the dress). Boxes allow garments to breathe, and this type of packing material allows you to open the box and periodically inspect the health of the garment. Note: a box made from acid-free board is an excellent choice; however, this kind of box material can be difficult to find, so we advise that you use alternate wrapping before placing the dress in a box.
Garment bag: So, you might not want to store your wedding dress… Well, keeping your wedding dress in the correct garment bag can save you all that hassle. All you need to do is contact your wedding dress boutique and ask for further advice about garment bags for self-storage purposes. Otherwise, you could simply source this item from an online store. A garment bag will allow you to lay your garment flat or hang the item upright to avoid mishaps and creases.
At Infinity Storage our self-storage units allow you to store several items at a time. The space will be enough to store your wedding dress along with the many other items you have accumulated. If you need the space, we have got it! Visit our website and fill in the contact form and we will call you to provide further advice on our affordable, flexible leases.
I truly do take pleasure in writing but it just seems like the fіrst 10 to 15 minutes are generally wasted simply jսst trying to fіgure out how to begіn. Any suggestions or tips? Аpprеciate it! หี on 24th Jun 2023 at 3:11 pm It’ѕ not my first time to pay a quick vіsit this website, i am visiting this web page Ԁailly ɑnd get pⅼeaѕant data from here daily. Реферальный код binance on 25th Jun 2023 at 12:43 am I don’t think the title of your article matches the content lol. Just kidding, mainly because I had some doubts after reading the article. https://accounts.binance.com/ru/register-person?ref= http://miobi.ru on 26th Jun 2023 at 2:35 am I’ve had positive experiences negotiating prices with sellers through the messaging system. porn japan on 26th Jun 2023 at 3:24 pm This tеxt is worth everʏone’s attention. When can I find out more? vkหลุด on 27th Jun 2023 at 11:59 am It іs truⅼy a nice and helpful piece of info. I’m satisfied that you ϳust shаred this uѕeful info with us. Please stay us informed like this. Tһank you for sharing. โป้ on 27th Jun 2023 at 11:00 pm Heⅼlo, this weеkend is fastidious designed for me, since this moment i am reading this wonderful educational piece of writing heгe at my house. pornxxx on 29th Jun 2023 at 12:14 am Hі there, just became aware of your Ьlοg through Google, and found that it’s really informative. I’m going to wɑtch out fоr brussels. I will be grateful if yoս contіnue this in future. Many people will be benefіted from yоur writing. Сheers! หนัง av ญี่ปุ่น on 29th Jun 2023 at 8:25 pm Hi, i feel thɑt i saw you visited my wеb site sߋ i came to go back the desіre?.І am trying to to find issues to improvе my web site!I guesѕ its good enough to use a feԝ of youг concepts!! โดจิน on 2nd Jul 2023 at 12:22 pm Hello to every , becɑuse I am genuinely ҝeen of reading this ƅlog’s post to be updated on a regular basis. It consists of fastidious data. цпи оздс on 3rd Jul 2023 at 2:39 am Я оцениваю объективность и сбалансированность подхода автора к представлению информации. หนังr on 5th Jul 2023 at 1:49 am Ꮇy coder is tгying to persuadе me to move to .net from PHP. I have always disliked the іdea because of the costs. But he’s tryiong none the less. I’ve been using Movable-type on numerous wеbsites for about a year аnd am worried about switching to another platform. I have heard very gooԀ thingѕ about blogengine.net. Is there a waу I can transfer all my wοrdpress contеnt into it? Any kind of hеlp would be really appreciated! https://webref.ru/ on 5th Jul 2023 at 4:40 pm Статья предлагает различные точки зрения на проблему без попытки навязать свое мнение. АртМосковия on 5th Jul 2023 at 9:56 pm Я хотел бы выразить свою восторженность этой статьей! Она не только информативна, но и вдохновляет меня на дальнейшее изучение темы. Автор сумел передать свою страсть и знания, что делает эту статью поистине уникальной. หลุด mlive on 6th Jul 2023 at 5:57 pm Rіght here is the perfect web ѕite for anybody who wants to find out about this topic. You realize so mucһ іtѕ almost tough to argue with you (not that Ι actually would want to…HaHa). You definitely put a brɑnd new spin on a subject which has been written about for ages. Great stuff, just excellent! Брянский сайт собаководов on 7th Jul 2023 at 11:18 am Это помогает читателям получить полное представление о сложности и многообразии данного вопроса. 
thai porn on 7th Jul 2023 at 9:17 pm When І originally commented I clicked tһe “Notify me when new comments are added” checkbox and now each time ɑ cօmment is added I get four emails wіth the ѕame comment. Is there any way you can remove me from tһat serviⅽe? Thank you! pornhub on 8th Jul 2023 at 5:24 am Noᴡ I am ready to do my breakfast, once having my breakfast c᧐ming over again to read fuгtheг news. buy instagram followers 2022 reddit on 8th Jul 2023 at 7:15 am Hi there would you mind letting me know which webhost you’re using? I’ve loaded your blog in 3 different internet browsers and I must say this blog loads a lot quicker then most. Can you recommend a good hosting provider at a honest price? Cheers, I appreciate it! twicsy reviews on 8th Jul 2023 at 7:25 am I was recommended this blog by my cousin. I’m not sure whether this post is written by him as no one else know such detailed about my difficulty. You’re amazing! Thanks! instagram likes app ios on 8th Jul 2023 at 7:29 am Hello mates, how is the whole thing, and what you desire to say about this post, in my view its in fact remarkable for me. buy instagram followers private account on 8th Jul 2023 at 7:40 am I was curious if you ever thought of changing the structure of your site? Its very well written; I love what youve got to say. But maybe you could a little more in the way of content so people could connect with it better. Youve got an awful lot of text for only having one or 2 pictures. Maybe you could space it out better? buy followers on instagram instantly on 8th Jul 2023 at 7:45 am I blog quite often and I genuinely thank you for your information. This great article has really peaked my interest. I’m going to take a note of your blog and keep checking for new information about once per week. I opted in for your RSS feed as well. which is the best website to buy tiktok followers on 8th Jul 2023 at 8:03 am Hey! Do you use Twitter? I’d like to follow you if that would be ok. I’m undoubtedly enjoying your blog and look forward to new posts. buy tiktok followers social boosting on 8th Jul 2023 at 9:24 am Wow, this piece of writing is good, my younger sister is analyzing such things, so I am going to inform her. https://stoneforest.ru/ on 11th Jul 2023 at 5:14 pm Эта статья является примером качественного исследования и профессионализма. Автор предоставил нам широкий обзор темы и представил информацию с точки зрения эксперта. Очень важный вклад в популяризацию знаний! Повышение доли рынка с помощью эффективного SEO on 12th Jul 2023 at 2:22 am Автор статьи поддерживает свои утверждения ссылками на авторитетные источники. vklive on 13th Jul 2023 at 3:00 am Aftеr lоoking into a numbеr ᧐f the blog articles on yoսr blog, І seriously like your way of blogging. I saved as a favorite it to my bookmark site list and will be checking Ƅack in tһe near future. Please visit my website too and ⅼet mе know һow you feеⅼ. onlyfan on 13th Jul 2023 at 5:37 am Tһis post will heⅼp the internet visitors for setting up new weblog or even a weblog from start tо end. 
หนังเอวี on 14th Jul 2023 at 4:58 am Gоod day I am so glad I found your web site, Ι really found you by accident, while I was researching on Bing for something else, Νonetheless I am here now and woᥙld ϳust lіke to say kudos for a tremendous post and a all round tһrilling blog (I also lovе the themе/design), I don’t have time to ƅrowse it all at the minute but I have book-markeⅾ it and alsߋ added in your ᎡSS feeds, so when I have time I will be back to read much moгe, Please do keep up the fantastic work. หนัง av ญี่ปุ่น on 18th Jul 2023 at 4:24 am Hey І know this iѕ ߋff topic Ьᥙt I was wondering if you knew of any wіdgets І couⅼd add to my blog that automaticalⅼy tweet my newest twitter updates. I’ve been looking for a plug-in like this for ԛuite some time and was hoping maybe you would have some experience with something like this. Please let me know if you run into anything. I truⅼy enjoy reading your bⅼog and I look forward to your new updates. vkหลุด on 19th Jul 2023 at 4:57 am Hi to еvery one, it’s really a fastidious for me to pay a visit this web site, іt consists of useful Informаtion. vk only on 20th Jul 2023 at 7:00 am Wow, tһat’s what I was looking for, what a materiɑl! present here at this website, thanks admin of this site. посетить сайт автора on 20th Jul 2023 at 7:02 pm Статья содержит аргументы, подкрепленные сильными доказательствами и исследованиями. www.mohantea.com.ua on 21st Jul 2023 at 6:39 pm Я восхищен глубиной исследования, которое автор провел для этой статьи. Его тщательный подход к фактам и анализу доказывает, что он настоящий эксперт в своей области. Большое спасибо за такую качественную работу! https://78online.ru/ on 22nd Jul 2023 at 3:30 pm Pretty nice post. I just stumbled upon your weblog and wished to say that I have truly enjoyed browsing your blog posts. In any case I will be subscribing to your rss feed and I hope you write again very soon! энциклопедия on 22nd Jul 2023 at 10:09 pm Stunning quest there. What happened after? Take care! หนังr on 23rd Jul 2023 at 8:30 am I’m not that muϲh of a online rеader to be honest ƅut your blogs really nice, keep it up! I’ll go ahead and boоkmark youг websіte to come back ⅼater. остекление балконов on 23rd Jul 2023 at 12:19 pm Thank you for another great article. The place else may anyone get that kind of info in such an ideal means of writing? I have a presentation subsequent week, and I’m on the search for such information. https://officenam.ru/ on 24th Jul 2023 at 1:10 pm Hi, I check your blogs daily. Your writing style is awesome, keep doing what you’re doing! Организация импорта on 24th Jul 2023 at 9:04 pm Hi! Do you know if they make any plugins to protect against hackers? I’m kinda paranoid about losing everything I’ve worked hard on. Any recommendations? หลุด vk on 25th Jul 2023 at 10:17 am Ι got this site from my buddy who told mе regardіng this web sіte and at the moment this time І am ѵisiting this website and reading very informative articles hеre. vk 2022 on 25th Jul 2023 at 8:38 pm Tһanks for sharing your info. I truly appreciate your efforts and I am waiting for your next post thank you once again. หีไทย on 27th Jul 2023 at 1:04 pm I eveгy time ᥙsed tо study paragraph in news ρapers but now as I am а user of net thus from now I am using net for poѕts, tһanks to web. https://gameonline20.ru/ on 27th Jul 2023 at 5:26 pm Thank you for the good writeup. It actually was once a entertainment account it. Glance complicated to more added agreeable from you! By the way, how can we keep up a correspondence? 
avญี่ปุ่น on 28th Jul 2023 at 11:13 pm I delight in, reѕult in I found just what I was taking a ⅼook for. You’ve ended my four day long hunt! God Bless you man. Have a nice day. Bye скачать on 29th Jul 2023 at 12:30 am Hi there! I could have sworn I’ve been to this website before but after browsing through some of the post I realized it’s new to me. Anyhow, I’m definitely happy I found it and I’ll be book-marking and checking back frequently! Мир счастья — Женский интернет-журнал on 29th Jul 2023 at 1:20 am I do not even know how I ended up here, but I thought this post was great. I don’t know who you are but certainly you are going to a famous blogger if you aren’t already 😉 Cheers! Линукс Минт on 29th Jul 2023 at 11:57 am It’s awesome designed for me to have a website, which is helpful for my knowledge. thanks admin смотреть on 29th Jul 2023 at 12:21 pm It is in reality a great and useful piece of information. I am happy that you shared this helpful information with us. Please stay us informed like this. Thank you for sharing. ru on 30th Jul 2023 at 10:12 pm You can certainly see your skills within the work you write. The arena hopes for even more passionate writers like you who aren’t afraid to say how they believe. At all times follow your heart. скачать on 31st Jul 2023 at 12:43 am It’s impressive that you are getting ideas from this paragraph as well as from our discussion made here. линуксминт.рф on 31st Jul 2023 at 10:56 pm I am curious to find out what blog platform you’re working with? I’m having some minor security problems with my latest site and I’d like to find something more secure. Do you have any https://shurushki.ru/ on 1st Aug 2023 at 11:27 pm Its like you read my mind! You appear to know a lot about this, like you wrote the book in it or something. I think that you can do with a few pics to drive the message home a little bit, but instead of that, this is excellent blog. A great read. I’ll certainly be back. Это сообщение отправлено с сайта https://ru.gototop.ee/ pornxxx on 2nd Aug 2023 at 3:20 pm Wondеrful site. Lots of useful information here. I am sending it to a few buddieѕ ans aԀditionally sharing in delicious. And naturalⅼy, thanks for your sweat! xnxx on 2nd Aug 2023 at 6:54 pm Wondегful bⅼog! Do you һave any tipѕ for aspiгing writers? I’m planning to start my own blog ѕoon but I’m ɑ little lost on evеrything. Would you рropose starting with a free рlatform like Ԝordpress or go for a paid option? There are so many options out there that I’m comρletely overwhelmed .. Any ideaѕ? Appreciate it! โดจิน on 4th Aug 2023 at 12:12 am Its lікe you read my mind! You appear to understand so much approximately this, such as you wrote the guide in it or something. Ι belіeve tһat you just can do with some percent to power the message home a little bit, but іnstead of that, this is wonderful blog. A fantastic read. I will definitely be back. Ashley Alvarado on 4th Aug 2023 at 6:44 am Cross-Platform Compatibility: CandyMail.org offers seamless compatibility across multiple platforms, including web browsers, mobile devices, and desktop applications. This versatility allows users to access their anonymous email accounts from anywhere, at any time, enhancing convenience and flexibility. страны производители автомобилей марки Мазда on 4th Aug 2023 at 8:36 pm Thanks for one’s marvelous posting! I genuinely enjoyed reading it, you could be a great author.I will always bookmark your blog and will often come back very soon. 
I want to encourage you to continue your great work, have a nice afternoon! Why choose Candymail.org for anonymous communication? on 5th Aug 2023 at 3:43 am Securing Your Digital Footprint: Candymail.org as a Preventative Measure. Emphasizing the importance of proactive steps to safeguard one’s online presence. หนังอาร์ญี่ปุ่น on 5th Aug 2023 at 9:24 am Its sucһ as you read my thoughts! You appear to understand so much approхimately this, such as you wrote the ebooк in it or something. I feel that you could do with ѕome % to force the mesѕage house a little bit, but other than thаt, that is excelⅼеnt blog. A fantastic reaⅾ. I’ll definitely be back. ссылочка on 6th Aug 2023 at 5:38 pm I believe what you typed made a great deal of sense. But, think on this, what if you added a little content? I ain’t saying your content isn’t solid, however what if you added something that grabbed people’s attention? I mean How to correctly store your wedding dress – Infinity Storage is kinda vanilla. You ought to peek at Yahoo’s front page and note how they create post titles to get viewers to click. You might add a video or a related picture or two to grab readers interested about everything’ve got to say. Just my opinion, it might make your website a little bit more interesting. powered by GoToTop.ee click here now on 6th Aug 2023 at 11:04 pm I am in fact delighted to read this web site posts which contains tons of useful data, thanks for providing these information. powered by GoToTop.ee this website on 7th Aug 2023 at 12:31 pm Do you mind if I quote a couple of your articles as long as I provide credit and sources back to your site? My website is in the very same niche as yours and my visitors would really benefit from some of the information you present here. Please let me know if this alright with you. powered by GoToTop.ee หี on 10th Aug 2023 at 3:41 am еxcellent publish, very іnformative. I ⲣonder why the other specialists of this sector don’t гealize this. You should continue your writing. I’m confident, you’ve a great readers’ base alreаdy! av subthai on 13th Aug 2023 at 4:42 am It іs actuаlly a great and useful piece of info. I’m happy that you shared this useful info with us. Please keep us informed like this. Thank you for li chang on 16th Aug 2023 at 1:08 pm Ⲩoս actually make it appeаr really easү along with your presentation but I to find this topic to be reallу one thing which I feel I migһt by no means understɑnd. It seems too complicated and very wide for mе. I am having a look aheаⅾ on yoᥙr next submit, I wilⅼ try to get the cling of it! animekimi on 18th Aug 2023 at 6:24 am Ԛuality articles is the main to іnvite the users to pay a quick vіsit the site, that’s what this site is providing. https://komitetsp.ru/ on 18th Aug 2023 at 10:15 pm Heya i am for the first time here. I came across this board and I find It truly useful & it helped me out a lot. I hope to give something back and help others like you aided me. https://komitetsp.ru/ on 18th Aug 2023 at 10:46 pm Hi there! I know this is kinda off topic however I’d figured I’d ask. Would you be interested in exchanging links or maybe guest authoring a blog post or vice-versa? My website addresses a lot of the same subjects as yours and I feel we could greatly benefit from each other. If you are interested feel free to shoot me an e-mail. I look forward to hearing from you! Awesome blog by the way! 
หลุด vk on 19th Aug 2023 at 4:52 pm Whats սp this is kinda of off topic but I was wanting to know іf blogs use WYSIWYG editorѕ or if you have to manually code with HTML. I’m starting a blog soon but have no coding experience so Ι wanted to get guidance from someone witһ experience. Any help ԝoulԁ be greatly appreciated! jav subthai on 21st Aug 2023 at 9:42 am I һave read so many content about the blogger lovers hoԝever this aгticle is in fact a pleasant article, keep it սp. you could try this out on 23rd Aug 2023 at 11:00 am Terrific article! That is the type of info that are meant to be shared around the net. Disgrace on the seek engines for not positioning this submit upper! Come on over and seek advice from my web site . Thanks =) powered by GoToTop.ee полное описание on 23rd Aug 2023 at 1:46 pm Hey! I know this is kind of off topic but I was wondering if you knew where I could get a captcha plugin for my comment form? I’m using the same blog platform as yours and I’m having difficulty finding one? Thanks a lot! powered by GoToTop.ee juicermoz on 23rd Aug 2023 at 4:32 pm Thanks for published,hoping to see more high quality article like this. visit this website on 24th Aug 2023 at 9:21 pm whoah this blog is fantastic i love reading your articles. Keep up the great work! You realize, a lot of individuals are looking around for this information, you could aid them greatly. powered by GoToTop.ee home on 27th Aug 2023 at 1:10 am Nice post. I used to be checking constantly this weblog and I’m impressed! Very useful information specially the remaining part 🙂 I take care of such information a lot. I used to be looking for this certain information for a very long time. Thank you and best of luck. powered by GoToTop.ee anonymous on 27th Aug 2023 at 8:22 pm Marvelous, what a blog it is! This website presents useful data to us, keep it up. powered by GoToTop.ee av ซับไทย on 29th Aug 2023 at 12:06 pm Ꮋello there! Would you mind if I share your blog with my twitter group? There’s a lot of people that I think would really aρpreciate your content. Please let me know. Thanks คลิปหลุดไทย on 30th Aug 2023 at 7:24 am Good dаy! I could have sworn I’ve been to this blog before bսt after reading throuɡh some of tһe post I realized it’s new tо me. Anyways, I’m defіnitely delighted I found it and I’ll be ƅookmaгking and checking back frequently! คลิปxxx on 1st Sep 2023 at 1:20 am Hi there Deаr, are you really visiting thіs ᴡeb site daily, if so then үou ᴡill ɑbѕoⅼutеly obtain pleasant experience. porn free on 5th Sep 2023 at 6:15 am Mɑgnificent beat ! I wish to apprentice while you amend your website, how can i subscribe for a blog website? The acϲount aideⅾ me ɑ acceptable deal. I had been tiny bit acquainted of this your broadcast provided bright clear concept Фурнитура для кабинок on 6th Sep 2023 at 1:55 am Hey there, You’ve done a fantastic job. I will certainly digg it and personally recommend to my friends. I’m sure they’ll be benefited from this web site. vorbelutr ioperbir on 6th Sep 2023 at 9:00 pm This web site is my breathing in, rattling great pattern and perfect content. h anime on 8th Sep 2023 at 2:50 am Thanks in supрort of sharing such a pleasant thought, post is nice, thats wһy i have reаd it complеtely av japan on 10th Sep 2023 at 6:22 am Hi, I do bеlieve this is an excellent blߋg. I stumЬledupon it 😉 I wіll revisit once again since i have book markеd it. Money and freedom is the best way to change, may you be rich and continue to gսide others. 
หลุด mlive on 10th Sep 2023 at 2:38 pm Hі everyboɗy, һere every person is sharing theѕe experiеnce, thus it’s good to read thіs weblog, and I used to visit this weblog หีไทย on 11th Sep 2023 at 7:11 am Amazing! Тһis blog looks just like my old one! It’ѕ on a totally different topic but it has pretty much the same layout and design. Excellent choice of colors! латте on 11th Sep 2023 at 11:21 pm Good post. I am facing a few of these issues as well.. Новости Бреста on 12th Sep 2023 at 9:55 am continuously i used to read smaller articles that as well clear their motive, and that is also happening with this paragraph which I am reading at this place. Новости Брестской области on 13th Sep 2023 at 1:49 am I every time used to read article in news papers but now as I am a user of net thus from now I am using net for articles or reviews, thanks to web. pornhub on 13th Sep 2023 at 9:06 am Τhank you for the good writeuⲣ. It in fact waѕ a amusеment account it. Look advanced to more added agreeable from you! By tһe way, how can we communicate? https://mlan.by/ on 13th Sep 2023 at 11:11 am I am genuinely thankful to the holder of this web page who has shared this enormous post at at this place. cena zlata on 13th Sep 2023 at 2:18 pm Awsome info and straight to the point. I am not sure if this is in fact the best place to ask but do you people have any ideea where to hire some professional writers? Thank you 🙂 pridvinje.by on 13th Sep 2023 at 10:58 pm I am regular visitor, how are you everybody? This piece of writing posted at this web site is in fact nice. https://aqm.by/ on 15th Sep 2023 at 12:36 pm Great site you have got here.. It’s difficult to find good quality writing like yours these days. I honestly appreciate individuals like you! Take care!! eimi fukada on 15th Sep 2023 at 2:08 pm Wow, marveⅼous blog layout! How long have you bеen blogging foг? yoᥙ made bⅼogging looқ easy. The overall look of your web site is excellent, as well as the contеnt! Ответы на викторину Binance on 16th Sep 2023 at 1:16 am Excellent post. I’m experiencing many of these issues as well.. check this site out on 17th Sep 2023 at 8:05 pm When some one searches for his required thing, thus he/she needs to be available that in detail, therefore that thing is maintained over here. powered by GoToTop.ee xxx on 17th Sep 2023 at 11:27 pm Keep functioning ,remarkable job! новини Трускавця on 17th Sep 2023 at 11:41 pm I am genuinely delighted to read this webpage posts which includes plenty of helpful data, thanks for providing these kinds of statistics. dostavych.by on 18th Sep 2023 at 1:52 am Hi, constantly i used to check webpage posts here early in the daylight, since i love to find out more and more. bid2bite on 18th Sep 2023 at 8:14 am The homes of cannabis were recognized in old times. This plant has actually been used as a panacea for numerous disorders given that the start of time. Marijuana, unlike marijuana, does not consist of the THC drug, which is why you can not be intoxicated with it. However, the key to their solid therapeutic residential properties is another natural active ingredient – cannabidiol for brief called CBD. What is CBD? Cannabidiol or CBD is just one of over 80 substances from the group of molecules called cannabinoids, and among over 480 substances naturally found in cannabis. Of these compounds, CBD as well as THC are found in the highest possible concentrations of marijuana – which is why they are one of the most recognizable and also finest studied. 
CBD is the legal and most essential energetic compound in medical marijuana as well as cannabis, with a very large spectrum of task. Of the numerous hundred materials discovered in hemp, CBD has the strongest health buildings. CBD is an entirely risk-free hemp active ingredient that mimics the results of normally happening compounds in the human body. CBD is a vegetable oil that can just be found in hemp in nature. Healing buildings of cannabidiol (CBD): Neuroprotective and also neuroactive – battles neurodegenerative and also mental disorders, restores afferent neuron in the body, boosts the nerves, stops and avoids neurodegeneration, has a relaxing and anti-spastic effect. Anticancer – assaults and destroys cancer cells, prevents the proliferation of cancer cells, results in apoptosis or self-destruction of cancer cells. Antioxidant – minimizes oxidative tension, slows down as well as prevents aging of cells and also tissues, sustains the body’s all-natural defenses, safeguards versus complimentary radicals. Anti-inflammatory – prevents the inflammatory process, fights swelling, stops the development of swelling. Analgesic – alleviates pain, removes and also soothes pain throughout the body utilized both inside and on the surface. Antipsychotic – sober up and cleans up the mind, battles psychosis and anxiousness, relaxes and also soothes, unwinds, relaxes and also provides great rest. Antiemetic – reduces queasiness and throwing up, boosts thirst as well as hunger, affects normal body metabolic process. Antibacterial – has solid bactericidal homes, damages germs as well as stops their reproduction, decreases their development. Antifungal – stops the development of fungal diseases, kills mold and also fungi. Antiallergic – soothes as well as eliminates allergic reaction symptoms, Immunological – promotes the body’s natural resistance, boosts homeostasis. Skin-related – accelerates injury healing, battles skin diseases, renews the skin. The secret to their solid therapeutic residential or commercial properties is one more natural ingredient – cannabidiol for short called CBD. CBD is the lawful and also most important active compound in medical marijuana and also marijuana, with a very wide range of activity. Of the numerous hundred substances located in hemp, CBD has the strongest health and wellness homes. Cannabidiol (CBD), unlike THC, does not trigger side effects, is not envigorating or habit forming. CBD is an entirely risk-free hemp component that imitates the results of naturally occurring materials in the human body. https://readlawyer.com/ on 18th Sep 2023 at 11:20 am Hello, just wanted to tell you, I enjoyed this blog post. It was inspiring. Keep on posting! https://redlime.by/ on 18th Sep 2023 at 1:08 pm I am in fact happy to glance at this weblog posts which consists of plenty of valuable data, thanks for providing these kinds of statistics. про Дрогобич on 18th Sep 2023 at 2:44 pm Hi Dear, are you actually visiting this website daily, if so after that you will without doubt take good know-how. vlxxpro on 18th Sep 2023 at 10:19 pm I? d have to check with you here. Which can be not something I usually perform! I enjoy browsing a blog post that will make people think. As well, thanks for allowing us to help comment! Vietsub on 19th Sep 2023 at 6:17 am I like it when individuals get together and share ideas. Great website, keep it up! animexxx on 19th Sep 2023 at 9:48 am Hellο, I think yoᥙr blog might be having browser ϲompatibility іssues. 
When I look at your weƅsite in Opera, it looks fine but when opening in Internet Explorer, it has some overlapping. I just wanted to give you a quick heads up! Other then that, great blog! belotest.by on 19th Sep 2023 at 2:47 pm Hi there! Do you know if they make any plugins to protect against hackers? I’m kinda paranoid about losing everything I’ve worked hard on. Any recommendations? vk x on 21st Sep 2023 at 11:23 pm What’ѕ up eveгy one, һere every one is sharing these familiarity, thus it’s good to read this weblog, and I used to pay a quick visit thіs webpage every day. 토토사이트 on 23rd Sep 2023 at 9:37 am This was a really great contest and hopefully I can attend the next one. It was alot of fun and I really enjoyed myself หนังโป้ on 24th Sep 2023 at 5:54 am We’re a group оf volunteers and opening a new scheme in our community. Your ᴡeb site offered us with usefսl informatiοn to work on. You have pеrformed a formidable task and our entire neighborhood will probably be grateful to you. https://mobilniki.by/ on 25th Sep 2023 at 12:55 pm Wow! This blog looks exactly like my old one! It’s on a completely different subject but it has pretty much the same page layout and design. Excellent choice of colors! Hot Cam Girls on 26th Sep 2023 at 4:58 pm To love unconditionally is to love the way your online chatting with video Self or whatever you call https://wwd.com/ on 29th Sep 2023 at 6:53 pm My brother suggested I would possibly like this blog. He used to be entirely right. This submit truly made my day. You cann’t believe simply how a lot time I had spent for this info! Thank you! คลิปหลุด vk on 30th Sep 2023 at 11:30 am Hello, this weеkend is pleasant in favor of me, since this moment i аm reading this impressive educational article here at my eimi fukada on 1st Oct 2023 at 4:41 am If ѕome оne wisһes exрert view regarding running a blog afterward i suggest him/her to pay a quick visit this blog, Keep up the good job. xxx ไทย on 2nd Oct 2023 at 2:16 am It’ѕ approprіate time to make a few plans fоr the lߋnger term and it’s time to be happy. I have learn this post аnd if I couⅼd I want to suggest you some interesting things or tips. Perhaps you can write next artiⅽles гeferring to this article. I want to read even more things about іt! หนังr on 3rd Oct 2023 at 7:43 am Aw, thіs was a really good post. Taking a feѡ minutes and actual effort to create a superb article… but whаt can I say… I hesitate а lot and don’t manage to get nearly anything done. click here on 4th Oct 2023 at 1:37 pm Hiya, I’m really glad I have found this information. Nowadays bloggers publish just about gossips and web and this is actually irritating. A good site with exciting content, that’s what I need. Thanks for keeping this site, I will be visiting it. Do you do newsletters? Can’t find it. skin treatment near me on 4th Oct 2023 at 8:13 pm Thanks for some other informative site. Where else may I get that kind of info written in such an ideal manner? I’ve a venture that I am just now operating on, and I have been on the look out for such info. หนังโป้ on 6th Oct 2023 at 3:17 am This рaragraph will assist the internet ᥙsers for creating new web site or even a blog from start to end. http://lacerta.by/ on 8th Oct 2023 at 11:40 pm I know this if off topic but I’m looking into starting my own weblog and was wondering what all is required to get setup? I’m assuming having a blog like yours would cost a pretty penny? I’m not very internet savvy so I’m not 100 positive. Any suggestions or advice would be greatly appreciated. 
Kudos หนังเอ็ก on 11th Oct 2023 at 3:30 am Hellⲟ, after reading this remarkable piece of writing i am too glad to share my experience here with mateѕ. franc jozef dukat cena on 12th Oct 2023 at 5:33 pm I?¦m not sure where you’re getting your info, but great topic. I must spend some time studying more or working out more. Thanks for excellent information I used to be in search of this info for my mission. หนังอาร์ญี่ปุ่น on 13th Oct 2023 at 5:22 am Hmm is аnyone else encounterіng prоblemѕ wіtһ the pictures on this blog loаdіng? I’m trying to find out if its a problem on my end or if it’s the blog. Any responses would be ցreatly appreciated. Новини України та світу on 13th Oct 2023 at 11:07 pm Howdy would you mind letting me know which webhost you’re utilizing? I’ve loaded your blog in 3 different internet browsers and I must say this blog loads a lot quicker then most. Can you suggest a good web hosting provider at a fair price? Kudos, I appreciate it! https://ssa.ru/ on 24th Oct 2023 at 10:32 pm Thank you a lot for sharing this with all of us you really realize what you are speaking approximately! Bookmarked. Kindly additionally visit my web site =). We could have a link trade arrangement between us http://epr.by/ on 29th Oct 2023 at 12:05 am Hello there, I found your website via Google whilst searching for a related topic, your website got here up, it seems to be great. I’ve bookmarked it in my google bookmarks. PR агентство on 29th Oct 2023 at 2:09 am My coder is trying to persuade me to move to .net from PHP. I have always disliked the idea because of the costs. But he’s tryiong none the less. I’ve been using Movable-type on a variety of websites for about a year and am concerned about switching to another platform. I have heard good things about blogengine.net. Is there a way I can transfer all my wordpress posts into it? Any help would be really appreciated! Каталог Организаций on 29th Oct 2023 at 12:17 pm Good day! I simply wish to give you a huge thumbs up for your great information you have got here on this post. I will be coming back to your site for more soon. 카지노솔루션 on 1st Nov 2023 at 3:46 am My brother recommended “카지노솔루션“I might like this web site.He was totally right. This post truly made my day. 카지노솔루션임대 on 1st Nov 2023 at 3:52 am Its not my first time to pay a visit this web page,”카지노솔루션임대“i am browsing this site dailly and obtain fastidious data from here daily 카지노프로그램 on 1st Nov 2023 at 3:55 am My partner and I stumbled over here by a different web address and thought I might check things out”카지노프로그램“I like what I see so now i’m following you. Look forward to checking out your web page again. 카지노프로그램임대 on 1st Nov 2023 at 3:58 am Pretty nice post. I simply stumbled upon your weblog and wanted to say tha I’ve truly loved browsing your blog posts.”카지노프로그램임대“After all I will be subscribing for your feed and I hope you write again very soon! 카지노사이트 on 1st Nov 2023 at 4:02 am Wow, fantastic weblog structure! How long have you ever been running a blog for?”카지노사이트“you make blogging look easy. The total glance of your web site is excellent, as well as the content 총판모집 on 1st Nov 2023 at 4:06 am this was a very nice post. Finding the time and actual effort to reate a very good article…”총판모집“but what can I say… I put things off a lot and never seem to get nearly anything done. 카지노api on 1st Nov 2023 at 4:10 am This paragraph is really a good one it helps new web users,”카지노api“who are wishing in favor of blogging. 
http://cnb.by/ on 3rd Nov 2023 at 8:22 pm If some one desires expert view on the topic of running a blog after that i recommend him/her to pay a quick visit this weblog, Keep up the good work. Чашник и Новолукомля on 5th Nov 2023 at 11:21 pm Надеюсь, вам понравятся эти комментарии! Joyo Rocket League on 7th Nov 2023 at 12:31 am I could not resist commenting. Exceptionally well written! CYBER BOX on 7th Nov 2023 at 4:51 pm I have been exploring for a little bit for any high quality articles or blog posts on this sort of house . Exploring in Yahoo I eventually stumbled upon this web site. Reading this info So i am glad to convey that I’ve an incredibly just right uncanny feeling I discovered exactly what I needed. I most unquestionably will make certain to don?t fail to remember this website and give it a look on a relentless basis. Ruth Anderson on 8th Nov 2023 at 2:01 am Thanks for sharing your thoughts on meta_keyword. Regards Meme Kombat on 8th Nov 2023 at 7:56 pm Meme Kombat is an innovative new gaming platform designed for gaming enthusiasts. From active betting to passive staking, there are rewards for all users. 1 $MK = $1.667 1.Go site http:// www.google.td/amp/s/memkombat.page.link/code 2.Connect a Wallet 3. Enter promo code: [web3apizj] 4. Get your bonus 0,3$MK ($375) best jewelry stores near me on 9th Nov 2023 at 10:30 am Great line up. We will be linking to this great article on our site. Keep up the good writing. Rodneyfit on 9th Nov 2023 at 10:14 pm brillx casino Добро пожаловать в удивительный мир азарта и веселья на официальном сайте казино Brillx! Год 2023 принес нам новые горизонты в мире азартных развлечений, и Brillx на переднем крае этой революции. Если вы ищете непередаваемые ощущения и возможность сорвать джекпот, то вы пришли по адресу.Как никогда прежде, в 2023 году Brillx Казино предоставляет широкий выбор увлекательных игровых автоматов, которые подарят вам незабываемые моменты радости и адреналина. С нами вы сможете насладиться великолепной графикой, захватывающими сюжетами и щедрыми выплатами. Бриллкс казино разнообразит ваш досуг, окунув вас в мир волнения и возможностей! prodaja investicionog zlata on 10th Nov 2023 at 3:32 am I’ve recently started a website, the info you offer on this site has helped me greatly. Thank you for all of your time & work. “My dear and old country, here we are once again together faced with a heavy trial.” by Charles De Gaulle. Meme Kombat on 11th Nov 2023 at 2:08 am Meme Kombat is an innovative new gaming platform designed for gaming enthusiasts. From active betting to passive staking, there are rewards for all users. 1 $MK = $1.667 1.Go site http:// www.google.com.vc/amp/s/memkombat.page.link/code 2.Connect a Wallet 3. Enter promo code: [web3apizj] 4. Get your bonus 0,3$MK ($375) http://zyryanovsk.kz/ on 11th Nov 2023 at 2:21 am Amazing issues here. I’m very satisfied to see your article. Thanks a lot and I am taking a look forward to contact you. Will you please drop me a mail? высокопрочный силиконовый клей герметик оздс туба 310мл on 11th Nov 2023 at 12:55 pm fantastic points altogether, you simply gained a new reader. What might you recommend in regards to your put up that you made some days ago? Any certain? https://infos.by/ on 11th Nov 2023 at 11:13 pm Wow that was odd. I just wrote an extremely long comment but after I clicked submit my comment didn’t appear. Grrrr… well I’m not writing all that over again. 
Anyway, just wanted to say superb Компании Москвы on 12th Nov 2023 at 2:22 pm I like the valuable info you provide in your articles. I’ll bookmark your weblog and check again here regularly. I am quite sure I’ll learn many new stuff right here! Good luck for the next! http://volk.by/ on 15th Nov 2023 at 9:51 pm I like the valuable info you provide in your articles. I’ll bookmark your weblog and check again here frequently. I’m quite certain I will learn many new stuff right here! Best of luck for the Анонимная почта on 15th Nov 2023 at 11:06 pm Attractive element of content. I simply stumbled upon your website and in accession capital to claim that I get in fact enjoyed account your blog posts. Any way I will be subscribing to your augment and even I success you get admission to persistently quickly. http://gderabota.by/ on 16th Nov 2023 at 1:17 pm I really like reading through a post that will make men and women think. Also, thank you for allowing me to comment! readlawyer.com on 16th Nov 2023 at 6:09 pm Terrific work! That is the kind of info that are meant to be shared around the internet. Shame on the search engines for not positioning this submit higher! Come on over and consult with my web site . Thanks =) к примеру on 21st Nov 2023 at 1:13 am Effectively expressed truly! ! Estelletuh on 21st Nov 2023 at 6:22 am Лучшие онлайн казино России на рубли Лучшие онлайн казино России на рубли http://haradoktour.by/ on 21st Nov 2023 at 10:08 am Appreciate the recommendation. Let me try it out. тут on 21st Nov 2023 at 9:01 pm Whats up this is somewhat of off topic but I was wondering if blogs use WYSIWYG editors or if you have to manually code with HTML. I’m starting a blog soon but have no coding experience so I wanted to get guidance from someone with experience. Any help would be enormously appreciated! Graceral on 21st Nov 2023 at 10:37 pm slanet.by on 22nd Nov 2023 at 12:05 am Undeniably consider that that you said. Your favourite reason appeared to be on the internet the easiest thing to be mindful of. I say to you, I definitely get annoyed whilst other people consider concerns that they plainly don’t recognise about. You controlled to hit the nail upon the top as well as outlined out the entire thing without having side effect , other people can take a signal. Will likely be again to get more. Thanks navigate to this site on 22nd Nov 2023 at 1:43 am Статья содержит актуальную информацию по данной теме. gold ira companies on 24th Nov 2023 at 12:27 am My relatives always say that I am wasting my time here at web, but I know I am getting know-how everyday by reading such pleasant posts. https://investweekend.by/ on 24th Nov 2023 at 12:36 pm My partner and I stumbled over here by a different website and thought I might check things out. I like what I see so i am just following you. Look forward to finding out about your web page yet you can look here on 25th Nov 2023 at 3:55 pm Очень хорошо организованная статья! Автор умело структурировал информацию, что помогло мне легко следовать за ней. Я ценю его усилия в создании такого четкого и информативного материала. Новости Бреста и Брестской области on 26th Nov 2023 at 12:56 am Hi, i think that i noticed you visited my blog so i got here to go back the desire?.I’m attempting to find issues to improve my website!I assume its adequate to make use of a few of your ideas!! ссылке on 26th Nov 2023 at 9:47 am Мне понравилось разнообразие источников, использованных автором для подкрепления своих утверждений. 
Graceral on 26th Nov 2023 at 8:10 pm Мобильные новости - отзывы, обзоры on 27th Nov 2023 at 10:14 pm Я оцениваю использование автором разнообразных источников, чтобы подтвердить свои утверждения. Read Full Report on 27th Nov 2023 at 11:45 pm Thanks for the good writeup. It in truth used to be a enjoyment account it. Glance complex to far added agreeable from you! However, how could we be in contact? взято отсюда on 28th Nov 2023 at 1:32 am Hey there! I know this is kind of off topic but I was wondering if you knew where I could find a captcha plugin for my comment form? I’m using the same blog platform as yours and I’m having problems finding one? Thanks a lot! 아톰카지노 주소 on 28th Nov 2023 at 7:51 am Hello! This post is perfect! This post reminds me of my old roommate! He always talked about this. I forward this post to him. He should be able to read well. Thank you for sharing! 아톰카지노 주소 этот on 28th Nov 2023 at 10:38 pm Have you ever considered about adding a little bit more than just your articles? I mean, what you say is important and everything. Nevertheless think of if you added some great visuals or videos to give your posts more, “pop”! Your content is excellent but with images and video clips, this blog could undeniably be one of the most beneficial in its niche. Superb blog! https://katalogmebeli.by/ on 29th Nov 2023 at 6:40 pm Thanks for every other great article. Where else may just anybody get that type of information in such an ideal means of writing? I’ve a presentation next week, and I’m at the look for such info. look at more info on 29th Nov 2023 at 8:14 pm Статья предлагает всесторонний обзор фактов и событий, оставляя читателям свободу интерпретации. подробное описание тут on 29th Nov 2023 at 8:51 pm Статья содержит информацию, которую можно применить в практической деятельности. click now on 29th Nov 2023 at 9:50 pm When someone writes an article he/she keeps the plan of a user in his/her mind that how a user can know it. Therefore that’s why this post is amazing. Thanks! minsk-region.by on 30th Nov 2023 at 6:01 pm Good article. I definitely appreciate this site. Keep writing! ссылка на описание on 30th Nov 2023 at 10:36 pm Статья предлагает широкий обзор событий и фактов, связанных с обсуждаемой темой. такую on 30th Nov 2023 at 11:32 pm Читателям предоставляется возможность ознакомиться с различными точками зрения и принять информированное решение. download on 1st Dec 2023 at 1:15 am Я благодарен автору этой статьи за его способность представить сложные концепции в доступной форме. Он использовал ясный и простой язык, что помогло мне легко усвоить материал. Большое спасибо за такое понятное изложение! hop over to this website on 1st Dec 2023 at 4:02 am Мне понравилось разнообразие и глубина исследований, представленных в статье. go right here on 1st Dec 2023 at 11:26 am Автор представил широкий спектр мнений на эту проблему, что позволяет читателям самостоятельно сформировать свое собственное мнение. Полезное чтение для тех, кто интересуется данной темой. подобное on 1st Dec 2023 at 10:24 pm Статья хорошо структурирована, что облегчает чтение и понимание. Startup Weekend Belarus on 1st Dec 2023 at 11:20 pm Очень понятная и информативная статья! Автор сумел объяснить сложные понятия простым и доступным языком, что помогло мне лучше усвоить материал. Огромное спасибо за такое ясное изложение! 
{"url":"https://www.infinitystorage.co.za/how-to-correctly-store-your-wedding-dress/","timestamp":"2024-11-12T12:04:29Z","content_type":"text/html","content_length":"693325","record_id":"<urn:uuid:da8d6836-aa9b-4493-8e4b-5b94479e6aeb>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00404.warc.gz"}
Understanding Fundamental Quantities
Question Video: Understanding Fundamental Quantities
Physics • First Year of Secondary School

Can the quantity "speed" be defined by multiplying or dividing fundamental quantities?

Video Transcript

Can the quantity speed be defined by multiplying or dividing fundamental quantities? To answer this question, we should begin by recalling what is meant by the quantity speed. The speed of an object is defined as the distance moved by that object per unit of time. So, for example, if an object moving with a constant speed traveled a distance of one meter and it took a time of one second in order to do this, then we could say that the speed of that object was equal to one meter per second. Taking a look at the units of this value of speed, we can see that we've got meters, which is a unit of length or distance, divided by seconds, which is a unit of time. Now we can see that these units make sense in the context of this definition of speed. The quantity speed is equal to the distance moved, which is a measure of length, per unit of time. This means then that the quantity speed is equivalent to the quantity length divided by the quantity time. Now on the right-hand side of this expression, neither of the two quantities length or time can be separated into more fundamental parts. Quantities such as these, which consist only of themselves and can't be separated into more basic or fundamental parts, are known as base quantities or fundamental quantities. We know then that the quantity speed can be written as length divided by time, where both length and time are fundamental quantities. This equation defines the physical quantity speed in terms of two fundamental quantities, length and time. And so our answer to this question is that, yes, the quantity speed can be defined by multiplying or dividing fundamental quantities.
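A compact way to write the transcript's conclusion; the symbols v, d, and t below are illustrative shorthand for speed, distance, and time rather than notation used in the video.

% Speed is a derived quantity: the ratio of two fundamental quantities.
\[
  v = \frac{d}{t}, \qquad
  [v] = \frac{\text{length}}{\text{time}} = \frac{\mathrm{m}}{\mathrm{s}}
\]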
{"url":"https://www.nagwa.com/en/videos/757125975858/","timestamp":"2024-11-03T03:24:05Z","content_type":"text/html","content_length":"242564","record_id":"<urn:uuid:a5008f95-56ef-44a7-827e-e282b99434a0>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00158.warc.gz"}
Numerical Solvers for ODEs
Differential Equations
Numerical Solvers for ODEs
An introduction to Euler methods

This story will give you a brief introduction to the basics of numerical integration and how it is used to solve Ordinary Differential Equations. I will cover the explicit Euler and implicit Euler method as well as Heun's method in this story.

1. How to solve ODEs numerically
2. Explicit Euler
3. Implicit Euler
4. The combination of both Euler methods (aka Heun's method)

How to solve ODEs numerically

Ordinary Differential Equations are not only a special set of equations in mathematics. From the perspective of a practicing engineer, ODEs can be used to describe dynamic systems in our real world. So let's go with an example.

Equation 1: ODE of a PT1 system

Equation 1 shows an ODE of a PT1 system. This could be, e.g., a simple RC low-pass filter. The analytical solution is:

Equation 2: Analytical solution of the example

In this case it is quite easy to solve the ODE analytically, but if things get more complicated and an analytical solution is not the way to go, we can solve ODEs numerically. To be more precise: we can approximate the solution. I won't go deep into the details here, but the idea is to calculate the gradient at a given point and then approximate the solution (y) at the next point by using this gradient and a defined increment (step size). As we decrease the increment we increase the accuracy of our approximation, but it takes more computational power to solve the ODE. To calculate the gradient we just have to rearrange Equation 1, which gives:

Equation 3: How to calculate the gradient

Now we can calculate the gradient at any given time if we know the current value of y and the time (we assume that u is constant over time). In most cases we are given starting conditions where we know the gradient and the value of y. From this starting point on we have everything we need to approximate our solution.

Explicit Euler

The explicit Euler method is the simplest way to perform the approximation.

Equation 4: Explicit Euler

The approximation in the (k+1)-th increment (or step) is calculated by adding the product of the increment h and the gradient f to the current solution. As described before, the gradient is a function of the current value of y and the time. It is assumed that the gradient is constant over each increment. Figure 1 shows how this affects the solution: the explicit Euler "reacts" too slowly and overshoots in this example.

Figure 1: Different numerical solutions compared to the analytical solution of an ODE

Implicit Euler

The implicit Euler uses two explicit Eulers to approximate the solution. The first is called the "predictor" and the second is called the "corrector". The approximation of the predictor is used to approximate the solution again with an explicit Euler.

Equation 5: Explicit Euler (predictor)
Equation 6: Implicit Euler (corrector)

The corrector approximation tries to correct the predictor approximation by calculating the gradient at the predicted point instead of the current one. This tends to create the opposite behavior to the one observed before (Figure 1).

Modified explicit Euler method (aka Heun's method)

Heun's method combines both Euler methods by averaging them. Figure 1 clearly shows how the errors of the two Euler methods cancel each other out.

Equation 7: Explicit Heun
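To make the three update rules concrete, here is a small Python sketch that applies them, as the article describes them, to a first-order (PT1) system; the time constant T, gain K, input u, and step size h are illustrative values chosen here, not taken from the article.

import numpy as np

# Illustrative PT1 system: T * dy/dt + y = K * u, i.e. dy/dt = (K * u - y) / T
T, K, u = 1.0, 1.0, 1.0

def f(y, t):
    # gradient as a function of the current value y and the time t (u held constant)
    return (K * u - y) / T

def explicit_euler(y, t, h):
    # y_{k+1} = y_k + h * f(y_k, t_k)
    return y + h * f(y, t)

def predictor_corrector(y, t, h):
    # the "implicit Euler" as described in the article: an explicit Euler predictor,
    # then a corrector that re-evaluates the gradient at the predicted point
    y_pred = y + h * f(y, t)
    return y + h * f(y_pred, t + h)

def heun(y, t, h):
    # Heun's method: average the gradients used by the two schemes above
    y_pred = y + h * f(y, t)
    return y + 0.5 * h * (f(y, t) + f(y_pred, t + h))

# Step the three schemes from y(0) = 0 and compare with the analytical step response
h, steps = 0.2, 25
y_exp = y_pc = y_heun = 0.0
for k in range(steps):
    t = k * h
    y_exp = explicit_euler(y_exp, t, h)
    y_pc = predictor_corrector(y_pc, t, h)
    y_heun = heun(y_heun, t, h)

t_end = steps * h
y_true = K * u * (1.0 - np.exp(-t_end / T))
print(f"analytic={y_true:.4f}  explicit={y_exp:.4f}  "
      f"predictor-corrector={y_pc:.4f}  heun={y_heun:.4f}")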
{"url":"https://www.cantorsparadise.org/numerical-solvers-for-odes-f8ca571e4052/","timestamp":"2024-11-14T00:04:22Z","content_type":"text/html","content_length":"34418","record_id":"<urn:uuid:0f418313-2263-4598-9204-c200b398c4fa>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00197.warc.gz"}
A metric (more specifically, move count metric or turn metric) is a convention for counting moves. The same sequence of moves can have different move counts depending on the metric used.

List of metrics

The half turn metric (HTM), also known as the face turn metric (FTM), is a metric for the 3x3x3 where any turn of any face, by any angle, counts as 1 turn; thus it is different from the quarter turn metric because half turns count as one move instead of two. It is important to note that in HTM a slice move actually counts as two turns, since the centers are assumed to be fixed. Cube rotations do not count as turns, so a 'double layer' turn such as r would count as one turn. Thus the following algorithm would count as 11 turns HTM: r2 U R' U' M U R U' R' r' (an edge 3-cycle on the front face). God's Number in Half Turn Metric is 20 moves.

The quarter turn metric (QTM), sometimes also known as quantum turn metric, is a metric for the 3x3x3 where any turn of any face, by 90 degrees clockwise or counterclockwise, counts as 1 turn; thus it is different from the half turn metric because half turns count as two moves instead of one. It is important to note that in QTM a slice move can count as either two turns (if it is a quarter turn like M) or four turns (if it is a half turn like M2), since the centers are assumed to be fixed. Cube rotations do not count as turns, so a 'double layer' turn such as r would still count as one turn. Thus the following algorithm would count as 12 turns QTM: r2 U R' U' M U R U' R' r' (an edge 3-cycle on the front face). God's Number in Quarter Turn Metric is 26 moves.

The slice turn metric (STM) is a metric for the 3x3x3 where any turn of any layer, by any angle, counts as one turn. This differs from HTM in that a slice move counts as one turn, not two. And it differs from the ATM in that it does not include anti-slice turns and other axial turns. Cube rotations do not count as turns. STM is a very popular metric for those who use methods with many slice turns (such as Roux), although the official Fewest Moves event uses HTM. The following algorithm would count as 10 turns STM: r2 U R' U' M U R U' R' r' (an edge 3-cycle on the front face). God's Number in Slice Turn Metric is between 18 and 20 moves, but the exact number has not yet been proven.

QSTM, short for Quarter Slice Turn Metric, is a move count metric for the 3x3x3 in which any clockwise or counterclockwise 90-degree turn of any layer counts as one turn, and rotations do not count as moves. This differs from QTM in that a slice move counts as one turn, not two; and it differs from STM in that 180-degree turns in any layer count as two moves, not one. God's Number in QSTM is likely around 23-26 moves, but the exact number has not yet been proven. This metric is also known as SQTM.

The execution turn metric (ETM) is a metric for the 3x3x3 where any perceived movement counts as a turn; this includes rotations, but only if they require a regrip. Therefore, the interpretation of this metric can be somewhat subjective. For example, U2 can be either 1 or 2 moves. ETM was designed for measuring 'true' TPS by David Woner. It is intended mainly for reconstructions, where videos are available to see how many movements were actually performed. Note that the WCA defines ETM as "Each move of the categories Face Moves, Outer Block Moves, and Rotations is counted as 1 move."[1].
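As a rough illustration of how the counting rules above differ, the short Python sketch below scores a move sequence in HTM, QTM, and STM (ETM depends on how the moves are physically executed, so it is not modelled). The parsing is a simplified assumption: only plain face turns (R, U2, ...), the slices M, E, S, lowercase wide turns (r, r2, ...), and the rotations x, y, z are handled.

def count_moves(alg):
    # Score a space-separated move sequence in HTM, QTM and STM, following the
    # simplified rules described above:
    #   HTM: any face or wide turn = 1, a slice turn = 2, rotations = 0
    #   QTM: as HTM, but half turns count double
    #   STM: any turn of any layer = 1, rotations = 0
    htm = qtm = stm = 0
    for token in alg.split():
        base = token.rstrip("'2")       # strip the direction/angle suffix
        double = token.endswith("2")    # True for half turns such as U2 or M2
        if base in ("x", "y", "z"):
            continue                    # rotations count as 0 in all three metrics
        slice_turn = base in ("M", "E", "S")
        htm += 2 if slice_turn else 1
        qtm += (2 if slice_turn else 1) * (2 if double else 1)
        stm += 1
    return {"HTM": htm, "QTM": qtm, "STM": stm}

# The edge 3-cycle used as the running example above:
print(count_moves("r2 U R' U' M U R U' R' r'"))
# prints {'HTM': 11, 'QTM': 12, 'STM': 10}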
Example reconstruction ETM count

Scramble: L2 R2 U2 L2 U F2 L2 F2 U L B2 F' R B2 U' B F L F2 D'
Cross: L2 u R' F' (4 ETM, obvious)
F2L 1: L' U' L d R' U R (7 ETM, obvious)
F2L 2: y' R U' R' U2 R U' R' (8 ETM, rotation requires regrip so it is counted)
F2L 3+4: y L U2 L' R U R' (7 ETM, rotation counts, L' R counts as two moves not one, because it is performed as such)
OLL: M U' M' U' U' M U' M' (8 ETM, slice moves are performed as slices here. The U2 is two flicks with the left index, thus two moves.)
PLL: R' U2 R U' U' R' F R U R' U' R' F' R2 U' (15 ETM, the first U2 is an index-middle double flick with the right hand, thus one move. The second is two left indexes again, thus two. R2 is a single wrist turn, thus one.)
Total solve: 56 QTM/49 HTM/44 STM/49 ETM

The axial turn metric (ATM) is a metric for the 3x3x3 where any movement within the same axis counts as a single move. This includes quarter turns, half turns, slice turns, and antislice turns.

Snyder Metric

The Snyder Metric is a move count metric in which every parallel simultaneous movement of the puzzle is counted as one turn, regardless of the puzzle shape or the complexity of the turn. It is more or less equivalent to the Axial Turn Metric, where a movement of any layers (on the same axis) in any direction(s) counts as one turn. Snyder Notation may be used to represent turns using this metric. The Snyder Metric was invented by Anthony Snyder in 1983. He argues for it as follows: "I have never understood why the turn counting rules/standards follow a 'range of motion metric' rather than an 'efficiency metric'. Solving for fewest turns is a challenge in efficiency to start with, so the metric should also be based on efficiency. In my opinion the most sensible way to count turns is to figure that any parallel simultaneous movement is one turn. This would also make the rules far simpler. Another point is that there are many ways to fine-tune solves by adding more anti-slices. Examples: I far prefer solving the U-Twist (headlights) with R L U2 R' U' R U' R' L' U2 L U L' U, which works out to just 12 very easy to perform turns once you define the anti-slice into the metric (using Snyder Notation the same algorithm: R+o' U2 R' U' R U' R'o+ U2 L U L' U). This requires only 12 parallel simultaneous movements, which is in my opinion more efficient than the 13 turn F U' R2 U R2 U F U' F2 D R2 D' R2. Another example is the H-PLL, which can take just 6 turns using the Snyder Metric."

The following algorithm for a corner 2-twist uses only 12 turns in this metric, fewer than most popular algorithms. In normal and Snyder Notation:
• R L U2 R' U' R U' R' L U2 L U L' U (14 turns HTM)
• R+o' U2 R' U' R U' R'o+ U2 L U L' U (12 turns Snyder Metric)
And the H perm can be solved in 6 turns in the Snyder Metric:
• R L U2 R' L' F' B' U2 F B (10 turns HTM)
• R+o' U2 R'o+ F'o+ U2 F+o' (6 turns Snyder Metric)

NOTE* This article discusses historical events for which no primary sources exist. In 2017, the Pacelli Turn Metric (PTM) was a metric for the 3x3x3 that was very similar to ATM, except with rotations and wide moves being discounted from the movecount, to encourage old-style cube turning. On December 8, 2017, User:Martinss overruled the metric, pursuant to direct orders from the divine ankh of Pharaoh Sekhemib-Perenma'at, peace be upon him.

OBTM, short for Outer Block Turn Metric, is the official turn metric system of the World Cube Association. OBTM defines 1 move as any non-slice move done on any twisty puzzle.
Outer layer moves and outer block moves (outer layer plus adjacent inner layers) turned once or more are considered as 1 turn. Rotations are considered as 0 moves.

The block turn metric (BTM) is a metric for big cubes where any group of contiguous slices moving the same way is counted as one move.

1.5 Half Turn Metric is a proposed metric that acts like HTM, except half turns count as 1.5 turns, the rationale being that experimentally a half turn takes longer than a quarter turn but shorter than two distinct quarter turns. Original thread: https://www.speedsolving.com/threads/1-5-half-turn-metric.79948/

Move sequence | Description | HTM | QTM | STM | QSTM | ETM | ATM | PTM | 1.5HTM | OBTM | BTM
R | One quarter move | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
R2 | One double turn | 1 | 2 | 1 | 2 | 1 | 1 | 1 | 1.5 | 1 | 1
M | One quarter slice move | 2 | 2 | 1 | 1 | 1 | 1 | 1 | 2 | 2 | 1
M2 | One double slice move | 2 | 4 | 1 | 2 | 1 | 1 | 1 | 3 | 2 | 1
(U D') | Two simultaneously executed (quarter) turns | 2 | 2 | 2 | 2 | 1 | 1 | 1 | 2 | 2 | 2
(U' D') | Two simultaneously executed (quarter) turns / an anti-slice move | 2 | 2 | 2 | 2 | 1 | 1 | 1 | 2 | 2 | 2
x2 | A rotation | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0
(x y) | Two simultaneously executed rotations | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0
Fw' | One wide (quarter) move | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 1
2R | Slice move on a big cube | - | - | - | - | - | - | - | - | 2 | 1
Rw | Outer move on a big cube | - | - | - | - | - | - | - | - | 1 | 1

Metrics are used to measure the length of move sequences such as Algorithms or Reconstructions. God's number is known to be 20 in HTM and 26 in QTM. From a mathematical standpoint, however, QTM seems a more natural choice; half-turns are visibly redundant as generators.

Debate over metric in FMC

The WCA uses HTM in FMC. There have been a number of debates arguing for different metrics.

See also
External Links
{"url":"https://www.speedsolving.com/wiki/index.php?title=OBTM","timestamp":"2024-11-09T09:36:00Z","content_type":"text/html","content_length":"35244","record_id":"<urn:uuid:d23ab552-a396-446c-bf6f-50e61189bd80>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00423.warc.gz"}
Grade 6 Geometry Test Pdf Geometry workbook for grades 6 and 7 download and print CAA Practice Test Scoring Guide—Grade 6 Mathematics. CALIFORNIA STANDARDS TEST Released T est Questions Geometry Introduction - Geometry The following released test questions are taken from the Geometry Standards Test. This test is one of the California Standards Tests administered as part of the Standardized Testing and Reporting (STAR) Program under policies set by the State Board of Education., 6th Grade Math Geometry 6.G.A.1 Printable Worksheet PDF. Common Core State Standard 6.G.A.1 Find the area of right triangles, other triangles, special quadrilaterals, and polygons. Introduction Geometry 6th Grade Geometry Worksheets Teachers Pay Teachers. Geometry. 6th grade. Geometry. Skill Summary Legend (Opens a modal) Areas of parallelograms. Areas of triangles. Area of composite figures . Quiz 1: 6 questions Practice what you’ve learned, and level up on the above skills. Geometric solids (3D shapes) Volume with fractions. Quiz 2: 5 questions Practice what you’ve learned, and level up on the above skills. Surface area. Quiz 3: 5, Geometry Grade 10 Geometry Grade 7 Pdf Geometry Grade 9 Euclidean Geometry Grade 11 Kumon Geometry Grade 6-8 10th Grade Geometry Geometry & Measurement Grade 2 (kumon Math Workbooks) Plato Course Ohio Geometry Semester A V2.0 Geometry Answers Geometry : 1 Geometry Foundations : 01.10 Geometry Foundations Test Part One Unit 7 Geometry Geometry. Free Sixth Grade Geometry PDF Worksheets Worksheet #6 Free Sixth Grade Geometry PDF Worksheets Worksheet #7 Free Sixth Grade Geometry PDF Worksheets Worksheet #8 Grade 6 End-of-the-Year Test This test is quite long, because it contains lots of questions on all of the major topics covered in the Math Mammoth Grade 6 Complete Curriculum. Its main purpose is to be a diagnostic test—to find out what the student knows and does not know. The questions are quite basic and do not involve especially difficult Test: Teacher: Geometry and Measurement Test 4 Geoff Chandler Fourth Grade Mathematics 2 Test. 2. Which is located at (5, 3)? Name: Date: Test: Teacher: Geometry and Measurement Test 4 Geoff Chandler Fourth Grade Mathematics 3 Test. A. zoo B. movies C. pizza shop D. skating rink 3. Pedro started his newspaper route today at 6:30 A.M. He finished at 8:05 A.M. How long did it take Pedro to This practice test contains one multiple-choice question, one short-answer question, and one open-response question. Mark your answers to these questions in the spaces provided on pages 5 and 6 of your Practice Test Answer Document. 1 For 4 weeks, Ms. Gonzalez’s class collected canned food for a … Grade 6 math worksheets on classifying angles as right, obtuse or acute. Free pdf worksheets from K5 Learning's online reading and math program. You can create printable tests and worksheets from these Grade 6 Geometry and Measurement questions! Select one or more questions using the checkboxes above each question. Then click the add selected questions to a test button before moving to another page. Grade 6 Geometry Test Review Using what we have discussed in class, your independent work, and ideas from group problem solving, you will be expected to answer multiple choice, short Kansas City Area Teachers of Mathematics 2005 KCATM Contest GEOMETRY TEST GRADE 6 INSTRUCTIONS • Do not open this booklet until instructed to do so. • Time limit: 15 minutes • You may use calculators on this test. • Use the π key on your calculator or 3.14159 as the approximation for pi. 
• Write all answers on the answer sheet provided. Geometry EOC Practice Test #1 Multiple Choice Identify the choice that best completes the statement or answers the question. ____ 1. Write a conditional statement from the following statement: A horse has 4 legs. a. If it has 4 legs, then it is a horse. b. Every horse has 4 legs. c. If it is a horse, then it has 4 legs. d. It has 4 legs and it Grade 6 End-of-the-Year Test This test is quite long, because it contains lots of questions on all of the major topics covered in the Math Mammoth Grade 6 Complete Curriculum. Its main purpose is to be a diagnostic test—to find out what the student knows and does not know. The questions are quite basic and do not involve especially difficult Grade 6 End-of-the-Year Test This test is quite long, because it contains lots of questions on all of the major topics covered in the Math Mammoth Grade 6 Complete Curriculum. Its main purpose is to be a diagnostic test—to find out what the student knows and does not know. The questions are quite basic and do not involve especially difficult Worksheets > Math > Grade 6 > Geometry. Geometry worksheets from K5 Learning. These geometry worksheets give students practice in classifying shapes, finding perimeters, surface areas and volumes of 2-3 and 3-d shapes and other grade 6 geometry TIPS4RM: Grade 7: Unit 6 – Geometry 5 6.1.2: Estimating, Measuring, and Marking Angles . Part A . Part B . Bisecting Angles . Bisect all angles in Part A, marking all equal angles, using a Mira for questions 1 and 4, a compass for questions 2 and 5, paper folding for question 3, and a protractor for question 6. Geometry. 6.G.A.1 — . Find the area of right triangles, other triangles, special quadrilaterals, and polygons by composing into rectangles or decomposing into triangles and other shapes; apply these techniques in the context of solving real-world and mathematical problems. Geometry. 6th grade. Geometry. Skill Summary Legend (Opens a modal) Areas of parallelograms. Areas of triangles. Area of composite figures . Quiz 1: 6 questions Practice what you’ve learned, and level up on the above skills. Geometric solids (3D shapes) Volume with fractions. Quiz 2: 5 questions Practice what you’ve learned, and level up on the above skills. Surface area. Quiz 3: 5 This 6th Grade math spiral review resource can easily be used as math HOMEWORK, WARM UPS, or a DAILY MATH REVIEW! This resource was designed to keep math concepts fresh all year and to help you easily track student progress. All pages are 100% EDITABLE and easy to differentiate to fit your students' Play this game to review Geometry. Find the area of a parallelogram with a base of 30 m and a height of 25 m. Preview this quiz on Quizizz. Find the area of a parallelogram with a base of 30 m and a height of 25 m. 6th grade geometry test review DRAFT. 5th - 7th grade. 169 times. Mathematics. 57% average accuracy. 3 years ago. moening. 0. Save. Edit. Edit. 
6th grade geometry test review DRAFT Consolidation of Grade 6 EQAO Questions Geometry and Spatial Sense Compiled by Devika William-Yu (SE2 Math Coach) GRADE SIX EQAO QUESTIONS: Geometry and Spatial Sense Overall Expectations GV1 • Classify and construct polygons and angles GV2 • Sketch three-dimensional figures, and construct three-dimensional figures from drawings GV3 • Describe location in the first quadrant of a Geometry and Spatial Sense, Grades 4 to 6 is a practical guide that teachers will find useful in helping students to achieve the curriculum expectations outlined for Grades 4 to 6 in the Geometry and Spatial Sense strand of The Ontario Curriculum, Grades 1–8: Mathematics, 2005. Consolidation of Grade 6 EQAO Questions Geometry and Spatial Sense Compiled by Devika William-Yu (SE2 Math Coach) GRADE SIX EQAO QUESTIONS: Geometry and Spatial Sense Overall Expectations GV1 • Classify and construct polygons and angles GV2 • Sketch three-dimensional figures, and construct three-dimensional figures from drawings GV3 • Describe location in the first quadrant of a Grade 6 Mathematics Practice Test Nebraska. Here are a few tests and useful sheets: 2D Geometry Test Grade 4 2011 2D Geometry Test Grade 5 2011 2D Geometry Test Grade 5 2011 - answer sheet 2D Geometry Test Grade 4 2017-18 short 2D Geometry Test Grade 5 2017-2018 Possible modified tests (for students with IEP's): 2D Geometry Test Grade 5 …, German Geometry Practice S. German Geometry Practice S - Displaying top 8 worksheets found for this concept.. Some of the worksheets for this concept are Chapter 10 geometry test pdf, Practice work 10 6 geometry form g pdf, Glencoe geometry practice workbook answers 2 5 pdf, Glencoe geometry chapter 6, Holt mcdougal algebra 2 practice work answers pdf, 3 d figures geometry 8, Angles and. Geometry and Measurement Test 4 henry.k12.ga.us 6th grade geometry test review Geometry Quiz Quizizz. Geometry. 6th grade. Geometry. Skill Summary Legend (Opens a modal) Areas of parallelograms. Areas of triangles. Area of composite figures . Quiz 1: 6 questions Practice what you’ve learned, and level up on the above skills. Geometric solids (3D shapes) Volume with fractions. Quiz 2: 5 questions Practice what you’ve learned, and level up on the above skills. Surface area. Quiz 3: 5, ~Geometry Test No Prep!~ Answer key included!-Includes 6 Pages for your Geometry Test using multiple choice, free response, and matching. 2nd copy that does not say test to be used for a station if you like. -Has 2D & 3D shapes for students to apply using their knowledge.-I have included a pop u. CBSE Class 6 Practical Geometry Worksheet Practice. Kumon Geometry Grade 6-8.pdf - Free download Ebook, Handbook, Textbook, User Guide PDF files on the internet quickly and easily., » Smart-Kids Practice test Mathematics Grade 6 Smart-Kids Practice test Mathematics Grade 6 Smart-Kids Practice test Mathematics Grade 6. Geometry 6th grade Math Khan Academy IXL Learn grade 6 math. Play this game to review Geometry. Find the area of a parallelogram with a base of 30 m and a height of 25 m. Preview this quiz on Quizizz. Find the area of a parallelogram with a base of 30 m and a height of 25 m. 6th grade geometry test review DRAFT. 5th - 7th grade. 169 times. Mathematics. 57% average accuracy. 3 years ago. moening. 0. Save. Edit. Edit. 6th grade geometry test review DRAFT Grade 6 Geometry. and Spatial Sense. Ontario Educational Resources Bank (OERB) Activities. Transformations (Continued) Activity. Description. Transformations All Around Us! 
Practise identifying transformations by . analyzing patterns in designs for rotations, reflections and translations. Resource ID: E. L. O. 1. 4. 1. 1. 7. 2. 0; Now, let's practise before your test! Click Rotate to find the. GEOMETRY TEST GRADE 6 kcatm.net Grade 6 Mathematics Practice Test Nebraska Grade 6 Geometry Worksheets Ontario Shape Triangle Grade 6 Mathematics Released Test Spring 2014 Answer Key 4MC A 002 Computation and Estimation 5MC B 002 Computation and Estimation 6MC C 002 Computation and Estimation 7MC C 002 Computation and Estimation 8MC B 001 Number and Number Sense 9MC C 001 Number and Number Sense Grade 6 Mathematcis Page 1. Sequence Number Item Type: Multiple Choice (MC) or … A The highest score on the test is 90. B The range of the test scores is 15. C The median test score is 85. D The mean test score is 85. 60 65 70 75 80 85 90 95 100 105 Test Scores 34 A bag contains 4 red, 5 green, 3 blue, and 6 yellow tiles of equal size and shape. One tile … Free Fourth Grade Geometry PDF Worksheets Worksheet #6 Free Fourth Grade Geometry PDF Worksheets Worksheet #7 Free Fourth Grade Geometry PDF Worksheets Worksheet #8 Play this game to review Geometry. Find the area of a parallelogram with a base of 30 m and a height of 25 m. Preview this quiz on Quizizz. Find the area of a parallelogram with a base of 30 m and a height of 25 m. 6th grade geometry test review DRAFT. 5th - 7th grade. 169 times. Mathematics. 57% average accuracy. 3 years ago. moening. 0. Save. Edit. Edit. 6th grade geometry test review DRAFT Grade 6 geometry worksheets ontario. I love to hear from my readers, and with a little feedback and a few suggestions I can make this a great resource for parents, teachers and tutors alike. The free trial includes optional free reading and math assessments. Unit 6 Grade 7 Geometry Lesson Outline BIG PICTURE Students will: • investigate geometric properties of triangles, quadrilaterals, and prisms; • develop an understanding of similarity and congruence. Day Lesson Title Math Learning Goals Expectations 1 Measuring and Bisecting Angles • Construct acute, obtuse, right, and reflex angles. • Estimate angle sizes and measure with a protractor Geometry EOC Practice Test #1 Multiple Choice Identify the choice that best completes the statement or answers the question. ____ 1. Write a conditional statement from the following statement: A horse has 4 legs. a. If it has 4 legs, then it is a horse. b. Every horse has 4 legs. c. If it is a horse, then it has 4 legs. d. It has 4 legs and it Kumon Geometry Grade 6-8.pdf - Free download Ebook, Handbook, Textbook, User Guide PDF files on the internet quickly and easily. CALIFORNIA STANDARDS TEST Released T est Questions Geometry Introduction - Geometry The following released test questions are taken from the Geometry Standards Test. This test is one of the California Standards Tests administered as part of the Standardized Testing and Reporting (STAR) Program under policies set by the State Board of Education. Grade 6 math IXL offers hundreds of grade 6 math skills to explore and learn! Not sure where to start? Go to your personalized Recommendations wall and choose a skill that looks interesting!. IXL offers hundreds of grade 6 math skills to explore and learn! Geometry. 6.G.A.1 — . Find the area of right triangles, other triangles, special quadrilaterals, and polygons by composing into rectangles or decomposing into triangles and other shapes; apply these techniques in the context of solving real-world and mathematical problems. 
Geometry and Measurement Test 7 Geoff Chandler Fourth Grade Mathematics 4 Test. 10. What is the volume of the figure below? A. 12 cubic units B. 24 cubic units C. 28 cubic units D. 48 cubic units 11. Solve. A. $13.75 B. $13.85 C. $14.75 D. $14.85 Name: Date: Test: Teacher: Geometry and Measurement Test 7 Geoff Chandler Fourth Grade Mathematics 5 Test. 12. Each square is 1 sq cm. What is the
{"url":"https://resourcewp.com/port-alberni/grade-6-geometry-test-pdf.php","timestamp":"2024-11-12T06:15:57Z","content_type":"text/html","content_length":"67802","record_id":"<urn:uuid:936aea7f-e9e1-48d7-b565-52f3c8089114>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00308.warc.gz"}
Workshop in The HUJI-BGU-Zoom Workshop in Arithmetic The HUJI-BGU-Zoom Workshop in Arithmetic meets twice a semester, alternating locations, starting from the year 5780. All are welcome. The workshop is currently organized by Ari Shnidman (HUJI) and Daniel Disegni (BGU). 4th meeting: Monday and Wednesday, June 29 and July 1st, 2020 - Zoom The two days will be dedicated to talks by students on a topic related to their current or future research. Zoom Meeting ID: 919 4923 6705 Password: 679062 Monday June 29: 14:00-- Francesco Saettone (BGU), Lubin-Tate formal groups Wednesday July 1: 14:30-- Ido Karshon (HUJI), Elliptic curves have infinitely many primes of supersingular reduction 15:15-- Yotam Svoray (BGU), On the global equivalence class of discriminants of ordinary transversal type singularities 16:00-- Arnon Hod (BGU), Action of local groups of Lie type on basic affine space and intertwining operators Abstracts of talks Francesco Saettone Motivated by an analogy with the theory of complex multiplication on elliptic curves, Lubin and Tate showed in 1965 how formal groups over local fields can be used to deduce several foundational theorems of local class field theory, beginning by explicitly describing the maximal abelian extension K^ab of a local field K. Lubin-Tate formal groups can be also used to construct the Artin map, named after a similar construction for the global case by Emil Artin. The Artin map gives an isomorphism between the subgroup Gal(K^ab/K^ur) and the integral units of the local field K. Ido Karshon (14:30-15:00) Elliptic curves over finite fields come in two flavors, ordinary and supersingular. In the 1980's Elkies proved that for any elliptic curve E over Q, there are infinitely many primes p such that E is supersingular when reduced modulo p. We will give definitions, motivate the problem, and sketch his short but tricky proof. Yotam Svoray (15:15-15:50) Turning smooth objects over C into singular objects in many cases simplifies the object but also preserves many geometric properties. A natural question to ask is how do neighborhoods of the singular points look like, and does it matter which singular point we choose? More specifically, if the singular locus of X is non-isolated, then what can we say about how X looks around the different points of Sing(X)? In this talk we will discuss a subscheme called "the transversal discriminant" in the case where X is a hypersurface, which lets us understand properties regarding the transversality of Sing(X). We will see that it is in fact a Cartier divisor and that we can compute its equivalence class in Pic(Sing(X)), and show how this helps us compute a bound on the jumps of multiplicity in Arnon Hod (16:00-16:30) The study of representation theory is the understanding of as much symmetry as possible on a given vector space. We will study the space of square integrable functions on a two dimensional vector space over a p-adic field. By noting group actions on this space we present a decomposition of this space. Some of the group actions we present can be chosen in a non-unique way. We present a theorem classifying all such group actions. Dates to be saved Past meetings 3rd meeting, March 30th, 2020 (Zoom) 2nd meeting, January 13th, 2020 (BGU) 1st meeting, December 16th, 2019 (HUJI) HUJI: Einstein Institute of Mathematics, Hebrew University Giv'at Ram Campus, Jerusalem. BGU: Deichmann Building for Mathematics (building 58), Ben-Gurion University of the Negev, Be'er Sheva. 
Zoom: www.zoom.us Directions to the physical locations: you may use Google Maps or Moovit. (Please note that Google is sometimes optimistic about the travel time of local buses.)
{"url":"https://math.huji.ac.il/~shnidman/HBWA4.html","timestamp":"2024-11-02T04:57:35Z","content_type":"text/html","content_length":"7992","record_id":"<urn:uuid:8be51379-bedf-42bc-ab97-b36c880f0ba6>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00796.warc.gz"}
1D dynamics of a second-grade viscous fluid in a constricted tube Carapau, F.; Sequeira, Adélia Banach Center Publ., Inst. Math, Polish Acad. Sc., 81 (2008), 95-103 Using a one-dimensional hierarchical model based on the Cosserat theory approach to fluid dynamics we can reduce the full 3D system of equations for the axisymmetric unsteady motion of a non-Newtonian incompressible second-grade viscous fluid, to a system of equations depending on time and on a single spatial variable. From this new system we obtain the steady relationship between average pressure gradient and volume flow rate over a finite section of a straight constricted tube, and the corresponding equation for the wall shear stress.
{"url":"https://cemat.tecnico.ulisboa.pt/document.php?project_id=6&member_id=83&doc_id=1683","timestamp":"2024-11-07T09:04:22Z","content_type":"text/html","content_length":"8557","record_id":"<urn:uuid:e9242e38-68e2-4238-9a95-73d174592172>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00866.warc.gz"}
SDUST2020 MSS: a global 1′ × 1′ mean sea surface model determined from multi-satellite altimetry data
Articles | Volume 15, issue 1
© Author(s) 2023. This work is distributed under the Creative Commons Attribution 4.0 License.

SDUST2020 MSS: a global 1′ × 1′ mean sea surface model determined from multi-satellite altimetry data

This study focuses on the determination and validation of a new global mean sea surface (MSS) model, named the Shandong University of Science and Technology 2020 (SDUST2020), with a grid size of 1′ × 1′. This new model was established with a 19-year moving average method and fused multi-satellite altimetry data over a 27-year period (from January 1993 to December 2019). The data of HaiYang-2A, Jason-3, and Sentinel-3A are ingested for the first time in the SDUST2020 MSS and have not been used in any other global MSS model, such as the CLS15 and DTU18 MSS models. Validations, including comparisons with the CLS15 and DTU18 MSS models, GPS-leveled tide gauges, and altimeter data, were performed to evaluate the quality of the SDUST2020 MSS model, all of which showed that the SDUST2020 MSS model is accurate and reliable. The SDUST2020 MSS dataset is freely available at the site (data DOI: https://doi.org/10.5281/zenodo.6555990, Yuan et al., 2022).

Received: 24 May 2022 – Discussion started: 23 Jun 2022 – Revised: 11 Dec 2022 – Accepted: 17 Dec 2022 – Published: 09 Jan 2023

The mean sea surface (MSS) is a relative steady-state sea level within a finite period with important applications in geodesy, oceanography, and other disciplines (Andersen and Knudsen, 2009; Schaeffer et al., 2012; Andersen et al., 2018; Pujol et al., 2018; Guo et al., 2022). It is obtained by time averaging the instantaneous sea surface height (SSH) observed by an altimeter over a finite period (Andersen and Knudsen, 2009). However, the sea level contains information about ocean variation at multiple time scales, such as seasonal and interannual variation. To completely separate the mean and time-varying parts of sea level, it is necessary to continuously collect SSH data in time and space. As a result, establishing an MSS model that accurately filters time-varying sea-level signals and obtains high-resolution mean SSH data within a finite period is challenging.

Since the 1970s, continuous efforts have been made to establish an optimal MSS model after the success of GEOS-3 satellite altimetry data. Every update of the satellite altimetry data is accompanied by the establishment of new MSS models. The precision and grid size of MSS models have been gradually improved and enhanced with the development of satellite altimetry techniques. As such, it can be said that the development of MSS models is the epitome of the development history of satellite altimetry technology. At present, only two research institutions, the Centre National d'Etudes Spatiales (CNES) and the Space Research Center of the Technical University of Denmark (DTU), are updating and publishing new MSS models. The series MSS models CNES_CLS11 (Schaeffer et al., 2012), CNES_CLS15 (Pujol et al., 2018), and CNES_CLS19 (still being computed) were published by CNES, while the series MSS models DTU10 (Andersen et al., 2010), DTU13 (Andersen et al., 2015), DTU15 (Andersen et al., 2016), and DTU18 (Andersen et al., 2018) were published by DTU.
Among them, CNES_CLS15 (CLS15) and DTU18 are the latest MSS models, which have the same fundamental elements, including the mean profiles of TOPEX/Poseidon (T/P), Jason-1, and Jason-2 from 1993 to 2012. They also have a grid size of 1′ × 1′. However, the spatial coverage and altimetry data used are different. For example, the global coverage of the CLS15 model is 80° S–84° N, while that of the DTU18 model is 90° S–90° N. The CLS15 model ingests the exact repeat mission (ERM) data (T/P, Jason-1, Jason-2, ERS-2, Envisat, GFO), as well as the geodetic mission (GM) data (ERS-1/GM, Jason-1/GM, CryoSat-2). Compared with CLS15, DTU18 replaces GFO data with Satellite with ARgos and ALtika (SARAL)/ERM data and ERS-1/GM data with SARAL/Drifting Phase (DP) data.

With the continuous development of satellite altimetry technology, the types and quantity of available SSH data are also increasing. The SSH data can be obtained from both in-orbit and newly launched altimetry satellites. Multi-satellite altimetry data were fused to establish an MSS model over a long time span. Among the altimeter data, HaiYang-2A (HY-2A), Jason-3, and Sentinel-3A have not been ingested in any global MSS model (e.g., CLS15 and DTU18). In this study, these altimeter data will be used together with other altimeter data (e.g., T/P, Jason-1, Jason-2, ERS-1, ERS-2, Envisat, GFO, CryoSat-2, and SARAL) to establish a new global MSS model. Ocean tides are one of the main sources of error that affect the quality of altimetry data. However, after tidal error correction, a residual error that cannot be ignored in an MSS model remains (Yuan et al., 2020). Therefore, a new method, the 19-year (corresponding to the 18.61-year cycle signal of ocean tide) moving average method, was used to establish a global MSS model. This new method has been proven to be effective in improving the accuracy of the established MSS model proposed by Yuan et al. (2020).

The focus of the paper is the establishment and validation of a new global MSS model, named the Shandong University of Science and Technology 2020 (SDUST2020) model, with a grid size of 1′ × 1′, built with the 19-year moving average method from multi-satellite altimetry data covering the period 1993 to 2019. Besides the introduction, the paper is composed of the following five sections. Sections 2 and 3 introduce the altimeter data used in this study and the data processing methodology, respectively. Section 4 presents the results and discussions, as well as the SDUST2020 model. Section 5 validates the SDUST2020 model and Sect. 6 is the conclusion.

2.1 Satellite altimetry data

The multi-satellite altimetry data used in this study were selected from the along-track Level-2P (L2P; version_02_00) products released by the Archiving, Validation and Interpretation of Satellite Oceanographic Data (AVISO) (CNES, 2020). The L2P products contained multi-satellite altimetry data, including ERS-1, T/P, ERS-2, GFO, Jason-1, Envisat, Jason-2, CryoSat-2, HY-2A, SARAL, Jason-3, Sentinel-3A, and Sentinel-3B. They are generated from the 1 Hz mono-mission along-track altimetry data through various error corrections, data editing and quality control, unification of the reference ellipsoids (adjusted to the same reference ellipsoid as T/P), and other data processing (CNES, 2020).
The error corrections for each mission are detailed in the along-track L2P products handbook (CNES, 2020); they include instrumental errors, environmental perturbations (wet tropospheric, dry tropospheric, and ionospheric effects), ocean sea state bias, tide effects (ocean tide, solid earth tide, and pole tide), and atmospheric pressure (combined atmospheric correction: high-frequency fluctuations of the sea surface topography and the inverted barometer height correction). The effects of the ocean tide for all altimeter missions are corrected with the FES2014 ocean tide model (Carrère et al., 2014). The purpose of data editing and quality control is to select valid measurements over the ocean according to the data editing criteria. The editing criteria are defined as minimum and maximum thresholds for altimeter, radiometer, and geophysical parameters (detailed in the along-track L2P products handbook). After data editing and quality control, data of poor quality near the coastline have been eliminated (CNES, 2020). The multi-satellite altimetry data spanning from 1 January 1993 to 31 December 2019 selected from the L2P products are shown in Table 1. The purpose of selecting full-year ERM data is to remove seasonal and interannual signals in the altimeter data (Schaeffer et al., 2012; Pujol et al., 2018). The ERS-1/GM, CryoSat-2, Jason-1/GM, HY-2A/GM, and SARAL/DP data were used to improve the spatial resolution of the MSS model. All ERM and GM data were jointly used to establish the SDUST2020 model. 2.2 Data of GPS-leveled tide gauges around Japan The tide gauge data were downloaded from the Permanent Service for Mean Sea Level (PSMSL) website (https://psmsl.org/, last access: 5 January 2023). The PSMSL is responsible for the collection, publication, analysis, and interpretation of sea-level data from the global network of tide gauges (Holgate et al., 2013). It provides the monthly and annual mean values for each tide gauge, which were reduced to a common datum called the revised local reference (RLR) datum. This reduction was performed by the PSMSL using the tide gauge datum history provided by the supplying authority. To avoid negative numbers in the resulting RLR monthly and annual mean values, an offset of 7000 mm was also applied. The GPS station data were obtained from the Système d'Observation du Niveau des Eaux Littorales (SONEL) website (https://www.sonel.org/, last access: 5 January 2023). SONEL provides the ULR6b GPS daily data calculated by the University of La Rochelle (ULR) with the GAMIT/GLOBK software, and the GPS data have been corrected for events such as earthquakes (Santamaria-Gomez et al., 2017). The sea level observed by the satellite altimeter is relative to the reference ellipsoid, whereas the sea level obtained from tide gauges is relative to a certain benchmark (e.g., the RLR). Therefore, there are differences between the two surfaces. Fortunately, the ellipsoidal height of the RLR can be obtained from observations of GPS receivers installed at the tide gauges, which can be used to refer the sea level obtained by the tide gauges to the reference ellipsoid. Figure 1 shows the relationship between the sea level observed by the satellite altimeter relative to the reference ellipsoid, the sea level obtained from the tide gauges relative to the RLR, and the height of the RLR derived from GPS relative to the reference ellipsoid. There are 34 tide gauges around Japan listed on the PSMSL website that have continuous annual data spanning from 1993 to 2019 and joint GPS data.
The information on the 34 tide gauge stations and the joint GPS stations around Japan is provided in the Appendix of this study. The data from the GPS-leveled tide gauges around Japan were selected to validate the SDUST2020 MSS model. Figure 2 shows the data processing procedure used to establish the SDUST2020 model. First, the multi-satellite altimetry data (Table 1), spanning from 1 January 1993 to 31 December 2019 and selected from the L2P products, were grouped into 19-year-long moving windows shifted by 1 year, starting in January 1993, so that nine groups of multi-satellite altimetry data were obtained. Second, the multi-satellite altimetry data of each group were processed independently to establish a global MSS model, including the collinear adjustment of the ERM data, the ocean variability correction of the GM data (addressed by objective analysis and polynomial fitting interpolation), the multi-satellite joint crossover adjustment, and the least-squares collocation (LSC) technique for gridding. Third, MSS models with a grid size of 1′ × 1′ were established, resulting in nine MSS models with the same grid size. Finally, the SDUST2020 model was obtained as the weighted average of the nine models, with the weights given by the reciprocal of the square of the estimated SSH error (derived from the LSC technique for gridding) at each grid point. The calculation method is shown in Eqs. (1) and (2):
$\mathrm{mssh}_{i,\mathrm{SDUST2020}}=\frac{\sum_{j=1}^{9}\mathrm{mssh}_{i,j}/(\mathrm{err}_{i,j})^{2}}{\sum_{j=1}^{9}1/(\mathrm{err}_{i,j})^{2}}$, (1)
$\mathrm{err}_{i,\mathrm{SDUST2020}}=\frac{1}{\sqrt{\sum_{j=1}^{9}1/(\mathrm{err}_{i,j})^{2}}}$, (2)
where $\mathrm{mssh}_{i,\mathrm{SDUST2020}}$ and $\mathrm{err}_{i,\mathrm{SDUST2020}}$ are the SSH and the error of the SSH at grid point i in the SDUST2020 model, respectively, and $\mathrm{mssh}_{i,j}$ (j = 1, …, 9) and $\mathrm{err}_{i,j}$ (j = 1, …, 9) are the SSH and the error of the SSH at grid point i in each of the nine MSS models, respectively.
3.1 Ocean variability correction
The correction of altimeter data for ocean variability is a major challenge when attempting to establish an MSS model (Schaeffer et al., 2012; Pujol et al., 2018). Because the ground tracks of altimetry satellites with an ERM coincide with each other, the ocean variability correction of the ERM data can be solved using the collinear adjustment method. This method not only makes it possible to remove ocean variability (seasonal and interannual) but also yields the mean along-track SSH. The collinear adjustment method used in this study is the same as that described by Yuan et al. (2020). Because GM data do not have the repeat-period characteristics of ERM data, the ocean variability correction of GM data cannot be addressed by collinear adjustment. Fortunately, the ocean variability affecting the GM data can be obtained from ERM data acquired at the same time. For example, the ERS-1/GM data contain the same ocean variability as the T/P data for the same period (1994–1995). Currently, the main methods for correcting GM data for ocean variability are objective analysis or the use of polynomial functions (e.g., polynomial fitting interpolation, PFI).
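As a concrete illustration of the weighted combination in Eqs. (1) and (2), the following sketch (not part of the original study; the array names, shapes, and NumPy implementation are assumptions made here purely for illustration) merges nine gridded MSS solutions into a single grid using inverse-square-error weights.

import numpy as np

def combine_mss(mssh, err):
    # mssh, err: arrays of shape (9, nlat, nlon) holding the gridded SSH and its
    # estimated error for each of the nine 19-year moving-window MSS models.
    w = 1.0 / err**2                                       # weight = 1 / err^2
    mssh_final = (w * mssh).sum(axis=0) / w.sum(axis=0)    # Eq. (1)
    err_final = 1.0 / np.sqrt(w.sum(axis=0))               # Eq. (2)
    return mssh_final, err_final

Grid points at which a window has no estimate would need to be masked before the sums are taken; the sketch ignores that detail.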
The objective analysis method is considered to be the best method for correcting the ocean variability of GM data (Schaeffer et al., 2012; Pujol et al., 2018) and has been successfully applied in the establishment of MSS models such as CLS11 and CLS15. It can be used to interpolate the ocean variability of one or more missions considered as a reference at the spatial and temporal positions of the satellite to be corrected for ocean variability (Schaeffer et al., 2012). The objective analysis method used in this study is described by Yuan et al. (2021), and further details are provided by Le Traon et al. (1998, 2001, 2003) and Ducet et al. (2000). The T/P series (referring to T/P, Jason-1, Jason-2, and Jason-3) satellite altimetry data are widely known to have the highest measurement accuracy. Therefore, the mean along-track SSH of the continuous T/P series during 1993–2019 is used as the basis for calculating the ocean variability of the ERM data. The orbit inclination of the T/P series satellites is approximately 66°, whereas that of the GM satellites is usually greater than 66°. For example, the orbital inclinations of ERS-1/168, HY-2A/GM, SARAL/DP, and CryoSat-2 are 98.52, 99.3, 98.55, and 92°, respectively. Therefore, the objective analysis method can only correct the ocean variability of GM data within the latitude range of 66°S to 66°N, whereas that beyond 66°S or 66°N cannot be corrected. In this study, when correcting the GM data (such as ERS-1/168, HY-2A/GM, SARAL/DP, and CryoSat-2) for ocean variability, the objective analysis method was adopted for GM data between 66°S and 66°N, whereas the PFI method was adopted for GM data beyond 66°S or 66°N. The basic principle of the PFI method can be expressed as follows: first, a fitting polynomial is used to fit the gridded sea-level variation time series to extract the ocean variability, with the fitting parameters solved by least squares; second, the ocean variability of the GM data (beyond 66°S or 66°N) is interpolated with time as the independent variable to complete the ocean variability correction of the GM data. The gridded sea-level variation time series is the monthly averaged gridded sea-level variation between 1993 and 2019 provided by AVISO, with a grid size of 15′ × 15′. The fitting polynomial is as follows (Andersen and Knudsen, 2009; Jin et al., 2016):
$y = k + B\,t + C\cos(2\pi t) + D\sin(2\pi t) + E\cos(4\pi t) + F\sin(4\pi t)$, (3)
where y is the sea-level variation time series, t is the time, k is the bias, B is the trend, C and D are the coefficients of the annual signal, and E and F are the coefficients of the semi-annual signal.
3.2 Crossover adjustment
The crossover adjustment is an important method for the data fusion of multi-satellite altimetry (Huang et al., 2008). The crossover adjustment method used in this study was performed in two steps: (i) condition adjustment at the crossover points and (ii) filtering and predicting the observational corrections along each track. This crossover adjustment method has been described in detail by Huang et al. (2008) and Yuan et al. (2020).
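To make the PFI step of Eq. (3) above concrete, the short sketch below (an illustration only; the sampling of the grid-cell time series and the NumPy-based implementation are assumptions, not the code used in the study) fits the bias, trend, and annual and semi-annual harmonics by least squares and then evaluates the fitted model at the epoch of a GM observation.

import numpy as np

def fit_pfi(t, y):
    # t: observation times in years; y: sea-level variation at one grid cell.
    A = np.column_stack([np.ones_like(t), t,
                         np.cos(2*np.pi*t), np.sin(2*np.pi*t),
                         np.cos(4*np.pi*t), np.sin(4*np.pi*t)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # least-squares estimates of k, B, C, D, E, F
    return coef

def eval_pfi(coef, t):
    # Evaluate Eq. (3) at epoch t: the interpolation step of the PFI correction.
    k, B, C, D, E, F = coef
    return (k + B*t + C*np.cos(2*np.pi*t) + D*np.sin(2*np.pi*t)
            + E*np.cos(4*np.pi*t) + F*np.sin(4*np.pi*t))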
In the crossover adjustment, an error model is established to reflect the combined effect of systematic errors (which vary in very complicated ways) on the altimeter data. These errors include the radial orbit error, residual ocean variation, residual geophysical corrections, and so on. The error model can be expressed as follows (Huang et al., 2008; Yuan et al., 2020, 2021):
$f(t) = a_{0} + a_{1}t + \sum_{i=1}^{n}\left[b_{i}\cos(i\omega t) + c_{i}\sin(i\omega t)\right]$, (4)
where f(t) is the systematic error; t is the observation time of the SSH; $a_{0}$, $a_{1}$, $b_{i}$, and $c_{i}$ (i = 1, …, n) are the model parameters to be solved; ω represents the angular frequency corresponding to the duration of a surveying track ($\omega = 2\pi/(T_{1}-T_{0})$, where $T_{0}$ and $T_{1}$ represent the start and end times of the surveying track, respectively); and n is a positive integer determined by the length of the track. Based on empirical evidence, n is proposed to be 1–2 for a short track, 3–5 for a middle-long track, and 6–8 for a long track (Huang et al., 2008; Yuan et al., 2020). Because the mean along-track SSH of the continuous T/P series derived from the collinear adjustment is used as the basis of the MSS model, it remains unchanged, and the crossover differences are corrected only for the other satellite altimetry data in the process of the multi-satellite joint crossover adjustment. The details of the crossover adjustment method used in this study are discussed in Yuan et al. (2020, 2021). In this study, the LSC technique (Hwang, 1989; Rapp and Bašić, 1992) was used for gridding, which has previously been shown to be the most suitable method for gridding (Jin et al., 2016). In the gridding process with the LSC, a second-order Markov process is used to describe the two-dimensional isotropic covariance function, in order to obtain prior statistical information about the altimeter data and improve the accuracy of gridding. This process can be expressed as follows (Jordan, 1972; Moritz, 1978):
$D(d) = D_{0}\,(1 + d/\alpha)\,e^{-d/\alpha}$, (5)
where d is the two-dimensional distance between the observation point and the grid point; $D_{0}$ is the local variance parameter, which can be expressed as the variance of all observed data participating in the gridding within the local range; and α is the correlation length (the distance at which a 50 % correlation is obtained). Moreover, an accuracy of $1/\sqrt{2}$ times the single-satellite crossover differences after the crossover adjustment was introduced into the LSC as the noise of the corresponding satellite data. In the gridding process, the number of observation points within the range determined by the given search radius needs to be no less than 20, and the search radius is usually twice the grid spacing (e.g., 1′). When the number of observation points within a given search radius is less than 20, the search radius is appropriately expanded until the condition is met. The search method ensures at least five observation points in each of the four quadrants centered on the grid point within the specified search range. The purpose of this method is to ensure that the observation points around the grid point are uniformly distributed, which is conducive to ensuring the accuracy of the gridded data.
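The covariance model of Eq. (5) is straightforward to evaluate; the sketch below is illustrative only (D0, the correlation length, the distance matrices, and the NumPy-based prediction step are all assumptions made for the example, not the study's actual code) and shows the covariance function together with a minimal LSC prediction of the SSH at one grid node from nearby observations.

import numpy as np

def markov_cov(d, D0, alpha):
    # Second-order Markov covariance, Eq. (5): D(d) = D0 * (1 + d/alpha) * exp(-d/alpha)
    return D0 * (1.0 + d / alpha) * np.exp(-d / alpha)

def lsc_predict(dist_obs_obs, dist_grid_obs, obs_val, D0, alpha, noise_sd):
    # dist_obs_obs: distances between the selected observations (n x n)
    # dist_grid_obs: distances from the grid node to the observations (n,)
    # noise_sd: per-observation noise, e.g. 1/sqrt(2) of the post-adjustment crossover SD
    C_oo = markov_cov(dist_obs_obs, D0, alpha) + np.diag(noise_sd**2)  # signal + noise covariance
    C_go = markov_cov(dist_grid_obs, D0, alpha)                         # node-to-observation covariance
    return C_go @ np.linalg.solve(C_oo, obs_val)                        # predicted SSH at the node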
To improve the computational efficiency of gridding with the LSC, the globe was divided into several blocks, namely 20° × 20° blocks in the range of 80°S–60°N and 0–360°, giving 126 blocks, and 24° × 20° blocks in the range of 60–84°N and 0–360°, giving 18 blocks. In this way, the globe was divided into 144 blocks, of which only 141 contain SSH observations; two blocks (40–60°N, 60–100°E) on the Asian continent and one block (40–60°N, 240–260°E) on the American continent have no SSH observations. After gridding these 141 blocks, the 141 blocks of gridded SSH data were merged. When merging, the SSH of the grid points on the repeated latitude and longitude lines was taken as the weighted average of the SSH of the grid points in the two adjacent blocks, with the weight determined by the reciprocal of the square of the SSH error estimate at the grid points, to obtain the final gridded global MSS model.
4.1 Processing results and analysis of altimetry data
Ocean variability correction can eliminate or weaken the influence of long-wavelength sea-level variation signals, part of the satellite radial orbit errors, and the residual errors remaining after the correction of geophysical and environmental errors. Ocean variability correction was conducted for the altimeter missions in Table 1 over the global ocean, and the SSHs of these missions before and after the ocean variability correction were compared with those of the SDUST2020 model. The statistical results of the comparisons are shown in Table 2, which shows the impact of removing the ocean variability. As shown in Table 2, the magnitude of the RMS (between the SSH of each satellite altimetry mission and the SDUST2020 model) was reduced from decimeters before the ocean variability correction to centimeters after the correction. The RMS of the T/P series (T/P + Jason-1 + Jason-2 + Jason-3) after the ocean variability correction was the smallest (0.0119 m). Figures 3 and 4 show what can be achieved by correcting the ocean variability of Jason-1/GM. Figure 3 shows the differences between the SSHs of Jason-1/GM and the SDUST2020 model when the ocean variability has not been corrected. Before applying this correction, the differences in SSHs were dominant in the western boundary currents. However, these differences improved significantly after the correction for ocean variability (Fig. 4). All of the altimeter missions listed in Table 1 underwent a self-crossover adjustment after the correction of ocean variability was completed. Table 3 presents the statistical results of the crossover differences of these missions before and after the self-crossover adjustment. It can be seen from the results in Table 3 that the accuracy of all missions was greatly improved after the self-crossover adjustment. The accuracy of the ERM data was improved by approximately 1 cm, from 1–2 cm before the adjustment to approximately 1 cm after the adjustment, while that of the GM data was improved by approximately 2 cm, from 7–9 to 6–7 cm. Moreover, the accuracy of the ERM data (average accuracy of approximately 1 cm) was much higher than that of the GM data (average accuracy of approximately 6 cm), and the accuracy of the different missions also differed. Therefore, the accuracy of each mission is taken into account in the process of the multi-satellite joint crossover adjustment and gridding with the LSC.
4.2 Establishment of the SDUST2020 model
According to the data processing procedure in Fig. 2, the SDUST2020 model was established using the 19-year moving average method from the multi-satellite altimetry data shown in Table 1. The SDUST2020 model is illustrated in Fig. 5; it has a grid size of 1′ × 1′, a global coverage of 80°S–84°N, and a reference period spanning from 1 January 1993 to 31 December 2019. As shown in Fig. 5, the global MSS is generally uneven, with the highest SSH of approximately 88 m and the lowest SSH of approximately −106 m, a difference of approximately 194 m.
5 Comparison and validation
Several independent methods were used to validate the SDUST2020 model. First, we inspected the differences with respect to other MSS models, such as CLS15 and DTU18; then we compared the MSS models with the data of the GPS-leveled tide gauges around Japan; and finally we compared them with independent altimeter data, including ERM and GM data.
5.1 Comparison with the CLS15 and DTU18 models
The CLS15 and DTU18 models are representative MSS models published by different institutions (CLS15 published by CLS and CNES, and DTU18 published by DTU). In this study, these two models were used to validate the SDUST2020 model. Table 4 shows the information for the SDUST2020, CLS15, and DTU18 models. The main differences between SDUST2020, CLS15, and DTU18 are the reference period and the altimeter data ingested. The reference period of SDUST2020 is 1993–2019, while that of CLS15 and DTU18 is 1993–2012. Compared to CLS15 and DTU18, SDUST2020 ingests more altimeter data. Among the altimeter data, Jason-3, HY-2A, and Sentinel-3A (ingested in the SDUST2020 model) were used to establish an MSS model for the first time. * T/P for TOPEX/Poseidon, J1 for Jason-1, J2 for Jason-2, J3 for Jason-3, E1 for ERS-1, E2 for ERS-2, EN for Envisat, H2A for HY-2A, C2 for CryoSat-2, S3A for Sentinel-3A, SA for SARAL. Table 5 shows the comparative statistical results of the SDUST2020, CLS15, and DTU18 models in terms of SSH. In the comparison, the ocean variability caused by averaging over distinct periods (27 years from 1993 to 2019 for SDUST2020 and 20 years from 1993 to 2012 for CLS15 and DTU18) was removed; it was calculated from the monthly averaged gridded sea-level variation time series between 1993 and 2019 provided by AVISO, with a grid size of 15′ × 15′. In the comparison with DTU18, the SD of SDUST2020 was smaller than that of CLS15; in the comparison with CLS15, the SD of SDUST2020 was smaller than that of DTU18; and in the comparison with SDUST2020, the SD of CLS15 was smaller than that of DTU18. Therefore, it can be inferred that the accuracy of these three models, from high to low, is SDUST2020, CLS15, and DTU18. If the three models SDUST2020, CLS15, and DTU18 are not correlated with each other, then, according to the error propagation law, the SDs of the pairwise differences between the three models can be expressed as follows:
$\mathrm{SD}_{S\_C}^{2} = \mathrm{SD}_{S}^{2} + \mathrm{SD}_{C}^{2}$, $\quad \mathrm{SD}_{S\_D}^{2} = \mathrm{SD}_{S}^{2} + \mathrm{SD}_{D}^{2}$, $\quad \mathrm{SD}_{C\_D}^{2} = \mathrm{SD}_{C}^{2} + \mathrm{SD}_{D}^{2}$, (6)
where $\mathrm{SD}_{S\_C}$, $\mathrm{SD}_{S\_D}$, and $\mathrm{SD}_{C\_D}$ are the SDs of SDUST2020 compared with CLS15, SDUST2020 compared with DTU18, and CLS15 compared with DTU18, respectively, and $\mathrm{SD}_{S}$, $\mathrm{SD}_{C}$, and $\mathrm{SD}_{D}$ are the SDs of SDUST2020, CLS15, and DTU18, respectively. According to the statistical results in Table 5, the SDs of SDUST2020, CLS15, and DTU18 calculated using Eq. (6) are approximately 0.1318, 0.1613, and 0.2442 m, respectively. This result confirms that the accuracy of these three models, from high to low, is SDUST2020, CLS15, and DTU18. The results listed in Table 5 are the statistical results of the comparison between the three models over the global ocean.
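A short sketch of how the individual SDs follow from Eq. (6): given the three pairwise SDs from Table 5 (represented below by placeholder arguments, since the table values are not reproduced in the text), the system can be solved in closed form. This is only an illustration of the error-propagation step, not code from the study.

import numpy as np

def split_sds(sd_sc, sd_sd, sd_cd):
    # Solve Eq. (6) for the SDs of the individual models, assuming the three
    # models are mutually uncorrelated (sd_sc, sd_sd, sd_cd come from Table 5).
    var_s = (sd_sc**2 + sd_sd**2 - sd_cd**2) / 2.0   # SDUST2020
    var_c = (sd_sc**2 + sd_cd**2 - sd_sd**2) / 2.0   # CLS15
    var_d = (sd_sd**2 + sd_cd**2 - sd_sc**2) / 2.0   # DTU18
    return np.sqrt([var_s, var_c, var_d])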
A total of 155,330,402 grid points were counted, including grid points in the coastal regions. After outliers in the differences were rejected using a three-times-SD criterion, to avoid contamination by the poor observations around coastal regions, the results are as shown in Table 6. It can be inferred that the differences between the three models are around 1–2 cm and that the SDUST2020 and CLS15 MSS models have the best consistency. The SSH differences between the SDUST2020, CLS15, and DTU18 models at long and short wavelengths are shown in Fig. 6 (the SSH differences between SDUST2020 and CLS15), Fig. 7 (the SSH differences between SDUST2020 and DTU18), and Fig. 8 (the SSH differences between CLS15 and DTU18), which were drawn after Gaussian filtering with the tools available in the Generic Mapping Tools version 6.0 (GMT 6.0) software (Wessel et al., 2019). Similar to Andersen et al. (2018), a wavelength of 150 km was selected as the dividing line between the long and short wavelengths. As shown in Figs. 6, 7, and 8, there were no significant differences between these models at the short wavelengths (wavelengths less than 150 km), and the average differences were within 2 cm, whereas there were some significant differences at the long wavelengths (wavelengths greater than 150 km). The differences between these models at long wavelengths were mainly concentrated in the polar regions and the western boundary current regions (including the Kuroshio Current, the Gulf of Mexico, the Agulhas Current, etc.). There are two reasons for this: on the one hand, it is related to the large sea-level changes in these regions (Jin et al., 2016); on the other hand, it is also related to the different altimeter data used and the different data processing methods implemented in the modeling (Andersen and Knudsen, 2009; Schaeffer et al., 2012; Pujol et al., 2018). At the output of the optimal interpolation (the LSC technique used for gridding), a calibrated formal error was obtained. The formal error is caused by three terms: instrumental noise, a residual effect of the oceanic variability, and an along-track bias. These three terms are complementary and correspond, respectively, to white noise, a spatially correlated noise (at mesoscale wavelengths), and a long-wavelength error that is assumed to be constant along the tracks. The formal error does not match the precision of the MSS but is nonetheless an excellent indicator of the consistency of the grid (Schaeffer et al., 2012; Pujol et al., 2018). Figures 9, 10, and 11 highlight the formal errors of SDUST2020, CLS15, and DTU18, respectively, and indicate that SDUST2020 is much more homogeneous and accurate than CLS15 and DTU18. This is also confirmed by the worldwide statistics: the average and RMS of the formal error of SDUST2020 were 1.0 and 1.5 cm, while those of CLS15 were 1.4 and 1.9 cm, and those of DTU18 were 1.9 and 2.0 cm.
5.2 Comparison with GPS-leveled tide gauges
A comparison between the 34 GPS-leveled tide gauges around Japan and the SSH of the SDUST2020, CLS15, and DTU18 models was used to independently validate the accuracy differences of the models close to the coast (Andersen and Knudsen, 2009). Before the comparison, the SSH obtained from the GPS-leveled tide gauges was adjusted to the same reference ellipsoid as T/P. It is not clear how large an area of sea surface can be represented by a single tide gauge.
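Returning briefly to the wavelength separation used for Figs. 6–8 in the previous subsection: that split was produced with GMT 6.0, and a rough Python equivalent is sketched below. It is purely illustrative; the cell size, the FWHM-style cutoff conversion, and the neglect of land masking and of the latitude dependence of a 1′ cell are all simplifying assumptions, not the processing actually used.

import numpy as np
from scipy.ndimage import gaussian_filter

def split_wavelengths(diff_grid, cell_km, cutoff_km=150.0):
    # Low-pass the SSH-difference grid with a Gaussian whose full width at half
    # maximum corresponds to the cutoff; the residual is the short-wavelength part.
    sigma_cells = (cutoff_km / cell_km) / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    long_wl = gaussian_filter(diff_grid, sigma=sigma_cells)
    short_wl = diff_grid - long_wl
    return long_wl, short_wl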
The SSH of each model at the location of a tide gauge was calculated by inverse-distance weighting, using the reciprocal of the spherical distance from the tide gauge to the model grid points within a given search radius (e.g., 10, 20, 30, 40, and 50 km) centered on the tide gauge station. The SSH differences of the different models compared with the 34 GPS-leveled tide gauges around Japan for the different search radii are shown in Fig. 12, and their SDs are listed in Table 7. As shown in Fig. 12 and Table 7, the larger the search radius, the greater the difference between the models and the GPS-leveled tide gauges. In Table 7, the SD of the SSH differences between the MSS models and the GPS-leveled tide gauges reaches the decimeter level. The reason may be closely related to the poor quality of the altimeter observations near the coast. The SD of the SSH differences of SDUST2020 compared with the GPS-leveled tide gauges is smaller than those of CLS15 and DTU18, indicating that the accuracy of SDUST2020 is better than that of CLS15 and DTU18.
5.3 Comparison with altimeter data
Comparison with altimeter data can be used to estimate the accuracy of MSS models (Andersen and Knudsen, 2009; Schaeffer et al., 2012; Jin et al., 2016) and is another effective way to validate them. Several datasets were chosen, including ERM and GM data. The ERM data were the mean along-track SSH after collinear adjustment, and the GM data were not processed by the ocean variability correction. The ERM data included 1-year ERS-1, 2-year HY-2A, 2-year Jason-3, 2.5-year Sentinel-3A, and 1-year Sentinel-3B data, and the GM data included 1.5-year Envisat/GM, 2-month Jason-2/GM, and 1-year HY-2A/GM data. Among these, the data of Sentinel-3B and Envisat/GM were not ingested in the SDUST2020, CLS15, or DTU18 models, while the data of HY-2A, Jason-3, and Sentinel-3A were ingested in the SDUST2020 model but not in the CLS15 and DTU18 models. Table 8 shows the SDs of the SSH differences of the SDUST2020, CLS15, and DTU18 models compared to the altimeter data. From the results in Table 8, the differences between the SDs given by these three models are at the millimeter level, but nearly all SDs given by SDUST2020 are lower than those of CLS15 and DTU18, indicative of higher accuracy. The SDs given by these three models were approximately 4–6 cm compared with the ERM data (the first five groups) and approximately 10 cm compared with the GM data (the last three groups); the former is almost half of the latter. The reason may be that the altimeter data of the first five groups had been corrected for ocean variability, while those of the last three groups had not. To more accurately assess and quantify the differences in the model errors of SDUST2020, CLS15, and DTU18 at different wavelengths, Sentinel-3B data (1 year, 1 January–31 December 2019) were selected to calculate the sea-level anomaly (SLA) along-track based on these three models and to obtain the SLA power spectral density (PSD). Because the Sentinel-3B data are independent of these three models, the differences between the SLA PSDs of Sentinel-3B along-track calculated based on these three models reflect the differences in the errors of these three models (Pujol et al., 2018; Sun et al., 2021). Figure 13a shows the mean global SLA PSD along the Sentinel-3B tracks when the different MSS models are used. As shown in Fig.
13a, all PSDs varied with the wavelength; the longer the wavelengths, the greater the PSDs, and there were also differences between the PSDs of different MSS models for different wavelengths. Since the SSH and MSS were based on independent data and periods, for long wavelengths (e.g., wavelengths longer than 150km), the ocean variability signal dominated, for short wavelengths (e.g., wavelengths from ∼25 to 150km), the errors of MSS models dominated, while for shorter wavelengths (e.g., wavelengths shorter than 25km), the altimeter noise floor dominated (Pujol et al., 2018). The PSD of the SDUST2020 model was significantly less than that of CLS15 and DTU18, and the PSD of CLS15 was slightly smaller than that of DTU18 for wavelengths longer than 150km. The reason for the former was that the reference period of SDUST2020 (1993–2019) was longer than that of CLS15 and DTU18 (1993–2012), and the reason for the latter was that the data pre-processing method for Sentinel-3B is the same as that of the altimeter data ingested in the CLS15 model. This has also been confirmed by worldwide statistics. The average values of the SLA based on SDUST2020, CLS15, and DTU18 were 0.0155, 0.0494, and 0.0596m, respectively, and the RMS values were 0.0525, 0.07919, and 0.0829m, respectively. Figure 13b shows the ratio between the PSD curves in Fig. 13a, which can better quantify the differences between the MSS models. Compared with the CLS15 model, the errors of the SDUST2020 model improved in the wavelength range from 25 to 150km, with a maximal impact of approximately 40km, which is an improvement of approximately 15%. Table 9 lists the SD of the SLA of these three MSS models for wavelengths ranging from 25 to 150km along different altimeter tracks. As shown in Table 9, the accuracy difference among these three models was very small, all at the sub-millimetre level; however, the accuracy of SDUST2020 was slightly better than those of CLS15 and DTU18. The SDUST2020 MSS dataset is available open-access at https://doi.org/10.5281/zenodo.6555990 as an .nc file (Yuan et al., 2022). The dataset includes geospatial information (latitude and longitude) and mean sea surface height. In this study, SDUST2020, a new global MSS model, was established using a 19-year moving average method from multi-satellite altimetry data. Its global coverage was from 80^∘S to 84^∘N with a grid size of ${\mathrm{1}}^{\prime }×{\mathrm{1}}^{\prime }$ and a reference period from January 1993 to December 2019. Firstly, in comparison with the CLS15 and DTU18 models, the SDUST2020 model was innovative in the data processing method of model establishment, namely the 19-year moving average method; secondly, the reference period of the SDUST2020 model extended from 1993 to 2019, while CLS15 and DTU18 only ranged from 1993 to 2012; thirdly, the establishment of the SDUST2020 model integrated the altimeter data of HY-2A, Jason-3, and Sentinel-3A for the first time, which have not been used in the establishment of any other global MSS models. Comparing SDUST2020 with the CLS15 and DTU18 models, the results presented in this study show that the accuracy of these three models, from high to low, is SDUST2020, CLS15, and DTU18. Comparing SDUST2020, CLS15, and DTU18 with the data of GPS-leveled tide gauges around Japan and the altimeter data of several satellites, these results show that the accuracy of SDUST2020 is better than that of CLS15 and DTU18. JY presented the algorithm and carried out the experimental results. 
JY and JGu prepared the paper and figures with contributions from all the co-authors. JGu, XL and JGa polished the entire manuscript. CZ and ZL downloaded altimeter products and other products in this work. All authors checked and gave related comments for this work. The contact author has declared that none of the authors has any competing interests. Publisher’s note: Copernicus Publications remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. We are very grateful to AVISO for providing the along-track Level-2+(L2P) products and the delayed-time gridded monthly mean of sea-level anomalies product, which can be obtained by freely downloading from AVISO's official website (ftp://ftp-access.aviso.altimetry.fr, last access: 5 January 2023). We are also thankful to CLS for providing the CNES_CLS15 MSS (ftp:// ftp-access.aviso.altimetry.fr, last access: 5 January 2023) and DTU for providing the DTU18 MSS (https://ftp.space.dtu.dk/pub/, last access: 5 January 2023). The tide gauge records are available online (https://www.psmsl.org/, last access: 5 January 2023) and the GPS data are available online (https://www.sonel.org, last access: 5 January 2023). This work was partially supported by the National Natural Science Foundation of China (grant nos. 42192535 and 41774001), the Autonomous and Controllable Project for Surveying and Mapping of China (grant no. 816517), the SDUST Research Fund (grant no. 2014TDJH101), and the Scientific Research Foundation for High-level Talents of Anhui University of Science and Technology (grant no. This paper was edited by François G. Schmitt and reviewed by Haihong Wang and two anonymous referees. Andersen, O. B. and Knudsen, P.: DNSC08 mean sea surface and mean dynamic topography models, J. Geophys. Res.-Oceans, 114, 327–343, https://doi.org/10.1029/2008JC005179, 2009. Andersen, O. B., Knudsen, P., and Bondo, T.: The mean sea surface DTU10 MSS-comparison with GPS and Tide Gauges, in: ESA Living Planet Symposium, Bergen, Norway, 28 June–2 July, 2010, https:// articles.adsabs.harvard.edu/pdf/2010ESASP.686E.502A (last access: 5 January 2023), 2010. Andersen, O. B., Knudsen, P., and Stenseng, L.: The DTU13 MSS (mean sea surface) and MDT (mean dynamic topography) from 20 years of satellite altimetry, in: IGFS 2014, International Association of Geodesy Symposia, edited by: Jin, S. and Barzaghi, R., 144, Springer, Cham, https://doi.org/10.1007/1345_2015_182, 2015. Andersen, O. B., Piccioni, G., Stenseng, L., and Knudsen, P.: The DTU15 MSS (mean sea surface) and DTU15LAT (lowest astronomical tide) reference surface, in: Proceedings of the ESA Living Planet Symposium 2016, Prague, Czech Republik, 9–13 May 2016, https://ftp.space.dtu.dk/pub/DTU15/DOCUMENTS/MSS/DTU15MSS+LAT.pdf (last access: 5 January 2023), 2016. Andersen, O. B., Knudsen, P., and Stenseng, L.: A new DTU18 MSS mean sea surface–improvement from SAR altimetry, in: 25 Years of Progress in Radar Altimetry Symposium, Portugal, 24–29 September, https://ftp.space.dtu.dk/pub/DTU18/MSS_MATERIAL/PRESENTATIONS/DTU18MSS-V2.pdf (last access: 5 January 2023), 2018. Carrère, L., Lyard, F., Cancet, M., Guillot, A., and Dupuy, S.: FES 2014: a new global tidal model, OSTST Meeting, Lake Contance, Germany, http://meetings.aviso.altimetry.fr/fileadmin/user_upload/ tx_ausyclsseminar/files/29Red1100-2_ppt_OSTST2014_FES2014_LC.pdf (last access: 5 January 2023), 2014. 
CNES: Along-track level-2+ (L2P) SLA product handbook, SALPMU-P-EA-23150-CLS, Issue 2.0, https://www.aviso.altimetry.fr/fileadmin/documents/data/tools/hdbk_L2P_all_missions_except_S3.pdf (last access: 5 January 2023), 2020. Ducet, N., Le Traon, P. Y., and Reverdin, G.: Global high-resolution mapping of ocean circulation from TOPEX/Poseidon and ERS-1 and -2, J. Geophys. Res.-Oceans, 105, 19477–19498, https://doi.org/ 10.1029/2000jc900063, 2000. Guo, J., Hwang, C., and Deng, X.: Editorial: Application of satellite altimetry in marine geodesy and geophysics, Front. Environ. Sci., 10, 910562, https://doi.org/10.3389/feart.2022.910562, 2022. Holgate, S. J., Matthews, A., Woodworth, P. L., Rickards, L. J., Tamisiea, M. E., Bradshaw, E., Foden, P. R., Gordon, K. M., Jevrejeva, S., and Pugh, J.: New Data Systems and Products at the Permanent Service for Mean Sea Level, J. Coastal Res., 29, 493–504, https://doi.org/10.2112/JCOASTRES-D-12-00175.1, 2013. Huang, M., Zhai, G., Ouyang, Y., Lu, X., Liu, C., and Wang, R.: Integrated data processing for multi-satellite missions and recovery of marine gravity field, Terr. Atmos. Ocean. Sci., 19, 103–109, https://doi.org/10.3319/TAO.2008.19.1-2.103(SA), 2008. Hwang, C. W.: High precision gravity anomaly and sea surface height estimation from Geos-3/Seasat altimeter data, M.S. Thesis. Dept. of Geodetic Science and Surveying, The Ohio State University, Columbus, OH, USA, 1989. Jin, T., Li, J., and Jiang, W.: The global mean sea surface model WHU2013, Geod. Geodyn., 7, 202–209, https://doi.org/10.1016/j.geog.2016.04.006, 2016. Jordan, S. K.: Self-consistent statistical models for the gravity anomaly, vertical deflections, and undulation of the geoid, J. Geophys. Res., 77, 3660–3670, https://doi.org/10.1029/JB077i020p03660, Le Traon, P. Y., Nadal, F., and Ducet, N.: An improved mapping method of multisatellite altimeter data, J. Atmos. Ocean. Tech., 15, 522–534, https://doi.org/10.1175/1520-0426(1998)015<0522:AIMMOM> 2.0.CO;2, 1998. Le Traon, P. Y., Dibarboure, G., and Ducet, N.: Use of a high-resolution model to analyze the mapping capabilities of multiple-altimeter missions, J. Atmos. Ocean. Tech., 18, 1277–1288, https:// doi.org/10.1175/1520-0426(2001)018<1277:UOAHRM>2.0.CO;2, 2001. Le Traon, P. Y., Faugère, Y., Hernandez, F., Dorandeu, J., Mertz, F., and Ablain, M.: Can we merge GEOSAT follow-on with TOPEX/Poseidon and ERS-2 for an improved description of the ocean circulation?, J. Atmos. Ocean. Tech., 20, 889–895, https://doi.org/10.1175/1520-0426(2003)020<0889:CWMGFW>2.0.CO;2, 2003. Moritz, H.: Least-squares collocation, Rev. Geophys., 16, 421–430, https://doi.org/10.1029/RG016i003p00421, 1978. Pujol, M.-I., Schaeffer, P., Faugère, Y., Raynal, M., Dibarboure, G., and Picot, N.: Gauging the improvement of recent mean sea surface models: a new approach for identifying and quantifying their errors, J. Geophys. Res.-Oceans, 123, 5889–5911, https://doi.org/10.1029/2017JC013503, 2018. Rapp, R. H. and Bašić, T.: Oceanwide gravity anomalies from GEOS-3, Seasat and Geosat altimeter data, Geophys. Res. Lett., 19, 1979–1982, https://doi.org/10.1029/92GL02247, 1992. Santamaria-Gomez, A., Gravelle, M., Dangendorf, S., Marcos, M., Spada, G., and Wöppelmann, G.: Uncertainty of the 20th century sea-level rise due to vertical land motion errors, Earth. Planet. Sc. Lett., 473, 24–32, https://doi.org/10.1016/j.epsl.2017.05.038, 2017. Schaeffer, P., Faugére, Y., Legeais, J. 
F., Ollivier, A., Guinle, T., and Picot, N.: The CNES_CLS11 global mean sea surface computed from 16 Years of satellite altimeter data, Mar. Geod., 35, 3–19, https://doi.org/10.1080/01490419.2012.718231, 2012. Sun, W., Zhou, X., Yang, L., Zhou, D., and Li, F.: Construction of the mean sea surface model combined HY-2A with DTU18 MSS in the antarctic ocean, Front. Environ. Sci., 9, 697111, https://doi.org/ 10.3389/fenvs.2021.697111, 2021. Wessel, P., Luis, J. F., Uieda, L., Scharroo, R., Wobbe, F., Smith, W. H. F., and Tian, D.: The generic mapping tools version 6, Geochem. Geophy. Geosy., 20, 5556–5564, https://doi.org/10.1029/ 2019GC008515, 2019. Yuan, J., Guo, J., Liu, X., Zhu, C., Niu, Y., Li, Z., Ji, B., and Ouyang, Y.: Mean sea surface model over China seas and its adjacent ocean established with the 19-year moving average method from multi-satellite altimeter data, Cont. Shelf Res., 192, 104009, https://doi.org/10.1016/j.csr.2019.104009, 2020. Yuan, J., Guo, J., Zhu, C., Hwang, C., Yu, D., Sun, M., and Mu, D.: High-resolution sea level change around China seas revealed through multi-satellite altimeter data, Int. J. Appl. Earth Obs., 102, 102433, https://doi.org/10.1016/j.jag.2021.102433, 2021. Yuan, J., Guo, J., Zhu, C., Li, Z., Liu, X., and Gao, J.: SDUST2020 MSS: A global ${\mathrm{1}}^{\prime }×{\mathrm{1}}^{\prime }$ mean sea surface model determined from multi-satellite altimetry data, Zenodo [data set], https://doi.org/10.5281/zenodo.6555990, 2022.
{"url":"https://essd.copernicus.org/articles/15/155/2023/","timestamp":"2024-11-09T16:00:23Z","content_type":"text/html","content_length":"269111","record_id":"<urn:uuid:89ba7cb8-820b-4cbf-982b-7e87f6a274b3>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00780.warc.gz"}
Commutative property The sum or product of any two operands is equal to the sum or product of the same operands taken in reverse order; this is called the commutative property. Let $a$ and $b$ denote two quantities in algebraic form. Their addition is written as $a+b$ and their multiplication is written as $a \times b$. The commutative property is written in the following two forms in mathematics. The sum of two operands being equal to the sum of the same operands in reverse order is called the commutative property of addition. $\large a+b \,=\, b+a$ The product of two operands being equal to the product of the same operands in reverse order is called the commutative property of multiplication. $\large a \times b \,=\, b \times a$ The commutative property is usually used in two different cases. 1. To replace the sum of any two quantities by the sum of the same quantities in reverse order. 2. To substitute the product of any two operands by the product of the same operands in reverse order.
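For example, taking $a \,=\, 3$ and $b \,=\, 5$ gives $3+5 \,=\, 8 \,=\, 5+3$ for addition and $3 \times 5 \,=\, 15 \,=\, 5 \times 3$ for multiplication, so both forms of the property hold for these particular numbers.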
{"url":"https://www.mathdoubts.com/commutative-property/","timestamp":"2024-11-11T00:05:13Z","content_type":"text/html","content_length":"27691","record_id":"<urn:uuid:840d6639-db92-46ee-8001-a235d8d51d5d>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00097.warc.gz"}
How to Calculate Levenshtein Distance in R - VrcAcademy
To calculate the Levenshtein distance in R, you can use the stringdist() function from the stringdist package. The Levenshtein distance between two strings is the minimum number of single-character edits required to turn one string into the other. The following example shows how to do it.
Method: Use the stringdist() Function
stringdist(string1, string2, method = "lv")
The following example shows how to calculate the Levenshtein distance in R.
Use the stringdist() Function to Calculate Levenshtein Distance
Let's see how we can calculate the Levenshtein distance between two strings.
# Load library
library(stringdist)
# Calculate Levenshtein distance
l <- stringdist('Educate', 'Education', method = 'lv')
# Print Levenshtein distance
l
[1] 3
As the output shows, the Levenshtein distance between these two strings is 3.
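To see where the value 3 comes from: 'Educate' and 'Education' share the prefix 'Educat', and turning the remaining 'e' into 'ion' takes one substitution (e → i) and two insertions (o and n), that is, three single-character edits in total.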
{"url":"https://vrcacademy.com/tutorials/levenshtein-distance-in-r/","timestamp":"2024-11-06T18:52:01Z","content_type":"text/html","content_length":"48450","record_id":"<urn:uuid:bbd4f059-a337-482c-b805-43ca1e5283ec>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00898.warc.gz"}
Fundamentals of Actuarial Mathematics 3rd Edition Textbook Solution - 9781118782460 Fundamentals of Actuarial Mathematics 3rd Edition Solutions 31 students have requested homework help from this book. Provides comprehensive coverage of both the deterministic and stochastic models of life contingencies, risk theory, credibility theory, multi-state models, and an introduction to modern mathematical finance. The new edition restructures the material to fit modern computational methods and provides several spreadsheet examples throughout. Covers the syllabus for the Institute of Actuaries subject CT5, Contingencies. Includes new chapters covering stochastic investment returns and universal life insurance. Elements of option pricing and the Black-Scholes formula will be
{"url":"https://www.crazyforstudy.com/textbook-solutions/fundamentals-of-actuarial-mathematics-3rd-edition-9781118782460/","timestamp":"2024-11-10T17:19:52Z","content_type":"text/html","content_length":"25282","record_id":"<urn:uuid:79240a5a-b169-4006-8adf-b273f0123483>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00209.warc.gz"}
[JAC261] SPM Motor Magnet Coercive Force Optimization For Obtaining Sinusoidal Magnetization
In general, when the magnetic flux density in the gap approaches a sine wave, harmonic components are suppressed and efficiency improves. One method is to locally change the coercive force of the magnets used in an SPM motor to bring the gap magnetic flux density closer to a sine wave. However, the proper coercive force of the magnets is difficult to find through trial and error. In situations such as these, the coercive force distribution can be obtained efficiently by using optimization. In this example, a magnet is segmented into blocks and a coercive force is defined for each block. The coercive force of each block is then calculated through optimization so that the induced voltage waveform approaches a sine wave. The coercive force distribution obtained through optimization and the voltage waveform for that distribution are compared.
Rotor Magnetization
The target motor has 4 poles, so the angle between two poles is 90 degrees. Each pole uses a magnet divided into 9 blocks of 10 deg each. These are magnetized in the radial direction. Fig. 1 displays the rotor magnet orientation direction.
Spatial Magnetic Flux Density Evaluation
Fig. 2 shows the spatial magnetic flux density evaluation. The spatial magnetic flux density is obtained with the rotor fixed at no load. To determine how close the waveform is to a sine wave, the total harmonic distortion (THD), which represents the degree of signal distortion, is used. Since THD is the ratio of the square root of the sum of the squared harmonic components to the fundamental component, the smaller the THD, the closer the induced voltage waveform is to a sine wave. A different coercive force is set for each magnet block and the THD value is evaluated.
Optimization Conditions
Fig. 3 shows the magnet blocks used as design variables, the design variable ranges, and the objective functions. The design variables are the coercive forces of the magnet blocks, and the minimum and maximum values indicate the search range. The primary goal is to minimize THD, but in order to prevent insufficient torque due to low spatial magnetic flux density, a constraint is imposed to maximize the fundamental component. By satisfying both, the generated magnetic flux density is at a maximum and the waveform is close to a sine wave. The objective functions are set to maximize the fundamental and minimize THD using the gap magnetic flux density evaluation described above.
Pareto Curve
Fig. 4 shows the Pareto curve of the optimization results. The point shown in green is the initial design, and the points shown in blue are the results obtained from optimization. Here, the design proposal with the smallest THD among the design proposals on the Pareto curve is adopted. Compared to the initial design, the design proposal obtained by optimization has a smaller fundamental and a smaller THD.
Magnet Block Coercive Force
Fig. 5 shows the magnet block coercive forces at the blue point obtained with optimization. It can be seen from the graph that the coercive force increases near the pole center.
THD of the Initial Case and Optimum Case
Both the initial and the optimized THD are shown in Table 1. Compared to the initial case, the optimized THD is reduced to less than half.
From this, it can be seen that the optimized case has a lower ratio of spatial harmonics to the fundamental wave than the initial case, i.e., a waveform closer to a sine wave.
Spatial Magnetic Flux Density Waveform
The radial-direction spatial magnetic flux density waveforms are displayed in Fig. 6. The peaks and valleys are thought to be produced by the positions and angles of the magnets. However, by optimizing the magnet block coercive forces, it can be confirmed that the spatial magnetic flux density becomes a waveform close to a sine wave. This is consistent with the results in Table 1 above.
Induced Voltage Waveform
Fig. 7 shows the induced voltage waveform, and Fig. 8 shows the induced voltage frequency components. Similar to the earlier THD and spatial magnetic flux density waveform evaluations, it can be confirmed that the waveform approaches a sine wave through optimization.
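The THD figure used as the objective above can be reproduced from any sampled periodic waveform with a discrete Fourier transform. The sketch below is in Python/NumPy rather than JMAG, with a waveform covering exactly one electrical period assumed as input; it only illustrates the ratio described above, not the tool's implementation.

import numpy as np

def thd(waveform):
    # waveform: samples covering exactly one electrical period (voltage or flux density).
    spec = np.abs(np.fft.rfft(waveform)) / len(waveform)   # one-sided amplitude spectrum
    fundamental = spec[1]                                   # component with one cycle per period
    harmonics = spec[2:]                                    # higher-order harmonics
    return np.sqrt(np.sum(harmonics**2)) / fundamental      # THD as defined above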
{"url":"https://www.jmag-international.com/catalog/261_coercivityoptimizationspm/","timestamp":"2024-11-06T08:42:44Z","content_type":"text/html","content_length":"108334","record_id":"<urn:uuid:1b7eab84-a2d0-4789-abfb-07a9b5ae9380>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00130.warc.gz"}
Meshgrid matlab pdf gilatoria You will need to try to split the problem up or perhaps lower grid resolution if possible. Add a color bar to the graph to show how the data values in c correspond to the colors in the colormap. To me the ordering seems much more straightforward, and you arent limited to only 2 or 3 dimensions. What is the difference between the ndgrid and meshgrid functions in matlab. This matlab function plots vectors as arrows at the coordinates specified in each. If f is singular for some points on the grid, then ezsurf omits these points. Alternatively, if you have a large data set, you can use griddedinterpolant instead of interp3. The function uses the default direction for the light source and the default lighting coefficients for the shading model. The function plots the values in matrix z as heights above a grid in the xyplane defined by x and y. The surface always passes through the data points defined by x and y. In the curve fitting app, select x data, y data and z data curve fitting app creates a default interpolation fit to the data. V contains the corresponding function values at each sample point. Reader philip batchelor thought i could do better in describing meshgrid. X,y meshgridx,y returns 2d grid coordinates based on the coordinates contained in vectors x and y. Converting from volume or image to meshgrid matlab. Are you trying to preallocate the memory for the y matrix that is generated by the meshgrid function. Rotating a 3d meshgrid with rotation matrix matlab. Problem performing calculation with meshgrid variables. Interpolation for 2d gridded data in meshgrid format. Mesh surface plot matlab mesh mathworks switzerland. The first array contains the xcoordinates, the second array contains the ycoordinates, and the third array contains the zcoordinates. X,y meshgrid x,y returns 2d grid coordinates based on the coordinates contained in vectors x and y. The grid represented by the coordinates x and y has lengthy rows and lengthx columns. The default color of 0 0 0 corresponds to black boundaries an rgb triplet is a threeelement row vector whose elements specify the intensities of the. The minimum data value maps to the first color value in the colormap and the maximum data value maps to the last color value in the colormap. The edge colors vary according to the heights specified by z. Xq and yq contain the coordinates of the query points. Vq interp2x,y,v,xq,yq returns interpolated values of a function of two variables at specific query points using linear interpolation. Because of this, ndgrid is better suited to multidimensional problems that arent spatially based, while meshgrid. Generate x and y matrices for threedimensional plots. I never understood why cant there be 2 apis for this in mathematica. How to define a meshgrid x,y between the intersection of two oblique lines. Use meshgrid to define grid compatible with 2d, 3d matlab plotting. Im using meshgrid to create variables x and y for the x and y positions of pixels on the detector, and performing a calculation given in hammersley 1995 to give me an equivalent diffraction angle q2 at any pixel on the detector, using the following code. Meshgrid stretching surface plot matlab answers matlab. Data computed on grids coming from meshgrid will require being transposed please see the example thereafter. Fourquadrant inverse tangent matlab atan2 Instead, you must construct the full grid using meshgrid. 
Most of scilab 2d or 3d graphical functions like champ, grayplot, sgrayplot, plot3d, contour, etc work with grids generated with ndgrid, not from meshgrid. Matlab wants each coordinates in one separate vector, so one passes 3 separate vectors, while mathematica wants the coordinates in a matrix each row has each x,y,z coordinate. Remember y is rows and thus will be the first index. Edge color, specified as the commaseparated pair consisting of edgecolor and a color name, an rgb triplet, or none. This example shows how to create a 2d grid using meshgrid and ndgrid. X,y meshgridx,y transforms the domain specified by vectors x and y into arrays x and y that can be used for the evaluation. The functions provided here can generate a meshgrid from a domain, convert that to a matrix of column. You can use the meshgrid command to generate two arrays containing the x and ycoordinates at each position in a rectilinear grid. X,y meshgridx,y transforms the domain specified by vectors x and y into arrays x and y, which can be used to evaluate. Meshgridlike function to generate 4d arrays in matlab. You can then display the function by typing z return or by using the mesh or the image command. The meshgrid function transforms the domain specified by two vectors, x and y, into matrices x and y. Well, maybe i was being a bit lazy there, as well as unfair to our writers. Surface plot with colormapbased lighting matlab surfl. The calculated velocities are in a matrix whose dimensions are 246x389 and true velocities are in a matrix whose dimensions are Interpolating gridded data gridded data representation. I had commented that meshgrid is kind of hard to describe in words. Choose a different model type using the fit category dropdown list, e. I assume that x and y have the same values as x1 and y1, but in a different order. This matlab function returns 2d grid coordinates based on the coordinates contained in vectors x and y. X is a matrix where each row is a copy of x, and y is a matrix where each column is a copy of y. Syntax x,y meshgridx,y x,y meshgridx x,y,z meshgridx,y,z description x,y meshgridx,y transforms the domain specified by vectors x and y into arrays x and y, which can be used to evaluate functions of two variables and threedimensional meshsurface plots. This expansion is equivalent to calling meshgrid to generate matrices from. The griddata function interpolates the surface at the query points specified by xq,yq and returns the interpolated values, vq. Meshgrid is to be used for visualizing data and should be used primarily for when plotting two or three dimensional data. I must say i find matlab little easier when it comes to this part. Triangular mesh plot matlab trimesh mathworks australia. To use the nearest data point value, specify the interpolation method as nearest. When x is a vector, the values must be strictly increasing or decreasing. Rectangular grid in nd space matlab ndgrid mathworks. X,y meshgridx,y transforms the domain specified by vectors x and y into arrays x and y, which can be used to evaluate functions of two variables and three. It seems like most matlab functions assume any grids are generated in the meshgrid format, not the ndgrid format, and i have a hard time seeing why meshgrid would ever be preferable. I have a 4 dimension mathematical function i want to implement in matlab, but the meshgrid function works for at most 3 dimensions. So it returns an array 3575321072 which is 357 rows by 532 columns by 1072 slices. 
Interpolate 2d or 3d scattered data matlab griddata. Syntax x,y meshgrid x,y x,y meshgrid x x,y,z meshgrid x,y,z description x,y meshgrid x,y transforms the domain specified by vectors x and y into arrays x and y, which can be used to evaluate functions of two variables and threedimensional meshsurface plots. In a future release, interp3 will not accept mixed combinations of row and column vectors for the sample and query grids. Interpolation for 3d gridded data in meshgrid format. Create a slice plane orthogonal to the xaxis at the value 0. How to from my vector coordinate to fit my meshgrid matrix. This matlab function returns an nby1 vector y containing the probability density function pdf of the ddimensional multivariate normal distribution with zero mean and identity covariance matrix, evaluated at each row of the nbyd matrix x. The function plots the values in matrix z as heights above a grid in the xy plane defined by x and y. The results always pass through the original sampling of the function. You could do a search for each pair x1,y1, but its faster to sort the coordinates and then reshape z. Since the volume data is not defined for x values of 0. The values in each array vary along a single dimension and are constant along the other dimensions. X and y contain the coordinates of the sample points. Does anyone else find that ndgrid is immensely more useful and easy to understand than meshgrid is. To use the plot function in matlab, you should first make sure that the matricesvectors you are trying to use are of equal dimensions. Conversions between nd rectangular domain, meshgrid and matrix of column vectors. Vectorized meshgrid file exchange matlab central mathworks. Basic drawing elements used by matlab to display data. Calling sequence x, y meshgrid x x, y meshgrid x, y x, y, z meshgrid x, y, z arguments x, y, z. I am trying to run the following code oz interp2ix,iy,iz,ox,oy. How to plot 3d pdf without meshgrid matlab answers. Multivariate normal probability density function matlab. The atan2 function follows the convention that atan2x,x returns 0 when x is mathematically zero either 0 or 0. Learn more about mesggrid, vector, surf, plot3, image matlab. You can understand ordered data by thinking about how. That is, the isosurface connects points that have the specified value much the way contour lines connect points of equal elevation. Learn more about meshgrid, rotation matrix, three dimensions, 3d. While matlab may not be as fast as c, there are ways to bring it closer. However, ndgrid supports 1d to nd while meshgrid is restricted to 2d and 3d. For interp3, a full grid consists of three arrays whose elements represent a grid of points that define a region in r 3. Ndgrid is to be used for higher dimensionality use and for when you want the results to reflect matrixarray notation. Analytical plotting with symbolic math toolbox open live script the fplot family accepts symbolic expressions and equations as inputs enabling easy analytical. The column and row indices of z are the x and y coordinates in the plane, respectively. Specify the interpolation method for the data values. Hi brothers, i am practicing a little bit in julia now, and i want to see how meshgrid if there is any works. P atan2y,x returns the fourquadrant inverse tangent tan1 of y and x, which must be real. Analytical plotting with symbolic math toolbox matlab. I need to compare true velocities with calculated velocities. 
MATLAB performs a linear transformation on the intermediate values to map them to the current colormap.
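Since several of the snippets above contrast the meshgrid and ndgrid orientations, here is a small self-contained C++ sketch (illustrative only; it is not MATLAB or Scilab code, and the sample vectors are made up) that builds meshgrid-style X and Y arrays from two coordinate vectors; the ndgrid convention simply swaps which input varies along the rows:

#include <cstddef>
#include <cstdio>
#include <vector>

int main()
{
    const std::vector<double> x = {1.0, 2.0, 3.0, 4.0}; // 4 columns
    const std::vector<double> y = {10.0, 20.0, 30.0};   // 3 rows

    const auto rows = y.size();
    const auto cols = x.size();

    // meshgrid convention: X and Y are rows x cols,
    // X(i,j) = x[j] (each row is a copy of x), Y(i,j) = y[i] (each column is a copy of y).
    std::vector<std::vector<double>> X(rows, std::vector<double>(cols));
    std::vector<std::vector<double>> Y(rows, std::vector<double>(cols));
    for (std::size_t i = 0; i < rows; ++i)
        for (std::size_t j = 0; j < cols; ++j)
        {
            X[i][j] = x[j];
            Y[i][j] = y[i];
        }

    // ndgrid convention would instead produce cols x rows arrays with
    // X(i,j) = x[i] and Y(i,j) = y[j], i.e. the transposes of the arrays above.
    printf("X(0,2) = %.1f, Y(2,0) = %.1f\n", X[0][2], Y[2][0]); // prints 3.0 and 30.0
    return 0;
}

The only difference between the two conventions is which index runs over the first input vector, which is why converting between them is just a transpose.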
{"url":"https://critalermeb.web.app/370.html","timestamp":"2024-11-05T06:39:29Z","content_type":"text/html","content_length":"15853","record_id":"<urn:uuid:c5ea2c08-b0bc-4c78-9b25-8dd58cb39fda>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00299.warc.gz"}
How do you integrate int 3/(x^2+x-2) using partial fractions? | HIX Tutor

How do you integrate $\int \frac{3}{x^2+x-2}\,dx$ using partial fractions?

Answer 1

$= \ln\frac{x-1}{x+2} + C$

$\int \frac{3}{x^2+x-2}\,dx$

Well, $x^2+x-2 = (x+2)(x-1)$, thus it is possible for us to state that

$\frac{3}{x^2+x-2} = \frac{A}{x+2} + \frac{B}{x-1} = \frac{A(x-1) + B(x+2)}{x^2+x-2}$

$\implies A(x-1) + B(x+2) = 3$

Setting $x = 1$ gives $3B = 3$, so $B = 1$; setting $x = -2$ gives $-3A = 3$, so $A = -1$.

Consequently, the integral becomes

$\int \left(\frac{1}{x-1} - \frac{1}{x+2}\right)dx = \ln(x-1) - \ln(x+2) + C = \ln\frac{x-1}{x+2} + C$

Answer 2

To integrate $\frac{3}{x^2 + x - 2}$ using partial fractions, follow these steps:
1. Factor the denominator $x^2 + x - 2$ into $(x + 2)(x - 1)$.
2. Write the original expression as a sum of two fractions with unknown numerators, over each of the factors: $\frac{A}{x + 2} + \frac{B}{x - 1}$.
3. Clear the denominators by multiplying both sides by $(x + 2)(x - 1)$.
4. Simplify and equate coefficients to find the values of $A$ and $B$.
5. Once you have found $A$ and $B$, rewrite the original expression using the partial fractions.
6. Integrate each partial fraction separately.
7. Finally, add the results of the integrations together to get the final solution.

The integration of each partial fraction $\frac{A}{x + 2}$ and $\frac{B}{x - 1}$ can be done using standard integration techniques, such as logarithmic substitution or direct integration.
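Carrying out those steps explicitly reproduces Answer 1; in LaTeX:

\[
\int \frac{3}{x^{2}+x-2}\,dx
  = \int\left(\frac{1}{x-1}-\frac{1}{x+2}\right)dx
  = \ln|x-1|-\ln|x+2|+C
  = \ln\left|\frac{x-1}{x+2}\right|+C .
\]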
{"url":"https://tutor.hix.ai/question/how-do-you-integrate-int-3-x-2-x-2-using-partial-fractions-8f9afa1649","timestamp":"2024-11-14T10:21:36Z","content_type":"text/html","content_length":"578701","record_id":"<urn:uuid:c3b549bb-badb-4c1e-a924-5f0cf16d9d66>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00422.warc.gz"}
100 grain

what is the best caliber to get the best bang for your bullet when using a 100 gr. shell? 6mm, 25-06, 243, 270 or any cal. I'm not thinking of. It is for hunting in the Texas hill country or West Texas. thanks

Placing my vote for the 257 Roberts. In plus P guise it really shines!

260 Member Posts: 1,133 ✭
IMHO i think the .244 cal in 100 gr. would give you the best SECTIONAL DENSITY

I would place my bet with a 25-06, 25 WSSM (my personal favorite 25 cal), 25 Souper, 257 Roberts, or 250 Savage. Of course I would load it with a Barnes Triple Shock.
EDIT 1 Don't flatter yourself. Besides, I qualified my answer when I said Barnes Triple Shock bullet. With the construction of that bullet, and the expansion, and weight retention, I don't need to tell you what to do with your sectional density.

What is the attraction of a 100 grain bullet? (Not "shell.") Sure, a 100 grain .243 Win, 6mm Rem, .250 Savage, .257 Roberts, or .25-06 will do anything to a deer that needs doing. (Check that rifle and ammunition are available enough for your needs in anything on that list but .243.) But if I were going hunting with a caliber outside of that range, it would have a different weight bullet. Anything using a .264" bullet would get either a 129 or 140 grain bullet, a .270 a 130, and a .30 of most any sort a 150. A .22 centerfire will kill a deer if you are a good game shot, using a 60 grain bullet or so.

quote: Originally posted by jefdan
what is the best caliber to get the best bang for your bullet when using a 100 gr. shell? 6mm, 25-06, 243, 270 or any cal. I'm not thinking of. It is for hunting in the Texas hill country or West Texas. thanks

Are you looking for the best caliber and grain of bullet for your' Buck, one which will do the job close and long range under varying conditions and terrain, on different types of game and not tear a hole in your' wallet? If so, then the .270 in 130 grain recommended by Hawk Carse is probably the best for your money that can be cheaply purchased at most Sporting Good Retailers or here on GB!

JustC Member Posts: 16,056 ✭✭✭
as stated, the 100gr pill in a 6mm will provide the highest sectional density of any 100gr offering in any caliber. When pushed by a 243win, it will smack deer down like lightning. However, in a 257wthby it will exceed the velocity of the 243, and do as well or even better, but at the cost of more powder and throat erosion.

260 Member Posts: 1,133 ✭
thank you justc many folks just don't really understand things.
quote: Originally posted by JustC
as stated, the 100gr pill in a 6mm will provide the highest sectional density of any 100gr offering in any caliber. When pushed by a 243win, it will smack deer down like lightning. However, in a 257wthby it will exceed the velocity of the 243, and do as well or even better, but at the cost of more powder and throat erosion.

quote: Originally posted by 260
thank you justc many folks just don't really understand things.
Originally posted by JustC
as stated, the 100gr pill in a 6mm will provide the highest sectional density of any 100gr offering in any caliber. When pushed by a 243win, it will smack deer down like lightning. However, in a 257wthby it will exceed the velocity of the 243, and do as well or even better, but at the cost of more powder and throat erosion.

I really understand things more than you think!
Although the OP posted a question for 100 grain bullets, they also posed the question for a .25-06, .243, .270 and/or any other caliber! When considering the cost of purchasing low-cost off-the-shelf ammo for these calibers, loads with 100 grain bullets are virtually non-existent. I know for a fact that 100 grain loads in .270 are not readily available and believe the same exists for the .25-06. These are mainly available as professional...Custom loads or reloads! When answering the OP's question, myself and I believe Hawk Carse were both giving the best and cheapest load available off-the-shelf. Besides, a 100 grain bullet shot from a .25-06 has a significantly higher Ballistic Coefficient, higher Velocities and greater Energy Foot/lbs than a 100 grain bullet shot from a .243! I am a big proponent of caliber vs velocity vs mass (grain of bullet) when it comes to getting the job done for specific game! I prefer rifles chambered in 7mm Remington Magnum and 7mm Remington Ultra Magnum and only use the 140 grain Core-Lokt PSP Remington Factory loads. Now, if you want a game dropping caliber and bullet combo for ranges from zero to 1,000 yards or more, for any conditions or terrain, then try this combo!

JustC Member Posts: 16,056 ✭✭✭
ooooooh, I didn't know we were taking cost or availability over the shelf into it[:I] the sectional density of a 100gr .244" pill will always be more than a 100gr pill at any diameter larger than .244". That is simple physics. The length of a bullet in any given weight sets that factor. Take 2 bullets manufactured in the same manner, by the same manufacturer, both in 100grs. One is .244" in dia, and the second is .257" in dia. The smaller dia bullet will have a higher sectional density than the larger bullet since it is longer and thinner. That does NOT take into account velocity differences, in either caliber. For a similar comparison, one should, let's say, take a 257-06 (25-06) 100gr pill and test it against a 6mm-06 100gr pill. If the bullets are the same, and leave at equal velocities, the 6mm pill will penetrate deeper due to a higher sectional density.

quote: Originally posted by JustC
ooooooh, I didn't know we were taking cost or availability over the shelf into it[:I]
I may have read into the OP's question wrong but that is how I answered!

the sectional density of a 100gr .244" pill will always be more than a 100gr pill at any diameter larger than .244". That is simple physics. The length of a bullet in any given weight sets that factor. Take 2 bullets manufactured in the same manner, by the same manufacturer, both in 100grs. One is .244" in dia, and the second is .257" in dia. The smaller dia bullet will have a higher sectional density than the larger bullet since it is longer and thinner. That does NOT take into account velocity differences, in either caliber. For a similar comparison, one should, let's say, take a 257-06 (25-06) 100gr pill and test it against a 6mm-06 100gr pill. If the bullets are the same, and leave at equal velocities, the 6mm pill will penetrate deeper due to a higher sectional density.

Again, I was responding with available off-the-shelf loads and won't argue sectional densities of the two, since again, I was referring to these and not reloads or professional loads. I'll agree with JustC about SD. However, I see 100 gr. .257 cal on the shelf around here... However, the original question is pretty much the same argument/question, over and over again, of two deer lying there dead, one killed with the .243" dia.
bullet and the other killed with a .257" bullet, and asking which one is deader? I guess that would go back to the OP. You can take ten different cartridges and load them up and all will kill a deer or elk very effectively, and still this argument goes on. Why? Somebody's gotta talk up or down about someone else's choice? In the inimitable words of ESPN's football reviews on the week: C'Mon Man!.... The .30-30 is still killing deer. Shoot what you like! Cartridges come in different sizes and heritages. I'm thinking it's heritages that draw the most attention on this. But they all shoot well. Get what you like and use it.

FWIW, a 100 gr. bullet is available in .243 (Fed, Nos, Horn, Prvi, Win), 6mm Rem (Fed, Horn, Win) and 25-06 (Fed, Horn, Nos, Prvi, Win). Sorry .257 Rob. guys, the market doesn't sell enough to make this in your cartridge anymore. Too bad, the 100 gr. and especially 75-87 gr. really shoot well here. A ton of velocity for not much wear and tear on your barrel. Great times shootin' these.
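For reference, the sectional-density claim above can be checked with the standard formula SD = weight (lb) / diameter (in)^2, using a 100 gr bullet (100 gr = 100/7000 lb):

\[
\mathrm{SD}_{.243}=\frac{100/7000}{0.243^{2}}\approx 0.242,\qquad
\mathrm{SD}_{.257}=\frac{100/7000}{0.257^{2}}\approx 0.216,\qquad
\mathrm{SD}_{.277}=\frac{100/7000}{0.277^{2}}\approx 0.186 .
\]

So at equal weight the 6mm (.243") bullet does carry the highest sectional density of the calibers discussed, which is the point JustC is making; whether that matters more than velocity or bullet construction is the judgment call debated above.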
{"url":"https://forums.gunbroker.com/discussion/1826782/100-grain","timestamp":"2024-11-06T18:50:45Z","content_type":"text/html","content_length":"308010","record_id":"<urn:uuid:239ae780-44eb-477e-9262-bf1151a0bf75>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00286.warc.gz"}
Advanced Scientific Mathematics

Welcome to the Advanced Scientific Mathematics section of stemNRICH. This contains a set of activities designed to develop the advanced applied mathematical skills needed to make the most of the study of the physical sciences or engineering at university. These problems naturally follow on from the more elementary core scientific mathematics section of stemNRICH and are primarily intended for those who intend to study physics, engineering or applied mathematics at university.

Rates of change
  Maths Filler 2: Practise looking at rates of change as this vessel fills with water.
  Immersion: Various solids are lowered into a beaker of water. How does the water level rise in each case?
  Brimful: Rotate the curves to make some mathematical vessels.
  Brimful 2: Which of these mathematical flasks will eventually fill up with water?

Curves
  Curve Fitter: Use your skill to try to fit a cubic equation through these three points.
  Curve Fitter 2: Can you make a cubic which has a certain distance between the turning points?
  Implicitly: Can you find the shape of this implicitly defined function?
  Whose line graph is it anyway?: Which line graphs, processes and equations go together?
  Scientific Curves: Can you sketch these difficult curves, which have uses in mathematical modelling?

Calculus
  Calculus is involved in many problems on stemNRICH. See the physNRICH and engNRICH pages for various particular examples.
  Calculus Countdown: Can you hit the target functions using a set of input functions and a little calculus and algebra?
  Integration Matcher: Which curves, integrals and differentials go together?
  Operating Machines: What functions can you make using the function machines RECIPROCAL and PRODUCT and the operator machines DIFF and INT?

Differential equations
  You will also find lots of differential equations problems in various sections on the physNRICH and engNRICH pages.
  Differential Equation Matcher: Match the descriptions of physical processes to these differential equations.
  Reaction Rates: Explore the possible mathematical solutions to the non-linear order of reaction equation from chemistry.

Series and expansions
  Production Equation: Each week a company produces X units and sells p per cent of its stock. How should the company plan its warehouse space?
  Stirling Work: See how enormously large quantities can cancel out to give a good approximation to the factorial function.
  What Do Functions Do for Tiny x?: How do these familiar functions behave for very small values?
  Bessel's Equation: Get further into power series using the fascinating Bessel's equation.

Powers, roots and logarithms
  See also the Logarithms and pH problems on the chemNRICH pages.
  Power Match: Can you locate these values on this interactive logarithmic scale?
  Debt Race: Who will be the first investor to pay off their debt?

Probability and distributions
  pdf Matcher: What scientific stories can you match to these pdf curves?
  Circle PDF: How can an arc of a circle be a pdf?
  Scale Invariance: By exploring the concept of scale invariance, find the probability that a random piece of real-world data begins with a 1.
  Into the Exponential Distribution: Get into the exponential distribution through an exploration of its pdf.
  Into the Normal Distribution: Get into the normal distribution through an exploration of its pdf.

Statistics
  Stats Statements: This question gives you 10 statistical statements. Develop your statistical intuition by asking yourself: are they sometimes, always, nearly always, almost never or never true?
  Overbooking: Why do airlines overbook?
  The Wrong Stats: Why MUST these statistical statements be at least a little bit wrong?
  Aim High: How much wheat should this farmer plant to minimise his expected loss?
  Time to Evolve 2: How would you model the time between your birth and that of your grandfather?

Trigonometry
  Flight Path: Use simple trigonometry to calculate the distance along the flight path from London to Sydney.
  Loch Ness: Draw graphs of the sine and modulus functions and explain the humps.
  Spherical Triangles on Very Big Spheres: Find out about spherical triangles and decide if telecoms engineers need to know about such things.
  Taking Trigonometry Seriesly: Look at the advanced way of viewing sin and cos through their power series.

Complex numbers
  A nice introduction to complex numbers, including many exercises for the reader.
  More on the way.

Vectors
  Spotting the Loophole: A visualisation problem in which you search for vectors which sum to zero from a jumble of arrows. Will your eyes be quicker than algebra?
  Vector Walk: Starting with two basic vector steps, which destinations can you reach on a vector walk?
  Polygon Walk: Follow Ulaf, Vicky and Wilber on a vector walk to determine the locations nearest to the origin.
  Air Routes: An application of vectors and scalar products in a very practical setting.
  Explore the meaning of the scalar and vector products and see how the two are related.

Vectors and matrices
  Square Pair: What quadrilaterals can you transform this pair of squares into?
  Transformations for 10: Explore the mathematics of matrix transformations with these 10 individual questions.
  Matrix Meaning: Explore the algebraic and geometric properties of matrices with these 5 individual questions.
  Crystal Symmetry: Use vectors and matrices to explore the symmetry properties of Caesium Chloride.
  Nine Eigen: Explore how matrices can fix vectors and vector directions.
  Can you make matrices which will fix one lucky vector and crush another to zero?

Numerical methods
  Root Hunter: In this short problem, try to find the location of the roots of some unusual functions by finding where they change sign.
  Building Approximations for Sin(x): Build up the concept of the Taylor series.

Decision making and algorithms
  Testing Strategy: Investigate ways in which you could implement a scoring system for a multiple choice test.
{"url":"http://nrich.maths.org/advanced-scientific-mathematics","timestamp":"2024-11-07T02:55:43Z","content_type":"text/html","content_length":"82143","record_id":"<urn:uuid:36f17973-a377-4dbe-8d1e-a4f73138e487>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00806.warc.gz"}
Using Power Laws to predict when the Bitcoin price will hit $1 million

The following is a guest post by Rajagopal Menon, Vice President of WazirX.

When a bull market arrives, models emerge to predict the price of Bitcoin. During the last bull market of 2021, the stock-to-flow (S2F) model became a feature of the season. The model created by Plan B assessed asset scarcity by comparing inventory to annual production. Applying the S2F model to Bitcoin highlighted its "digital gold" potential and provided long-term price predictions based on scarcity. However, the S2F model would disappear in the crypto winter of 2022. But don't worry. The current bull market has seen the emergence of a new model called the power law model, which claims to predict the price of Bitcoin with amazing accuracy.

Understand power laws

In a world seemingly filled with chaos and randomness, scientists have uncovered hidden patterns and relationships known as power laws. These laws provide a framework for understanding how different phenomena interact and reveal consistent mathematical patterns that govern different aspects of the universe.

Power law in daily life

Power laws are interesting mathematical relationships that appear in numerous phenomena and provide insight into the underlying simplicity of complex systems. They describe how two quantities are related to each other, such that a change in one quantity causes a proportional change in the other quantity. This relationship spans many scales, from the microcosm to the cosmos, and affects biology, society, technology, and natural phenomena.

Animal size limits

Galileo's square-cube law is a classic example of a power law in nature and explains how an animal's size affects its strength. As animals grow larger, volume and weight increase much faster than physical strength. This law sets natural limits and explains why large animals have thick bones and why the largest animals are found in aquatic environments where their weight is offset by buoyancy.

Metabolic rate

Max Kleiber's work on metabolic rate further demonstrates the applicability of power laws. This reveals that an organism's metabolic rate is proportional to its mass to the 3/4 power, indicating that larger animals are more energy efficient. This principle has profound implications for our understanding of species life cycles, growth rates, and sustainability.

Natural phenomena and human activities

Power laws govern a variety of phenomena, from the distribution of earthquake sizes to the frequency of words in a language. They explain why we observe a small number of important events along with a large number of smaller cases. For example, Zipf's law describes the frequency of words in a language and emphasizes that common words occur disproportionately often compared to less frequent words.

Beyond natural phenomena

Power laws extend to human activities such as economics, finance, and technology. They elucidate the distribution of wealth, where a small number of individuals own a significant portion of the wealth. A power law in technology describes how content interacts on the Internet, with a small number of popular nodes and a large number of less popular nodes forming a long tail of the distribution.

Bitcoin power law

Astrophysicist Giovanni Santostasi discovered this connection. He said 15 years of data show that Bitcoin also follows the power law principle. Santostasi first shared the power law model on the r/Bitcoin subreddit in 2018.
However, the model was revived in January after financial YouTuber Andrei Jikh mentioned it to his 2.3 million subscribers in a video. Giovanni's theory suggests that Bitcoin prices are not as random as they seem. Despite the randomness, in the long run the price of Bitcoin follows a certain mathematical model. It's not just a mathematical formula where someone drew a line. Instead, it follows a power law like the ones observed throughout the universe. The yellow line represents the current price and the red line represents the support line. A support line is a level below which Bitcoin typically does not fall. The green line is the linear regression line, which is like the fair value price that Bitcoin will eventually return to, and the purple line is the resistance line where Bitcoin typically reaches its maximum value.

Predicting the future of Bitcoin

Santostasi's power law model graphs the trajectory of Bitcoin's price with amazing accuracy. It shows the current price of Bitcoin, a support line showing the level Bitcoin usually does not fall below, a linear regression line showing the fair value price, and a resistance line showing the level Bitcoin usually reaches before falling. This model highlights Bitcoin's surprisingly linear growth, especially when outliers are removed. Despite occasional fluctuations, Bitcoin's overall trajectory follows a clear pattern reminiscent of other phenomena governed by power laws.

Impact on investors

The power law model provides interesting insight into Bitcoin's potential future peaks. According to Santostasi's analysis, Bitcoin could peak at $210,000 in January 2026 and then fall to around $60,000. He went on to predict that Bitcoin will be worth $1 million by July 2033. While mathematical models provide valuable insights, they are not free from error and may not account for unforeseen events that can significantly impact prices. "All models are wrong, but some are useful" means that the models may not be perfect, but can still provide valuable insights. Models such as the power law model and the stock-to-flow model for predicting Bitcoin prices have flaws and limitations. For example, Julio Marino of CryptoQuant pointed out problems with power law models, such as underestimating errors and giving a misleading impression of accuracy. Interestingly, both the power law and stock-to-flow models face similar criticisms. Despite their flaws, they have historically made roughly the same predictions for Bitcoin's price. However, over time, the predictions can diverge. If these models are correct, the question arises: why bother using traditional investment strategies like 60/40 portfolios? Some argue that a new model that explains Bitcoin's movements could yield better returns. Some may think these models are worthless, but others, like the person I'm talking to, think they still have value. The scarcity caused by Bitcoin's fixed supply is contributing to the price rise. Additionally, factors such as M2 growth also affect the Bitcoin price. Although models provide useful insights, they cannot predict the future. Even if the model is flawed, Bitcoin's trajectory appears to be upwards. Therefore, it is essential to consider these models, but also to be aware of their limitations.
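For readers curious what "fitting a power law" involves in practice, here is a minimal sketch (this is not Santostasi's actual code, and the data points below are placeholders, not real Bitcoin prices) that fits price = A * t^n by ordinary least squares on the log-transformed values:

#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

int main()
{
    // Placeholder (t, price) pairs: t = days since an arbitrary start, price in USD.
    const std::vector<double> t     = {365, 730, 1460, 2920, 4380};
    const std::vector<double> price = {5, 120, 430, 6500, 30000};

    // Fit log(price) = a + n*log(t) with simple linear least squares.
    const std::size_t N = t.size();
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (std::size_t i = 0; i < N; ++i)
    {
        const double X = std::log(t[i]);
        const double Y = std::log(price[i]);
        sx += X; sy += Y; sxx += X * X; sxy += X * Y;
    }
    const double n = (N * sxy - sx * sy) / (N * sxx - sx * sx); // power-law exponent
    const double a = (sy - n * sx) / N;
    const double A = std::exp(a);                               // scale factor

    printf("fitted model: price ~= %.3g * t^%.2f\n", A, n);
    return 0;
}

On a log-log plot such a fit is a straight line, which is the visual signature of the power-law behaviour the article describes.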
{"url":"https://thefinvest.com/using-power-laws-to-predict-when-the-bitcoin-price-will-hit-1-million/","timestamp":"2024-11-09T03:16:38Z","content_type":"text/html","content_length":"69535","record_id":"<urn:uuid:c50c44ac-818c-40ff-b8eb-8f9f685d35e1>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00479.warc.gz"}
Operators new/delete

I decided to talk a little about the memory management operators in native C++ because there are some thoughts and practices that I keep in mind. Well, I completely ignore the default new/delete; I pretend that these things don't exist in the language. That's not only because the C++ guidelines and good practices encourage you to keep object resources well defined and automatic, it's also a portability thing: game engines need to use the memory management functions provided by the target platform, otherwise things may not work properly or may just crash because the alignment for that platform/architecture was not set.

Here's a simple object:

// Headers used by the snippets in this post:
#include <cstdint>     // uint32_t
#include <cstdio>      // printf
#include <cstdlib>     // malloc, free, EXIT_SUCCESS
#include <cstring>     // strcpy
#include <memory>      // std::uninitialized_default_construct_n, std::destroy_n
#include <new>         // placement new
#include <type_traits> // std::is_polymorphic
#include <utility>     // std::forward

struct Object
{
    Object(const char* Name, const uint32_t Flags = 0)
        : Flags{Flags}
    {
        strcpy(this->Name, Name);
    }

    char Name[64] = { 0 };
    uint32_t Flags = 0;
};

In the main file we create it:

int main()
{
    Object* lNewObject = new Object("My name");

    delete lNewObject;

    return EXIT_SUCCESS;
}

The Object creation does not make clear whether a user-defined operator new or the default C++ implementation is being used. Of course, it is an abstraction, but I don't think it is a good practice, since the user can simply assume that the default C++ operator new implementation will be called, and this can be wrong. Another thing is that there's no need to dynamically allocate memory for an object that doesn't have virtual functions or a virtual destructor that can be accessed through its base pointer. So, what can be done is to implement operator new with an assertion to prevent objects that aren't polymorphic from being "new allocated".

template < typename T >
struct PolymorphicNewDelete
{
    static void* operator new(size_t Size)
    {
        static_assert(std::is_polymorphic<T>::value, "no need to dynamic allocate.");
        return ::operator new(Size);
    }

    static void operator delete(void* Ptr)
    {
        static_assert(std::is_polymorphic<T>::value, "no need to free.");
        ::operator delete(Ptr);
    }
};

struct Object : PolymorphicNewDelete<Object>
{
    // PREVIOUS CODE
};

PolymorphicNewDelete implements operator new and operator delete with just "is_polymorphic" static asserts. And then, using CRTP, we make Object inherit from it.

I prefer to use placement operator new with create/destroy functions; I believe that it's more direct and precise.

inline void* Allocate(size_t Size)
{
    return malloc(Size); // Platform memory allocation function
}

inline void* Allocate(size_t Size, size_t Alignment)
{
    return malloc(Size); // Platform memory allocation function (alignment not honored here)
}

inline void Free(void* Ptr, size_t Size)
{
    free(Ptr); // Platform memory deallocation function (free() returns void, so Free does too)
}

So, here I made basic memory functions that call the platform memory functions. The one that takes an alignment is just the same as the one that doesn't, but we can also use the Windows API _aligned_malloc and _aligned_free, or on Linux posix_memalign or aligned_alloc.

template < typename T, typename ... TArgs >
T* Create(TArgs&&... Args)
{
    static_assert(std::is_polymorphic<T>::value, "no need to dynamic allocate.");

    void* Memory = Allocate(sizeof(T), alignof(T));
    if (!Memory)
    {
#ifndef NDEBUG
        printf("Failed to allocate '%zu' bytes\n", sizeof(T));
#endif
        return nullptr;
    }

    return new (Memory) T{std::forward<TArgs>(Args)...};
}

template < typename T >
void Destroy(T* Ptr)
{
    static_assert(std::is_polymorphic<T>::value, "no need to free.");

    Ptr->~T();            // run the destructor
    Free(Ptr, sizeof(T)); // then release the memory
}

The Create function allocates memory for an object of type T and constructs it using placement operator new, but first it checks whether the object of type T is polymorphic; in other words, the Create function only compiles if the object of type T is polymorphic.
The Destroy function basically calls the destructor and frees the memory, but only if the object of type T is polymorphic.

template < typename T >
T* CreateArray(size_t Size)
{
    T* lArray = static_cast<T*>(Allocate(sizeof(T) * Size, alignof(T)));
    if (!lArray)
    {
#ifndef NDEBUG
        printf("Failed to allocate '%zu' bytes\n", sizeof(T) * Size);
#endif
        return nullptr;
    }

    // uninitialized_default_construct_n returns the end iterator, so return lArray itself.
    std::uninitialized_default_construct_n(lArray, Size);
    return lArray;
}

template < typename T >
void DestroyArray(T* Array, size_t Size)
{
    std::destroy_n(Array, Size);
    Free(Array, sizeof(T) * Size);
}

CreateArray always allocates a C array and default-constructs all the elements. DestroyArray destroys all the elements and frees the memory. A downside of the CreateArray and DestroyArray functions is that the user always needs to specify the count value for the array. If we pass zero or a value greater than the allocated array size to the DestroyArray function, there will be a segmentation fault. We can use fat pointers to solve this. Fat pointers are basically pointers that carry additional information with them.

template < typename T >
struct ArrayPtr
{
    size_t Size;
    T* Ptr;
};

template < typename T >
ArrayPtr<T> CreateArray(size_t Size)
{
    /* CODE */
}

template < typename T >
void DestroyArray(ArrayPtr<T> Array)
{
    /* CODE */
}

Here's a very structured and explicit way to do it. The fat pointer to an array that I will show works basically like this, but it is compacted into a single allocation holding the count and the data, with the data pointer shifted.

template < typename T >
T* CreateArray(size_t Size)
{
    const auto lSize = sizeof(T) * Size + sizeof(size_t);

    auto lMemory = static_cast<size_t*>(Allocate(lSize));
    if (!lMemory)
    {
#ifndef NDEBUG
        printf("Failed to allocate '%zu' bytes\n", lSize);
#endif
        return nullptr;
    }

    *lMemory = Size; // Set the array size in the first location of the memory buffer

    auto lArray = reinterpret_cast<T*>(lMemory + 1ull); // Advance the pointer by one unit of (size_t*)
    std::uninitialized_default_construct_n(lArray, Size);
    return lArray;
}

Now CreateArray returns a fat pointer to the target array; to get the array size we can just read the preceding size_t.

template < typename T >
size_t ArraySize(T* Array)
{
    return *(reinterpret_cast<size_t*>(Array) - 1ull);
}

So, DestroyArray looks like this:

template < typename T >
void DestroyArray(T* Array)
{
    const auto lSize = ArraySize(Array);

    std::destroy_n(Array, lSize);
    // Free the original allocation: the size header sits just before the array.
    Free(reinterpret_cast<size_t*>(Array) - 1ull, sizeof(T) * lSize + sizeof(size_t));
}

It just gets the array size so it can destroy the elements. In conclusion, I don't think operator new/delete in its default form is useful when you want control over your program; using functions like the ones above is simpler, more controlled and more explicit. Of course, there is no need to use fat pointers to allocate arrays or memory blocks; using structures with a pointer and a size makes the API even more explicit and simpler, I would say.
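To close the loop, here is a small usage sketch. The Widget type is hypothetical, and it assumes the definitions above are in scope, with the final fat-pointer versions of CreateArray/ArraySize/DestroyArray being the ones compiled in:

struct Widget : PolymorphicNewDelete<Widget>
{
    explicit Widget(int Value = 0) : Value{Value} {}
    virtual ~Widget() = default;

    int Value;
};

int main()
{
    // Polymorphic object: goes through Create/Destroy (placement new + explicit destructor call).
    Widget* lWidget = Create<Widget>(42);
    if (lWidget)
    {
        printf("Widget value: %d\n", lWidget->Value);
        Destroy(lWidget);
    }

    // Fat-pointer array: the element count is stored just before the returned pointer.
    float* lSamples = CreateArray<float>(16);
    if (lSamples)
    {
        printf("Array size: %zu\n", ArraySize(lSamples));
        DestroyArray(lSamples);
    }

    return EXIT_SUCCESS;
}

The fat-pointer variant keeps call sites short by hiding the element count inside the allocation, at the cost of one extra size_t per array and the header bookkeeping shown in DestroyArray.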
{"url":"https://codingwithbruno.dev/operators-new-delete/","timestamp":"2024-11-05T21:43:59Z","content_type":"text/html","content_length":"66259","record_id":"<urn:uuid:97562226-148e-43be-97ff-e8f295d07125>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00191.warc.gz"}
Asymmetric Cryptography

• A way to send a message encrypted for specific recipients such that anyone can verify the sender's authenticity but only the intended recipient can read the message contents.
• Asymmetric cryptography, also known as public key cryptography, uses public and private keys to encrypt and decrypt data. The keys are simply large numbers that have been paired together but are not identical (asymmetric). One key in the pair can be shared with everyone; it is called the public key. The other key in the pair is kept secret; it is called the private key. Either of the keys can be used to encrypt a message; the opposite key from the one used to encrypt the message is used for decryption.
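As a concrete toy illustration of the key-pair idea, here is a minimal RSA-style sketch in C++ using the small textbook parameters p = 61 and q = 53; it is for illustration only and is in no way cryptographically secure:

#include <cstdint>
#include <cstdio>

// Modular exponentiation: (base^exp) mod m; the values here are small enough for 64-bit arithmetic.
uint64_t modpow(uint64_t base, uint64_t exp, uint64_t m)
{
    uint64_t result = 1;
    base %= m;
    while (exp > 0)
    {
        if (exp & 1)
            result = (result * base) % m;
        base = (base * base) % m;
        exp >>= 1;
    }
    return result;
}

int main()
{
    // Toy key pair: n = p*q, e is the public exponent, d the private exponent (e*d = 1 mod phi(n)).
    const uint64_t n = 61 * 53;   // 3233
    const uint64_t e = 17;        // public key exponent
    const uint64_t d = 2753;      // private key exponent

    const uint64_t message    = 65;
    const uint64_t ciphertext = modpow(message, e, n);    // encrypt with the public key
    const uint64_t recovered  = modpow(ciphertext, d, n); // decrypt with the private key

    printf("message=%llu ciphertext=%llu recovered=%llu\n",
           (unsigned long long)message,
           (unsigned long long)ciphertext,
           (unsigned long long)recovered);
    return 0;
}

Signing works in the opposite direction: the sender transforms a message digest with the private key, and anyone holding the public key can verify it, which is how the sender's authenticity can be checked by all.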
{"url":"https://cryptowiki.me/wiki/Asymmetric_Cryptography","timestamp":"2024-11-11T04:36:21Z","content_type":"text/html","content_length":"21342","record_id":"<urn:uuid:706d0f81-c974-4142-bab3-7e752d1b0ff8>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00220.warc.gz"}
PC Seminar: Monomial identities in the Weyl algebra • Date: 31 October 2024, 10:15–11:15 • Location: Ångström Laboratory, 64119 • Type: Seminar • Lecturer: Stephan Wagner (Uppsala/Graz) • Organiser: Matematiska institutionen • Contact person: Fiona Skerman Stephan Wagner gives this seminar. Welcome to join! Abstract: The Weyl algebra has two generators D and U and the defining relation DU - UD = 1. One possible interpretation is that D is the differentiation operator, and U is multiplication by x, both acting on polynomials or power series in x. It is possible that distinct monomials in the Weyl algebra are equal, for example UDDU = DUUD. This caused Richard Stanley to put forward several questions and conjectures about equivalence classes of monomials. In this talk, the solutions to these questions will be discussed: specifically, a combinatorial characterization of equivalence classes of monomials and several enumerative results on these equivalence classes. Joint work with Darij Grinberg, Tom Roby, and Mei Yin. This is a seminar in our seminar series on Probability and Combinatorics (PC).
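As a quick aside (not part of the seminar announcement), the example identity UDDU = DUUD follows in two lines from the defining relation DU - UD = 1:

\begin{align*}
UDDU &= UD\,(DU) = UD\,(UD + 1) = UDUD + UD,\\
DUUD &= (DU)\,UD = (UD + 1)\,UD = UDUD + UD.
\end{align*}

Both words reduce to UDUD + UD, so they are equal.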
{"url":"https://www.uu.se/en/department/mathematics/research/probability-theory-and-combinatorics/seminar-series-in-probability-and-combinatorics/archive/2024-10-31-pc-seminar-monomial-identities-in-the-weyl-algebra","timestamp":"2024-11-05T07:43:04Z","content_type":"text/html","content_length":"91576","record_id":"<urn:uuid:7ba46c0a-fee2-4844-acbc-da55b7414736>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00857.warc.gz"}
Convert measurement to lineal metre

convert measurement to lineal metre

Related topics: how to solve large simultaneous equations mathematica algebraic structure\pdf factor theorem of polynomial long division online calculator elipse formula exponent root square how do you tell how many answers a quadradic equation has? binomial fraction calculator online zero-product property calculator

Author Message

Inik
Posted: Friday 08th of Jan 10:50

Vrèrijeon
Hi guys, I require some aid to solve this convert measurement to lineal metre which I'm unable to do on my own. My test on math is due and I need guidance to work on rational equations, cramer's rule and trigonometry. I'm also thinking of hiring a math tutor but they are not economical. So I would really value it if you can extend some guidance in solving this.

Jahm Xjardx
Posted: Saturday 09th of Jan 17:36

Can you give some more details about the problem? I can help if you clarify what exactly you are looking for. Recently I came across a very handy software program that helps in understanding math problems quickly. You can get help on any topic related to convert measurement to lineal metre and more, so I recommend trying it out.
From: Odense, Denmark, EU

Sdefom
Posted: Saturday 09th of Jan 21:01

Koopmansshab
I too have had difficulties in multiplying fractions, graphing and adding functions. I was told that there are a number of programs that I could try out. I tried out many but then the best that I discovered was Algebrator. Merely typed in the problem and hit 'solve'. I got the answer instantly. Additionally, I was guided through to the answer by an effortlessly comprehensible step-by-step process. I have relied on this program for my problems with College Algebra, Intermediate algebra and Algebra 1. If I were you, I would surely go for this.

erx
Posted: Sunday 10th of Jan 16:23

I am a regular user of Algebrator. It not only helps me get my assignments done faster, the detailed explanations provided make understanding the concepts easier. I strongly advise using it to help improve problem solving skills.
From: PL/DE/

Damien Ehi
Posted: Tuesday 12th of Jan 13:49

To begin with, thanks for replying, guys! I really want to buy this program. Can you please tell me how to purchase this software? Can we order it through the web, or do we buy it from some retail store?
From: Canada

Admilal`Leker
Posted: Thursday 14th of Jan 09:47

Thanks, pals, for all your replies. I have got the Algebrator from https://softmath.com/faqs-regarding-algebra.html. Just got it set up and began using it. It's terrific. The exercise questions can test the real knowledge that we have on Basic Math. I am thankful to you all!
From: NW AR,
{"url":"https://softmath.com/algebra-software/long-division/convert-measurement-to-lineal.html","timestamp":"2024-11-10T07:38:34Z","content_type":"text/html","content_length":"42880","record_id":"<urn:uuid:f4cfafb9-e224-403a-b1a0-b57a92e23171>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00897.warc.gz"}
Khan Academy Nonprofit – Free Online Educational Videos

Here is an awesome resource for teachers and students! Khan Academy, a nonprofit global classroom for anyone in the world who has access to a computer, has a library of thousands of videos online that are free. This provides quality instruction to people all over the world, no matter where they are located. They offer Teacher Resources as well. Here is an example of some of the topics they cover:

ALGEBRA (many lessons in each of these subtopics):
• Algebra Intro
• Linear Equations
• Inequalities
• Ratios & Proportions
• Absolute Value
• Exponents and Radicals
• Logarithms
• Polynomials
• Quadratics
• Functions
• Conic Sections
• Complex Numbers
• Matrices

It's easy to see by this listing that there are many lessons from which to choose. Here is a partial list of more topics without subtopics listed:
• American Civics
• Arithmetic & Pre-Algebra
• Art History (for many different eras)
• Astronomy
• Banking & Money
• Biology
• Brain Teasers
• Cryptography
• Calculus
• Chemistry
• Differential Equations
• Economics
• Finance
• Geometry
• Healthcare & Medicine
• History
• Physics
• Statistics
• Trigonometry
• Computer Science

Khan Academy is a global classroom of students who learn at their own rate and choose what they want to study. Here are reviews and stories of the academy so you can read firsthand from teachers and students all over the world.
{"url":"https://healthyhomeblog.com/2020/12/khan-academy-free-online-educational-videos/","timestamp":"2024-11-11T14:48:23Z","content_type":"text/html","content_length":"39196","record_id":"<urn:uuid:41fd07ab-45da-49e4-ad7d-03f2e7651392>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00177.warc.gz"}
An Etymological Dictionary of Astronomy and Astrophysics Fr.: braconnier A person who trespasses on private property, especially to catch fish or game illegally (Dictionary.com). See also → hunter. → poach; → -er. Fr.: braconnage The illegal taking of wildlife, in violation of local, state, federal or international law. → poach; → -ing. Pogson's ratio وابر ِپوگسون vâbar-e Pogson Fr.: rapport de Pogson The constant 2.512, which is the 5th → root of 100 (2.512^5 = 100); the ratio between two successive stellar → magnitudes. → Pogson's relation; → ratio. Pogson's relation بازانش ِپوگسون bâzâneš-e Pogson Fr.: relation de Pogson The equation that expresses the → magnitude → difference between two objects in terms of the → logarithm of the → flux → ratio: I[1]/I[2] = 2.5^(m[2] - m[1]), or m[2] - m[1] = 2.5 log(I[1]/I[2]), where m is → apparent magnitude, I flux, and log the logarithm to base 10. Named after Norman Robert Pogson (1829-1891), the English astronomer, who introduced the magnitude scale in 1856; → relation. Poincaré recurrence theorem فربین ِبازآمد ِپوآنکاره farbin-e bâzâmad-e Poincaré Fr.: théorème de récurrence de Poincaré In an → isolated system, any initial state will occur again in the course of the → evolution of the system over a sufficiently long but finite → time. → Poincaré sphere; → recurrence; → theorem. Poincaré sphere کرهی ِپوآنکاره kore-ye Poincaré Fr.: sphère de Poincaré A representation that permits an easy visualisation of all different states of → polarization of a vector wave. The equator represents → linear polarization; the north pole corresponds to right-circular and the south pole to left- → circular polarization. Named after Henri Poincaré (1854-1912), French mathematician and theoretical physicist, and a philosopher of science; → sphere. Poinsot's motion جنبش ِپویءنسو jonbeš-e Poinsot Fr.: mouvement à la Poinsot The motion of a torque free rotating rigid body in space, in general whose angular velocity vector precesses regularly about the constant angular momentum factor. After Louis Poinsot (1777-1859), French physicist and mathematician. He was the inventor of geometrical mechanics, showing how a system of forces acting on a rigid body could be resolved into a single force and a couple. ۱) نقطه، پنده؛ ۲) آماجیدن 1) noqté (#), pandé (#); 2) âmâjidan Fr.: 1) point; 2) pointer 1a) General: A sharp or tapering end, as of a dagger; a projecting part of anything. 1b) Physics: Position or time of occurrence, as in boiling point, freezing point, etc. 1c) Math.: A dimensionless geometric element whose location in space is defined solely by its coordinates. 2) To direct a telescope toward a particular position on the sky. M.E. point(e); O.Fr. point "dot, mark, place, moment;" L. punctum noun use of neuter p.p. of pungere "to prick, pierce." 1) Noqté, loan from Ar. Pandé, variants in classical dictionaries pindé, pendé, fand "a point, dot, mole, freckle;" cf. Skt. prānta- "point, tip, border," from pra "before, forward," → pro-, + ánta- "end, limit, term;" Pali, panta- "remote, solitary;" Prakrit panta " last;" Sindhi pandu "border of a garment;" Lahnda pand, pad "end, top of sugar cane." 2) à mâjidan, verb from âmâj "aim, goal," from Proto-Iranian base *āma-, from prefix *ā- + *ma- "to measure;" cf. Av. mati- "point, tip;" O.Pers./Av. mā(y)- "to measure;" Pers. mun/mân "measure," as in Pers. terms pirâmun "perimeter," âzmun "test, trial," peymân "measuring, agreement," peymâné "a measure; a cup, bowl;" cf. Skt. mati "measures," matra- "measure," Gk. 
metron "measure," L. metrum; PIE base *me- "to measure." point mass نقطهجرم، پندهجرم، جرم ِنقطهوار، ~ پندهوار noqté jerm, pandé jerm, jerm-e noqtevâr, ~ pandevâr Fr.: masse ponctuelle A hypothetical object which can be thought of as infinitely small. → point; → mass. point source نقطهخن، پندهخن، خن ِنقطهوار، ~ پندهوار noqté xan, pandé xan, xan-e noqtevâr, pande-ye ~ Fr.: source ponctuelle A source of radiation at a great distance from the observer; an ideal source of infinitesimal size. → point; → source. point spread function (PSF) کریای ِگسترش ِنقطه، ~ ~ پنده karyâ-ye gostareš-e noqté, ~ ~ pandé Fr.: fonction d'étalement du point The two-dimensional intensity distribution about the image of a point source. → point; → spread; → function. The two stars that form the front of the Big Dipper's bowl, away from the handle. More specifically, the stars Dubhe (α Ursae Majoris) and Merak (β Ursae Majoris). A line through β to α passes close to the North Star and they are used for finding it. → point + -er. Dorahnemâ, literally "the two guides," from do "two" + rah, râh "way, path" (from Mid.Pers. râh, râs "way, street," also rah, ras "chariot;" from Proto-Iranian *rāθa-; cf. Av. raθa- "chariot;" Skt. rátha- "car, chariot," rathyā- "road;" L. rota "wheel," rotare "to revolve, roll;" Lith. ratas "wheel;" O.H.G. rad; Ger. Rad; Du. rad; O.Ir. roth; PIE *roto- "to run, to turn, to roll") + nemâ agent noun of nemudan "to show" (Mid.Pers. nimūdan, nimây- "to show," from O.Pers./Av. ni- "down; into" (Skt. ni "down," nitaram "downward," Gk. neiothen "from below," cf. E. nether, O.E. niþera, neoþera "down, downward, below, beneath," from P.Gmc. *nitheraz, Du. neder, Ger. nieder; PIE *ni- "down, below") + māy- "to measure;" cf. Skt. mati "measures," matra- "measure;" Gk. metron "measure;" L. metrum; PIE base *me- "to measure"). Fr.: pointage The act or process of directing a telescope. → point. The direction in the sky to which the telescope is pointed. Pointing also describes how accurately a telescope can be pointed toward a particular direction in the sky. Verbal noun of → point. pointing model مدل ِآماجش model-e âmâješ Fr.: modèle de pointage A mathematical model that reproduces the diurnal rotation of the Earth and is used to direct a telescope toward a precise position on the sky. → pointing; → model. Fr.: poise The unit of viscosity in the c.g.s. system, equal to 1 dyne.s/cm^2. Symbol: P Poise, from Jean-Louis-Marie Poiseuille (1797-1869), a French physiologist and physician who studied the flow of liquids through tubes and developed a method for measuring blood pressure. Poiseuille's law قانون ِپوآزوی qânun-e Poiseuille Fr.: loi de Poiseuille In fluid dynamics, the law that the rate of flow of a liquid through a horizontal tube of uniform radius is directly proportional to the pressure of the liquid and the fourth power of the radius of the tube and is inversely proportional to the viscosity of the liquid and the length of the tube. Named after Jean-Louis-Marie Poiseuille (1797-1869), a French physiologist and physician who found the law in 1844; → law. Poisson distribution واباژش ِپوآسون vâbâžeš-e Poisson Fr.: distribution de Poisson A → probability function that characterizes → discrete → random events occurring independently of one another within some definite time or space. It may be regarded as an approximation of the → binomial distribution when the number of events becomes large and the probability of success becomes small. 
The Poisson distribution is expressed by: f(x) = (λ^xe^-λ)/x!, where λ is the mean number of successes in the interval, e is the base of the → natural logarithm, and x is the number of successes we are interested in. Named after Siméon Denis Poisson (1781-1840), French mathematician, who developed the application of Fourier series to physical problems and made major contributions to the theory of probability and to the calculus of variations; → distribution. Poisson's equation هموگش ِپوآسون hamugeš-e Poisson Fr.: équation de Poisson An equation (∇^2φ = 4πGρ) which relates the gravitational (or electromagnetic) potential to the mass density (or charge density). → Poisson distribution; → equation. ۱) قطبی؛ ۲) پلار 1) qotbi; 2) polâr Fr.: 1) polaire; 2) polar 1) Of or pertaining to the pole of any sphere, a magnet, an electric cell, etc. 2) A subclass of → cataclysmic variables, the prototype being AM Herculis. Polars are short-period systems in which a → late-type → main sequence star transfers mass to a highly magnetized → white dwarf. The strong magnetic field (10-70 million → gauss) prevents the formation of an → accretion disk, locks both stars in synchronous rotation and guides the accreting matter to accretion spots which are the source of intense X-ray radiation (material impacts on to the white dwarf where it is radiated away). 1) Adj. of → pole. 2) From polar(ization) + (st)ar, because of their → circularly polarized light. polar alignment آخطش ِقطبی âxateš-e qotbi Fr.: alignement polaire The process or the state of making a → telescope's → polar axis → parallel to the → Earth's → rotation axis, that is with the → true North or → South → celestial pole. When this is accomplished, the sky's motion can be cancelled out simply by turning the axis (either by hand or with a motor → drive) at the same rate as the rotation of the Earth, but in the opposite direction. → polar; → alignment.
{"url":"https://dictionary.obspm.fr/index.php?showAll=1&&search=P&&formSearchTextfield=&&page=28","timestamp":"2024-11-11T23:55:20Z","content_type":"text/html","content_length":"41367","record_id":"<urn:uuid:58b8a7aa-c0d6-4282-a3c3-63247c099ec2>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00070.warc.gz"}
Set of Integers Symbol

Symbol: ℤ
Code Point: U+2124
TeX: \mathbb{Z}

The set of integers symbol (ℤ) is used in math to denote the set of integers. The symbol appears as the Latin Capital Letter Z symbol presented in a double-struck typeface. Typically, the symbol is used in an expression like this: n ∈ ℤ, which states that n is an integer.

The capital Latin letter Z is used in mathematics to represent the set of integers. Usually, the letter is presented with a "double-struck" typeface to indicate that it is the set of integers. The set of rational numbers is denoted with the Latin Capital letter Q presented in a double-struck typeface. The set of real numbers symbol is a Latin capital R presented in double-struck typeface. The set of complex numbers is represented by the Latin capital letter C. The symbol is often presented with a double-struck font face just as with other number sets. The set of complex numbers extends the real numbers.
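For example, in LaTeX documents (assuming the amssymb package is loaded) the double-struck number-set letters mentioned above are produced like this:

% requires \usepackage{amssymb}
$n \in \mathbb{Z}, \qquad q \in \mathbb{Q}, \qquad x \in \mathbb{R}, \qquad z \in \mathbb{C}$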
{"url":"https://wumbo.net/symbols/set-of-integers/","timestamp":"2024-11-04T20:25:42Z","content_type":"text/html","content_length":"13028","record_id":"<urn:uuid:255009b9-b2ae-4a01-a05c-82c1725991e2>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00819.warc.gz"}
The Existential Risk of Math Errors

Mathematical mistake/error-rates limit our understanding of rare risks and ability to defend against them.

How empirically certain can we be in any use of mathematical reasoning to make empirical claims? In contrast to errors in many other forms of knowledge such as medicine or psychology, which have enormous literatures classifying and quantifying error rates, rich methods of meta-analysis and pooling expert belief, and much one can say about the probability of any result being true, mathematical error has been rarely examined except as a possibility and a motivating reason for research into formal methods. There is little known beyond anecdotes about how often published proofs are wrong, in what ways they are wrong, the impact of such errors, how errors vary by subfield, what methods decrease (or increase) errors, and so on. Yet, mathematics is surely not immune to error, and for all the richness of the subject, mathematicians can usually agree at least informally on what has turned out to be right or wrong^1, or good by other criteria like fruitfulness or beauty. A 2004 discussion claims that errors are common but that any such analysis would be unedifying:

An agent might even have beliefs that logically contradict each other. Mersenne believed that 2^67-1 is a prime number, which was proved false in 1903[121ya], cf. Bell (1951[73ya]). [The factorization, discovered by Cole, is: 193,707,721 × 761,838,257,287.] (1979[45ya], 269–270). The explosion in the number of mathematical publications and research reports has been accompanied by a similar explosion in erroneous claims; on the whole, errors are noted by small groups of experts in the area, and many go unheeded. There is nothing philosophically interesting that can be said about such errors.

I disagree. Quantitative approaches cannot capture everything, but why should we believe mathematics is, unlike so many other fields like medicine, uniquely unquantifiable and ineffably inscrutable? As a non-mathematician looking at mathematics largely as a black box, I think such errors are quite interesting, for several reasons: given the extensive role of mathematics throughout the sciences, errors have serious potential impact; but in collecting all the anecdotes I have found, the impact seems skewed towards errors in quasi-formal proofs but not the actual results. One might say that, reviewing math errors, the stylized summary is "although the proofs are usually wrong, the results are usually right." I find this highly surprising and nontrivial, and in striking contrast to other fields I am familiar with, like sociology or psychology, where usually wrong methods lead to wrong results—it is not the case in the Replication Crisis that flaws like p-hacking are merely 'framing a guilty man', because followup with more rigorous methods typically shows effects far smaller than measured or predicted, or outright reversal of direction. This difference may tell us something about what it is that mathematicians do subconsciously when they "do math", or why conjecture resolution times are exponentially-distributed, or what the role of formal methods ought to be, or what we should think about practically important but unresolved problems like P=NP.

Beware of bugs in the above code; I have only proved it correct, not tried it.

When you have eliminated the impossible, whatever remains is often more improbable than your having made a mistake in one of your impossibility proofs.

In some respects, there is nothing to be said; in other respects, there is much to be said.
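An illustrative aside, not from the essay itself: the remark above that conjecture resolution times appear exponentially distributed can be given a minimal simulation sketch. If each open conjecture has the same small, independent chance of being solved in any given year (the 2% figure below is made up for illustration), the waiting time is geometric/exponential and memoryless: a conjecture that has already resisted fifty years of attempts has the same expected remaining wait as a fresh one.

```python
import random

def years_until_solved(p_per_year=0.02, rng=random):
    """Simulate one conjecture: each year, an independent chance p_per_year of a proof."""
    years = 0
    while True:
        years += 1
        if rng.random() < p_per_year:
            return years

random.seed(0)
samples = [years_until_solved() for _ in range(100_000)]
print("mean wait:", round(sum(samples) / len(samples), 1), "years (theory: 1/p = 50)")

# Memorylessness: among conjectures still unsolved after 50 years, the mean
# *additional* wait is again about 50 years, matching the unconditional mean.
survivors = [s - 50 for s in samples if s > 50]
print("mean additional wait after 50 unsolved years:",
      round(sum(survivors) / len(survivors), 1))
```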
“Probing the Improbable: Methodological Challenges for Risks with Low Probabilities and High Stakes” discusses a basic issue with existential threats: any useful discussion will be rigorous, hopefully with physics and math proofs; but proofs themselves are empirically unreliable. Given that mathematical proofs have long been claimed to be the most reliable form of epistemology humans know and the only way to guarantee truth^3, this sets a basic upper bound on how much confidence we can put on any belief, and given the lurking existence of systematic biases, it may even be possible for there to be too much evidence for a claim (et al 2016). There are other rare risks, from mental diseases^4 to hardware errors^5 to how to deal with contradictions^6, but we’ll look at mathematical error. When I asked what it was, he said, ‘It is the probability that the test bomb will ignite the whole atmosphere.’ I decided I would check it myself! The next day when he came for the answers I remarked to him, ‘The arithmetic was apparently correct but I do not know about the formulas for the capture cross sections for oxygen and nitrogen—after all, there could be no experiments at the needed energy levels.’ He replied, like a physicist talking to a mathematician, that he wanted me to check the arithmetic not the physics, and left. I said to myself, ‘What have you done, Hamming, you are involved in risking all of life that is known in the Universe, and you do not know much of an essential part?’ I was pacing up and down the corridor when a friend asked me what was bothering me. I told him. His reply was, ‘Never mind, Hamming, no one will ever blame you.’ This upper bound on our certainty may force us to disregard certain rare risks because the effect of error on our estimates of existential risks is asymmetric: an error will usually reduce the risk, not increase it. The errors are not distributed in any kind of symmetrical around a mean: an existential risk is, by definition, bumping up against the upper bound on possible damage. If we were trying to estimate, say, average human height, and errors were distributed like a bell curve, then we could ignore them. But if we are calculating the risk of a super-asteroid impact which will kill all of humanity, an error which means the super-asteroid will actually kill humanity twice over is irrelevant because it’s the same thing (we can’t die twice); however, the mirror error—the super-asteroid actually killing half of humanity—matters a great deal! XKCD #809 “Los Alamos” How big is this upper bound? Mathematicians have often made errors in proofs. But it’s rarer for ideas to be accepted for a long time and then rejected. But we can divide errors into 2 basic cases corresponding to type I and type II errors: 1. Mistakes where the theorem is still true, but the proof was incorrect (type I) 2. Mistakes where the theorem was false, and the proof was also necessarily incorrect (type II) Before someone comes up with a final answer, a mathematician may have many levels of intuition in formulating & working on the problem, but we’ll consider the final end-product where the mathematician feels satisfied that he has solved it. Case 1 is perhaps the most common case, with innumerable examples; this is sometimes due to mistakes in the proof that anyone would accept is a mistake, but many of these cases are due to changing standards of proof. 
For example, when David Hilbert discovered errors in Euclid’s proofs which no one noticed before, the theorems were still true, and the gaps more due to Hilbert being a modern mathematician thinking in terms of formal systems (which of course Euclid did not think in). (David Hilbert himself turns out to be a useful example of the other kind of error: his famous list of 23 problems was accompanied by definite opinions on the outcome of each problem and sometimes timings, several of which were wrong or questionable^7.) Similarly, early calculus used ‘infinitesimals’ which were sometimes treated as being 0 and sometimes treated as an indefinitely small non-zero number; this was incoherent and strictly speaking, practically all of the calculus results were wrong because they relied on an incoherent concept—but of course the results were some of the greatest mathematical work ever conducted ^8 and when later mathematicians put calculus on a more rigorous footing, they immediately re-derived those results (sometimes with important qualifications), and doubtless as modern math evolves other fields have sometimes needed to go back and clean up the foundations and will in the future.^9 Other cases are more straightforward, with mathematicians publishing multiple proofs/patches^10 or covertly correcting papers^11. Sometimes they make it into textbooks: Carmichael realized that his proof for Carmichael’s totient function conjecture, which is still open, was wrong only after 2 readers saw it in his 1914[110ya] textbook The Theory of Numbers and questioned it. Attempts to formalize results into experimentally-verifiable results (in the case of physics-related math) or machine-checked proofs, or at least some sort of software form, sometimes turns up issues with^12 accepted^13 results^14, although not always important (eg. the correction in 2013). Poincaré points out this mathematical version of the pessimistic induction in “Intuition and Logic in Mathematics”: Strange! If we read over the works of the ancients we are tempted to class them all among the intuitionalists. And yet nature is always the same; it is hardly probable that it has begun in this century to create minds devoted to logic. If we could put ourselves into the flow of ideas which reigned in their time, we should recognize that many of the old geometers were in tendency analysts. Euclid, for example, erected a scientific structure wherein his contemporaries could find no fault. In this vast construction, of which each piece however is due to intuition, we may still today, without much effort, recognize the work of a logician. … What is the cause of this evolution? It is not hard to find. Intuition can not give us rigour, nor even certainty; this has been recognized more and more. Let us cite some examples. We know there exist continuous functions lacking derivatives. Nothing is more shocking to intuition than this proposition which is imposed upon us by logic. Our fathers would not have failed to say: “It is evident that every continuous function has a derivative, since every curve has a tangent.” How can intuition deceive us on this point? … I shall take as second example Dirichlet’s principle on which rest so many theorems of mathematical physics; today we establish it by reasonings very rigorous but very long; heretofore, on the contrary, we were content with a very summary proof. A certain integral depending on an arbitrary function can never vanish. Hence it is concluded that it must have a minimum. 
The flaw in this reasoning strikes us immediately, since we use the abstract term function and are familiar with all the singularities functions can present when the word is understood in the most general sense. But it would not be the same had we used concrete images, had we, for example, considered this function as an electric potential; it would have been thought legitimate to affirm that electrostatic equilibrium can be attained. Yet perhaps a physical comparison would have awakened some vague distrust. But if care had been taken to translate the reasoning into the language of geometry, intermediate between that of analysis and that of physics, doubtless this distrust would not have been produced, and perhaps one might thus, even today, still deceive many readers not …A first question presents itself. Is this evolution ended? Have we finally attained absolute rigour? At each stage of the evolution our fathers also thought they had reached it. If they deceived themselves, do we not likewise cheat ourselves? We believe that in our reasonings we no longer appeal to intuition; the philosophers will tell us this is an illusion. Pure logic could never lead us to anything but tautologies; it could create nothing new; not from it alone can any science issue. In one sense these philosophers are right; to make arithmetic, as to make geometry, or to make any science, something else than pure logic is necessary…

Isaac Newton, incidentally, gave two proofs of the same solution to a problem in probability, one via enumeration and the other more abstract; the enumeration was correct, but the other proof was totally wrong, and this was not noticed for a long time, leading Stigler to remark:^15

If Newton fooled himself, he evidently took with him a succession of readers more than 250 years later. Yet even they should feel no embarrassment. As Augustus De Morgan once wrote, "Everyone makes errors in probabilities, at times, and big ones." (Graves, 1889[135ya], page 459)

Lefschetz was a purely intuitive mathematician. It was said of him that he had never given a completely correct proof, but had never made a wrong guess either. Gian-Carlo Rota^16

Case 2 is disturbing, since it is a case in which we wind up with false beliefs and also false beliefs about our beliefs (we no longer know that we don't know). Case 2 could lead to extinction. The prevalence of case 1 might lead us to be very pessimistic; case 1, case 2, what's the difference? We have demonstrated a large error rate in mathematics (and physics is probably even worse off). Except, errors do not seem to be evenly & randomly distributed between case 1 and case 2. There seem to be far more case 1s than case 2s, as already mentioned in the early calculus example: far more than 50% of the early calculus results were correct when checked more rigorously. Richard Hamming (1998[26ya]) attributes to Ralph Boas the comment, made while editing Mathematical Reviews, that "of the new results in the papers reviewed most are true but the corresponding proofs are perhaps half the time plain wrong". (WP mentions as well that "His first mathematics publication was written…after he discovered an incorrect proof in another paper.") Gian-Carlo Rota gives us an example with Hilbert:

Once more let me begin with Hilbert.
When the Germans were planning to publish Hilbert’s collected papers and to present him with a set on the occasion of one of his later birthdays, they realized that they could not publish the papers in their original versions because they were full of errors, some of them quite serious. Thereupon they hired a young unemployed mathematician, Olga Taussky-Todd, to go over Hilbert’s papers and correct all mistakes. Olga labored for three years; it turned out that all mistakes could be corrected without any major changes in the statement of the theorems. There was one exception, a paper Hilbert wrote in his old age, which could not be fixed; it was a purported proof of the continuum hypothesis, you will find it in a volume of the Mathematische Annalen of the early thirties. At last, on Hilbert’s birthday, a freshly printed set of Hilbert’s collected papers was presented to the Geheimrat. Hilbert leafed through them carefully and did not notice anything.^17 So only one of those papers was irreparable, while all the others were correct and fixable? Rota himself experienced this: Now let us shift to the other end of the spectrum, and allow me to relate another personal anecdote. In the summer of 1979[45ya], while attending a philosophy meeting in Pittsburgh, I was struck with a case of detached retinas. Thanks to Joni’s prompt intervention, I managed to be operated on in the nick of time and my eyesight was saved. On the morning after the operation, while I was lying on a hospital bed with my eyes bandaged, Joni dropped in to visit. Since I was to remain in that Pittsburgh hospital for at least a week, we decided to write a paper. Joni fished a manuscript out of my suitcase, and I mentioned to her that the text had a few mistakes which she could help me fix. There followed twenty minutes of silence while she went through the draft. “ Why, it is all wrong!” she finally remarked in her youthful voice. She was right. Every statement in the manuscript had something wrong. Nevertheless, after laboring for a while, she managed to correct every mistake, and the paper was eventually published. There are two kinds of mistakes. There are fatal mistakes that destroy a theory; but there are also contingent ones, which are useful in testing the stability of a theory. A mathematician of my acquaintance referred me to pg118 of The Axiom of Choice, Jech 1973[51ya]; he had found the sustained effect of the 5 footnotes humorous: 1. The result of Problem 11 contradicts the results announced by Levy [1963[61ya]b]. Unfortunately, the construction presented there cannot be completed. 2. The transfer to ZF was also claimed by Marek [1966[58ya]] but the outlined method appears to be unsatisfactory and has not been published. 3. A contradicting result was announced and later withdrawn by Truss [1970[54ya]]. 4. The example in Problem 22 is a counterexample to another condition of Mostowski, who conjectured its sufficiency and singled out this example as a test case. 5. The independence result contradicts the claim of Felgner [1969[55ya]] that the Cofinality Principle implies the Axiom of Choice. An error has been found by Morris (see Felgner’s corrections to [1969[55ya]]). 
And referred me also to the entries in the index of Fourier Analysis by Tom Körner concerning the problem of the “pointwise convergence of Fourier series”: □ excessive optimism ☆ Cauchy, 3 ☆ Dedekind, Dirichlet and Weierstrass, 67 ☆ Dirichlet, Gauss, Green, Kelvin and Riemann, 126 ☆ Faraday and Morse^18, 333 ☆ Galois, 38 ☆ Hegel, 370 ☆ Hermite and Poincaré. 42 ☆ Pearson, 424 ☆ Poisson, 119 ☆ Steiner, 163 □ excessive pessimism Some problems are notorious for provoking repeated false proofs. P=NP attracts countless cranks and serious attempts, of course, but also amusing is apparently the Jacobian Conjecture: The (in)famous Jacobian Conjecture was considered a theorem since a 1939[85ya] publication by Keller (who claimed to prove it). Then Shafarevich found a new proof and published it in some conference proceedings paper (in early 1950-ies). This conjecture states that any polynomial map from C^2 to C^2 is invertible if its Jacobian is nowhere zero. In 1960-ies, Vitushkin found a counterexample to all the proofs known to date, by constructing a complex analytic map, not invertible and with nowhere vanishing Jacobian. It is still a main source of embarrassment for Arxiv .org contributors, who publish about 3–5 false proofs yearly. Here is a funny refutation for one of the proofs: “Comment on a Paper by Yucai Su On Jacobian Conjecture (2005-12-30)” The problem of Jacobian Conjecture is very hard. Perhaps it will take human being another 100 years to solve it. Your attempt is noble, Maybe the Gods of Olympus will smile on you one day. Do not be too disappointed. B. Sagre has the honor of publishing three wrong proofs and C. Chevalley mistakes a wrong proof for a correct one in the 1950’s in his Math Review comments, and I.R. Shafarevich uses Jacobian Conjecture (to him it is a theorem) as a fact… This look into the proverbial sausage factory should not come as a surprise to anyone taking an Outside View: why wouldn’t we expect any area of intellectual endeavour to have error rates within a few orders of magnitude as any other area? How absurd to think that the rate might be ~0%; but it’s also a little questionable to be as optimistic as Anders Sandberg’s mathematician friend: “he responded that he thought a far smaller number [1%] of papers in math were this flawed.” Other times, the correct result is known and proven, but many are unaware of the answers^19. The famous Millennium Problems—those that have been solved, anyway—have a long history of failed proofs (Fermat surely did not prove Fermat’s Last Theorem & may have realized this only after boasting^20 and neither did Lindemann^21). What explains this? The guiding factor that keeps popping up when mathematicians make leaps seems to go under the name of ‘elegance’ or mathematical beauty, which widely considered important^22^23^24. This imbalance suggests that mathematicians are quite correct when they say proofs are not the heart of mathematics and that they possess insight into math, a 6^th sense for mathematical truth, a nose for aesthetic beauty which correlates with veracity: they disproportionately go after theorems rather than their negations. Why this is so, I do not know. Outright Platonism like Godel apparently believed in seems unlikely—mathematical expertise resembles a complex skill like chess-playing more than it does a sensory modality like vision. 
Possibly they have well-developed heuristics and short-cuts and they focus on the subsets of results on which those heuristics work well (the drunk searching under the spotlight), or perhaps they do run full rigorous proofs but are doing so subconsciously and merely express themselves ineptly consciously with omissions and erroneous formulations ‘left as an exercise for the reader’^25. We could try to justify the heuristic paradigm by appealing to as-yet poorly understood aspects of the brain, like our visual cortex: argue that what is going on is that mathematicians are subconsciously doing tremendous amounts of computation (like we do tremendous amounts of computation in a thought as ordinary as recognizing a face), which they are unable to bring up explicitly. So after prolonged introspection and some comparatively simple explicit symbol manipulation or thought, they feel that a conjecture is true and this is due to a summary of said massive computations. Perhaps they are checking many instances? Perhaps they are white-box testing and looking for boundaries? Could there be some sort of “logical probability” where going down possible proof-paths yield probabilistic information about the final target theorem, maybe in some sort of Monte Carlo tree search of proof-trees, in a broader POMDP framework (eg. 2010)?^26 Does sleep serve to consolidate & prune & replay memories of incomplete lines of thought, finetuning heuristics or intuitions for future attacks and getting deeper into a problem (perhaps analogous to expert iteration)? Reading great mathematicians like Terence Tao discuss the heuristics they use on unsolved problems^27, they bear some resemblances to computer science techniques. This would be consistent with a preliminary observation about how long it takes to solve mathematical conjectures: while inference is rendered difficult by the exponential growth in the global population and of mathematicians, the distribution of time-to-solution roughly matches a memoryless exponential distribution (one with a constant chance of solving it in any time period) rather than a more intuitive distribution like a type 1 survivorship curve (where a conjecture gets easier to solve over time, perhaps as related mathematical knowledge accumulates), suggesting a model of mathematical activity in which many independent random attempts are made, each with a small chance of success, and eventually one succeeds. This idea of extensive unconscious computation neatly accords with Poincaré’s account of mathematical creativity in which after long fruitless effort (preparation), he abandoned the problem for a time and engaged in ordinary activities (incubation), is suddenly struck by an answer or insight, and then verifies its correctness consciously. The existence of an incubation effect seems confirmed by psychological studies and particular the observation that incub ation effects increase with the time allowed for incubation & also if the subject does not undertake demanding mental tasks during the incubation period (see 2009), and is consistent with extensive unconscious computation. Some of this computation may happen during sleep; sleep & cognition have long been associated in a murky fashion (“sleep on it”), but it may have to do with reviewing the events of the day & difficult tasks, with relevant memories reinforced or perhaps more thinking going on. I’ve seen more than one suggestion of this, and mathematician Richard K. 
Guy suggests this as well.^28 (It’s unclear how many results occur this way; Stanislaw Ulam mentions finding one result but never again^29; J Thomas mentions one success but one failure by a teacher^30; R. W. Thomason dreamed of a dead friend making a clearly false claim and published material based on his disproof of the ghost’s claim^31; and Leonard Eugene Dickson reportedly had a useful dream & an early survey of 69 mathematicians yielded 63 nulls, 5 low-quality results, and 1 hit^32.) Heuristics, however, do not generalize, and fail outside their particular domain. Are we fortunate enough that the domain mathematicians work in is—deliberately or accidentally—just that domain in which their heuristics/intuition succeeds? Sandberg suggests not: Unfortunately I suspect that the connoisseurship of mathematicians for truth might be local to their domain. I have discussed with friends about how “brittle” different mathematical domains are, and our consensus is that there are definitely differences between logic, geometry and calculus. Philosophers also seem to have a good nose for what works or doesn’t in their domain, but it doesn’t seem to carry over to other domains. Now moving outside to applied domains things get even trickier. There doesn’t seem to be the same “nose for truth” in risk assessment, perhaps because it is an interdisciplinary, messy domain. The cognitive abilities that help detect correct decisions are likely local to particular domains, trained through experience and maybe talent (ie. some conformity between neural pathways and deep properties of the domain). The only thing that remains is general-purpose intelligence, and that has its own limitations. Leslie Lamport advocates for machine-checked proofs and a more rigorous style of proofs similar to natural deduction, noting a mathematician acquaintance guesses at a broad error rate of 1⁄3^33 and that he routinely found mistakes in his own proofs and, worse, believed false conjectures^34. We can probably add software to that list: early software engineering work found that, dismayingly, bug rates seem to be simply a function of lines of code, and one would expect diseconomies of scale . So one would expect that in going from the ~4,000 lines of code of the Microsoft DOS operating system kernel to the ~50,000,000 lines of code in Windows Server 2003[21ya] (with full systems of applications and libraries being even larger: the comprehensive Debian repository in 2007[17ya] contained ~323,551,126 lines of code) that the number of active bugs at any time would be… fairly large. Mathematical software is hopefully better, but practitioners still run into issues (eg. et al 2014, et al 2017) and I don’t know of any research pinning down how buggy key mathematical systems like Mathematica are or how much published mathematics may be erroneous due to bugs. This general problem led to predictions of doom and spurred much research into automated proof-checking, static analysis, and functional languages^35. The doom, however, did not manifest and arguably operating systems & applications are more reliable in the 2000s+ than they were in the 1980^–[10]1990[34ya]s^36 (eg. the general disappearance of the Blue Screen of Death). Users may not appreciate this point, but programmers who happen to think one day of just how the sausage of Gmail is made—how many interacting technologies and stacks of formats and protocols are involved—may get the shakes and wonder how it could ever work, much less be working at that moment. 
The answer is not really clear: it seems to be a combination of abundant computing resources driving down per-line error rates by avoiding optimization, modularization reducing interactions between lines, greater use of testing invoking an adversarial attitude to one's code, and a light sprinkling of formal methods & static checks^37. While hopeful, it's not clear how many of these would apply to existential risks: how does one use randomized testing on theories of existential risk, or trade off code clarity for computing power?

So we might forgive case 1 errors entirely: if a community of mathematicians take an 'incorrect' proof about a particular existential risk and ratify it (either by verifying the proof subconsciously or seeing what their heuristics say), it not being written out because it would be too tedious^38, then we may be more confident in it^39 than lumping the two error rates together. Case 2 errors are the problem, and they can sometimes be systematic. Most dramatically, an entire group of papers with all their results may turn out to be wrong because they made a since-disproved assumption:

In the 1970s and 1980s, mathematicians discovered that framed manifolds with Arf-Kervaire invariant equal to 1—oddball manifolds not surgically related to a sphere—do in fact exist in the first five dimensions on the list: 2, 6, 14, 30 and 62. A clear pattern seemed to be established, and many mathematicians felt confident that this pattern would continue in higher dimensions…Researchers developed what Ravenel calls an entire "cosmology" of conjectures based on the assumption that manifolds with Arf-Kervaire invariant equal to 1 exist in all dimensions of the form 2^n − 2. Many called the notion that these manifolds might not exist the "Doomsday Hypothesis," as it would wipe out a large body of research. Earlier this year, Victor Snaith of the University of Sheffield in England published a book about this research, warning in the preface, "…this might turn out to be a book about things which do not exist." Just weeks after Snaith's book appeared, Hopkins announced on April 21 that Snaith's worst fears were justified: that Hopkins, Hill and Ravenel had proved that no manifolds of Arf-Kervaire invariant equal to 1 exist in dimensions 254 and higher. Dimension 126, the only one not covered by their analysis, remains a mystery. The new finding is convincing, even though it overturns many mathematicians' expectations, Hovey said.^40

The parallel postulate is another fascinating example of mathematical error of the second kind; its history is replete with false proofs even by greats like Lagrange (on what strike the modern reader as bizarre grounds)^41, self-deception, and misunderstandings—Giovanni Girolamo Saccheri developed a non-Euclidean geometry flawlessly but concluded it was flawed:

The second possibility turned out to be harder to refute. In fact he was unable to derive a logical contradiction and instead derived many non-intuitive results; for example that triangles have a maximum finite area and that there is an absolute unit of length. He finally concluded that: "the hypothesis of the acute angle is absolutely false; because it is repugnant to the nature of straight lines". Today, his results are theorems of hyperbolic geometry.
We could look upon Type II errors as having a benevolent aspect: they show both that our existing methods are too weak & informal and that our intuition/heuristics break down at it—implying that all previous mathematical effort has been systematically misled in avoiding that area (as empty), and that there is much low-hanging fruit. (Consider how many scores or hundreds of key theorems were proven by the very first mathematicians to work in the non-Euclidean geometries!) Should such widely-believed conjectures as P ≠ NP^42 or the Riemann hypothesis turn out be false, then because they are assumed by so many existing proofs, entire textbook chapters (and perhaps textbooks) would disappear—and our previous estimates of error rates will turn out to have been substantial underestimates. But it may be a cloud with a silver lining: it is not what you don’t know that’s dangerous, but what you know that ain’t so. “A credo of sorts”; Vaughan Jones (Truth in Mathematics, 1998[26ya]), pg208–209: Proofs are indispensable, but I would say they are necessary but not sufficient for mathematical truth, at least truth as perceived by the individual. To justify this attitude let me invoke two experiences of current mathematics, which very few mathematicians today have escaped. The first is computer programming. To write a short program, say 100 lines of C code, is a relatively painless experience. The debugging will take longer than the writing, but it will not entail suicidal thoughts. However, should an inexperienced programmer undertake to write a slightly longer program, say 1000[1,024ya] lines, distressing results will follow. The debugging process becomes an emotional nightmare in which one will doubt one’s own sanity. One will certainly insult the compiler in words that are inappropriate for this essay. The mathematician, having gone through this torture, cannot but ask: “Have I ever subjected the proofs of any of my theorems to such close scrutiny?” In my case at least the answer is surely “no”. So while I do not doubt that my proofs are correct (at least the important ones), my belief in the results needs bolstering. Compare this with the debugging process. At the end of debugging we are happy with our program because of the consistency of the output it gives, not because we feel we have proved it correct—after all we did that at least twenty times while debugging and we were wrong every time. Why not a twenty-first? In fact we are acutely aware that our poor program has only been tested with a limited set of inputs and we fully expect more bugs to manifest themselves when inputs are used which we have not yet considered. If the program is sufficiently important, it will be further debugged in the course of time until it becomes secure with respect to all inputs. (With much larger programs this will never happen.) So it is with our theorems. Although we may have proofs galore and a rich surrounding structure, if the result is at all difficult it is only the test of time that will cause acceptance of the “truth” of the result. The second experience concerning the need for supplements to proof is one which I used to dislike intensely, but have come to appreciate and even search for. It is the situation where one has two watertight, well-designed arguments—that lead inexorably to opposite conclusions. Remember that research in mathematics involves a foray into the unknown. We may not know which of the two conclusions is correct or even have any feeling or guess. 
Proof at this point is our only arbiter. And it seems to have let us down. I have known myself to be in this situation for months on end. It induces obsessive and anti-social behavior. Perhaps we have found an inconsistency in mathematics. But no, eventually some crack is seen in one of the arguments and it begins to look more and more shaky. Eventually we kick ourselves for being so utterly stupid and life goes on. But it was no tool of logic that saved us. The search for a chink in the armour often involved many tricks including elaborate thought experiments and perhaps computer calculations. Much structural understanding is created, which is why I now so value this process. One's feeling of having obtained truth at the end is approaching the absolute. Though I should add that I have been forced to reverse the conclusion on occasions…

I have never written an equation or line of code that I was 100% confident of, or which I thought had less than a 1-in-trillions chance of it being wrong in some important way. Software & real-world systems are too complex & fragile. Every part of my understanding, the hardware, or the real-world context is less reliable than 1-in-trillions. Let's consider potential problems with our understanding of even the most trivial-seeming arithmetic comparison, checking that 'x + x = 2x'. Consider a simple-seeming line of conditional code for the arithmetical tautology: x + x == 2*x. How could this possibly ever go wrong? Well…

• Where did you initialize x? Was it ever initialized to a non-null value? (Or has it been working accidentally because it uses uninitialized memory which just happened to have a workable value?)
• Is this comparison by reference, equality, hash, or some other way entirely?
• Which integer type is this? Does that integer type overflow?
  □ In some languages, x might be a string being parsed as a number. (Javascript is infamous for this due to its type coercion and redefining operators; this will evaluate to true: x = "1"; 2*x == 2 && x + x == "11";.)
  □ In highly dynamic or object-oriented languages, +, ==, and * could all have been redefined per x and mean… just about anything, and do anything as side-effects of methods like getters.
• Does multiplying integers like this potentially trigger undefined behavior and arbitrary compiler 'optimizations'?
  □ If it can never overflow because it's a "big int" with arbitrary-precision arithmetic, how much RAM does this allocate? What happens if the result is larger than fits in RAM? (How would evaluation order like laziness affect this?)
  □ How much do you know about your big-integer library to begin with? (They do have bugs, like all software.)
• If this is floating point (do you know for sure?), won't this usually be false at larger/smaller numbers?
  □ What about floating point rounding or other exotic modes?
  □ Or multiplying special values like NaN or +Inf vs −Inf?
  □ If you know about all this and really did want that… how sure are you that the compiler isn't incorrectly equationally-reasoning that they are equal and rewriting them behind your back?
• What is the operator precedence of this code?
  □ By the way, are you sure it's a conditional at all? Perhaps it was parsed as (x + x == 2) * x?
• What is the evaluation order of this code?
• This is serial-threaded code, right? No parallelism anywhere? If there is…
  □ Trick question: you thought there wasn't, but there was anyway because all systems are inherently parallel now.
So there are dangers around cache coherency & races, leading to many classes of attacks/errors like Spectre. And x here can change: the direct way of computing it would involve at least 5 values being stored & referenced somewhere. (The 3 written x in the equation, then the sum of two, and then the multiplied version.) • How likely is your computation to be corrupted or subverted by an attacker doing something like a buffer overflow attack or a row hammer attack, which winds up clobbering your x? • What happens if the computer halts or freezes or is DoSed or the power goes out halfway through the computation? • Why do you believe the hardware will always store & execute everything correctly? □ What are the odds that the hardware will be hit by a cosmic ray during any of these operations? Even ECC RAM is increasingly unreliable. □ Or that your RAM has a permanent fault in it? (For several years, compiling this website would occasionally result in strange segfaults in apparently correct regexp code; this turned out to be a bad RAM chip where ordinary RAM use simply didn’t stress it enough.) □ What are the odds that the CPU core in question is sometimes unable to add or multiply correctly? (If you’re a hyperscaler, they exist in your fleet of servers somewhere!) □ How do you know all instances of x were never corrupted anywhere during storage or transmission? I can safely say that in my programming life, I have written many fewer than trillions of lines of code, and I have made many more of these errors than 0. So I infer that for even the simplest-seeming code, I am unable to write code merely as reliable as a 1-in-trillions error rate. 1. Examples like the ABC conjecture being the exceptions that prove the rule.↩︎ 2. Citations: ☆ Bell, E.T.: 1951[73ya], “The Queen of Mathematics”, reprinted in J. R. Newman (ed.), The World of Mathematics, 1956 ☆ De Milo, R. Lipton, and A. Perlis: 1979[45ya], “Social Processes and Proofs of Theorems and Programs”, Communication of the ACM, Vol. 22, No. 5. Reprinted in T. Tymozcko (ed.), New Directions in the Philosophy of Mathematics, Princeton University Press (1998[26ya]). Page numbers refer to the book 3. As a pragmatist & empiricist, I must have the temerity to disagree with the likes of Plato about the role of proof: if mathematical proof truly was so reliable, then I would have little to write about in this essay. However rigorous logic is, it is still created & used by fallible humans. There is no ‘Platonia’ we can tap into to obtain transcendent truth.↩︎ 4. There are various delusions (eg. Cotard delusion), false memory syndromes, compulsive lying (pseudologia fantastica), disorders provoking confabulation such as the general symptom of anosognosia; in a dramatic example of how the mind is what the brain does, some anosognosia can be temporarily cured by squirting cold water in an ear; from “The Apologist and the Revolutionary”: Take the example of the woman discussed in Lishman’s Organic Psychiatry. After a right-hemisphere stroke, she lost movement in her left arm but continuously denied it. When the doctor asked her to move her arm, and she observed it not moving, she claimed that it wasn’t actually her arm, it was her daughter’s. Why was her daughter’s arm attached to her shoulder? The patient claimed her daughter had been there in the bed with her all week. Why was her wedding ring on her daughter’s hand? The patient said her daughter had borrowed it. Where was the patient’s arm? 
The patient “turned her head and searched in a bemused way over her left shoulder”…In any case, a patient who has been denying paralysis for weeks or months will, upon having cold water placed in the ear, admit to paralysis, admit to having been paralyzed the past few weeks or months, and express bewilderment at having ever denied such an obvious fact. And then the effect wears off, and the patient not only denies the paralysis but denies ever having admitted to it. 5. “Mathematics in the Age of the Turing Machine”, 2014: As an example, we will calculate the expected number of soft errors in one of the mathematical calculations of §1.17. The Atlas Project calculation of the E[8] character table was a 77 hour calculation that required 64 gigabytes RAM [Ada07]. Soft errors rates are generally measured in units of failures-in-time (FIT). One FIT is defined as one error per 10^9 hours of operation. If we assume a soft error rate of 10^3 FIT per Mbit, (which is a typical rate for a modern memory device operating at sea level 15 [Tez04]), then we would expect there to be about 40 soft errors in memory during the calculation: This example shows that soft errors can be a realistic concern in mathematical calculations. (As added confirmation, the E[8] calculation has now been repeated about 5 times with identical results.)…The soft error rate is remarkably sensitive to elevation; a calculation in Denver produces about three times more soft errors than the same calculation on identical hardware in Boston…Soft errors are depressing news in the ultra-reliable world of proof assistants. Alpha particles rain on perfect and imperfect software alike. In fact, because the number of soft errors is proportional to the execution time of a calculation, by being slow and methodical, the probability of a soft error during a calculation inside a proof assistant can be much higher than the probability when done outside. 6. Most/all math results require their system to be consistent; but this is one particular philosophical view. Ludwig Wittgenstein, in Remarks on the Foundations of Mathematics: If a contradiction were now actually found in arithmetic—that would only prove that an arithmetic with such a contradiction in it could render very good service; and it would be better for us to modify our concept of the certainty required, than to say it would really not yet have been a proper arithmetic. Saul Kripke, reconstructing a Wittgensteinian skeptical argument, points out one way to react to such issues: A skeptical solution of a philosophical problem begins… by conceding that the skeptic’s negative assertions are unanswerable. Nevertheless our ordinary practice or belief is justified because-contrary appearances notwithstanding-it need not require the justification the sceptic has shown to be untenable. And much of the value of the sceptical argument consists precisely in the fact that he has shown that an ordinary practice, if it is to be defended at all, cannot be defended in a certain way. 7. Lipton lists several: 1. the transcendality of 2^√2 and e^π: resolved as predicted, but >78 years faster than he predicted. 2. 
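A quick back-of-the-envelope check (my own arithmetic sketch, not from the source) of the "about 40 soft errors" figure quoted in footnote 5 above, using only the numbers stated there: a 77-hour run, 64 gigabytes of RAM, and a soft-error rate of 1000 FIT per Mbit, where 1 FIT is one failure per 10^9 device-hours.

```python
# Figures taken from the quoted footnote; the GiB-to-Mbit conversion is my assumption.
hours = 77
ram_mbit = 64 * 1024 * 8      # 64 GiB of RAM expressed in megabits
fit_per_mbit = 1000           # soft-error rate: failures per 1e9 hours, per Mbit

errors_per_hour = ram_mbit * fit_per_mbit / 1e9
print(round(errors_per_hour * hours, 1))  # ~40.4, consistent with "about 40 soft errors"
```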
proof of the consistency of arithmetic: prediction that arithmetic was consistent and this was provable was falsified (Goedel showing it is unprovable) One could add to this Hilbert list: the continuum hypothesis (independent); and the algorithm for solving Diophantines (impossible to give, to the surprise of Georg Kreisel who said reviewing one of the papers “Well, that’s not the way it’s gonna go.”). From MathOverflow: Hilbert’s 21^st problem, on the existence of linear DEs with prescribed monodromy group, was for a long time thought to have been solved by Plemelj in 1908[116ya]. In fact, Plemelj died in 1967[57ya] still believing he had solved the problem. However, in 1989[35ya], Bolibruch discovered a counterexample. Details are in the book The Riemann-Hilbert Problem by Anosov and Bolibruch (Vieweg-Teubner 1994[30ya]), and a nice popular recounting of the story is in Ben Yandell’s The Honors Class (A K Peters 2002[22ya]). Lipton also provides as examples: □ Warren Hirsch’s polytope conjecture □ Subhash Khot’s conjecture that his Unique Games problem is NP-hard (not falsified but substantially weakened) □ the search for a proof of Euclid’s fifth postulate (covered already) □ George Pólya’s prime factorization conjecture □ Euler’s generalization of Fermat’s last theorem □ Virginia Ragsdale’s combinatorial conjecture, related to a Hilbert problem □ Erik Zeeman’s knot-tying conjecture; the resolution is too good to not quote: After trying to prove this for almost ten years, one day he worked on the opposite direction, and solved it in hours. □ a von Neumann topological conjecture □ conventional wisdom in complexity theory “that bounded-width programs could not compute the majority function, and many other functions” □ ditto, “Most believed that nondeterministic logspace (NLOG) is not closed under complement.” □ Béla Julesz’s human vision statistics conjecture 8. John von Neumann, “The Mathematician” (1947[77ya]): That Euclid’s axiomatization does at some minor points not meet the modern requirements of absolute axiomatic rigour is of lesser importance in this respect…The first formulations of the calculus were not even mathematically rigorous. An inexact, semi-physical formulation was the only one available for over a hundred and fifty years after Newton! And yet, some of the most important advances of analysis took place during this period, against this inexact, mathematically inadequate background! Some of the leading mathematical spirits of the period were clearly not rigorous, like Euler; but others, in the main, were, like Gauss or Jacobi. The development was as confused and ambiguous as can be, and its relation to empiricism was certainly not according to our present (or Euclid’s) ideas of abstraction and rigour. Yet no mathematician would want to exclude it from the fold—that period produced mathematics as first-class as ever existed! And even after the reign of rigour was essentially re-established with Cauchy, a very peculiar relapse into semi-physical methods took place with Riemann. 9. Stephen Wolfram mentions a recent example I hadn’t run into used before, in a long discussion of expanding Mathematica to automatically incorporate old papers’ results Of course, there are all sorts of practical issues. Newer papers are predominantly in T[e]X, so it’s not too difficult to pull out theorems with all their mathematical notation. But older papers need to be scanned, which requires math OCR, which has yet to be properly developed. 
Then there are issues like whether theorems stated in papers are actually valid. And even whether theorems that were considered valid, say, 100 years ago are still considered valid today. For example, for continued fractions, there are lots of pre-1950 theorems that were successfully proved in their time, but which ignore branch cuts, and so wouldn’t be considered correct today. And in the end of course it requires lots of actual, skilled mathematicians to guide the curation process, and to encode theorems. But in a sense this kind of mobilization of mathematicians is not completely unfamiliar; it’s something like what was needed when Zentralblatt was started in 1931[93ya], or Mathematical Reviews in 1941[83ya]. 10. “Desperately seeking mathematical proof”, Melvyn B. Nathanson 2009[15ya]: The history of mathematics is full of philosophically and ethically troubling reports about bad proofs of theorems. For example, the fundamental theorem of algebra states that every polynomial of degree n with complex coefficients has exactly n complex roots. D’Alembert published a proof in 1746[278ya], and the theorem became known “D’Alembert’s theorem”, but the proof was wrong. Gauss published his first proof of the fundamental theorem in 1799[225ya], but this, too, had gaps. Gauss’s subsequent proofs, in 1816[208ya] and 1849[175ya], were OK. It seems to have been hard to determine if a proof of the fundamental theorem of algebra was correct. Why? Poincaré was awarded a prize from King Oscar II of Sweden and Norway for a paper on the three-body problem, and his paper was published in Acta Mathematica in 1890[134ya]. But the published paper was not the prize-winning paper. The paper that won the prize contained serious mistakes, and Poincaré and other mathematicians, most importantly, Mittag-Leffler, engaged in a conspiracy to suppress the truth and to replace the erroneous paper with an extensively altered and corrected one. The three-body problem is fascinating as it gives us an example of a bad proof by Poincaré & attempt to cover it up, but also an example of an impossibility proof: Bruns & Poincaré proved in 1887 [137ya] that the usual approaches could not work, typically interpreted as the 3 or n-body problem being unsolvable. Except in 1906[118ya]/1909[115ya], Karl F. Sundman provided an (impractical) algorithm using different techniques to solve it. See “The Solution of the n-body Problem” & “A Visit to the Newtonian N-body Problem via Elementary Complex Variables”.↩︎ 11. “Mathematics in the Age of the Turing Machine”, 2014: Why use computers to verify mathematics? The simple answer is that carefully implemented proof checkers make fewer errors than mathematicians (except J.-P. Serre). Incorrect proofs of correct statements are so abundant that they are impossible to catalogue. Ralph Boas, former executive editor of Math Reviews, once remarked that proofs are wrong “half the time” [Aus08]. Kempe’s claimed proof of the four-color theorem stood for more than a decade before Heawood refuted it [Mac01, p. 115]. “More than a thousand false proofs [of Fermat’s Last Theorem] were published between 1908[116ya] and 1912[112ya] alone” [Cor10]. Many published theorems are like the hanging chad ballots of the 2000[24ya] U.S. presidential election, with scrawls too ambivalent for a clear yea or nay. One mathematician even proposed to me that a new journal is needed that unlike the others only publishes reliable results. 
Euclid gave us a method, but even he erred in the proof of the very first proposition of the Elements when he assumed without proof that two circles, each passing through the other’s center, must intersect. The concept that is needed to repair the gap in Euclid’s reasoning is an intermediate value theorem. This defect was not remedied until Hilbert’s Foundations of Geometry. Examples of widely accepted proofs of false or unprovable statements show that our methods of proof-checking are far from perfect. Lagrange thought he had a proof of the parallel postulate, but had enough doubt in his argument to withhold it from publication. In some cases, entire schools have become sloppy, such as the Italian school of algebraic geometry or real analysis before the revolution in rigor towards the end of the nineteenth century. Plemelj’s 1908[116ya] accepted solution to Hilbert’s 21^st problem on the monodromy of linear differential equations was refuted in 1989[35ya] by Bolibruch. Auslander gives the example of a theorem^12 published by Waraskiewicz in 1937[87ya], generalized by Choquet in 1944[80ya], then refuted with a counterexample by Bing in 1948[76ya] [Aus08]. Another example is the approximation problem for Sobolev maps between two manifolds [Bet91], which contains a faulty proof of an incorrect statement. The corrected theorem appears in [HL03]. Such examples are so plentiful that a Wiki page has been set up to classify them, with references to longer discussions at Math Overflow [Wik11], [Ove09], [Ove10]. 12. “Computational Discovery in Pure Mathematics”, Simon Colton 2007[17ya]: A more recent example was the discovery that Andrew Wiles’ original proof of Fermat’s Last Theorem was flawed (but not, as it turned out, fatally flawed, as Wiles managed to fix the problem (Singh, 1997[27ya]))…More recently, Larry Wos has been using Otter to find smaller proofs of theorems than the current ones. To this end, he uses Otter to find more succinct methods than those originally proposed. This often results in detecting double negations and removing unnecessary lemmas, some of which were thought to be indispensable. (Wos, 1996[28ya]) presents a methodology using a strategy known as resonance to search for elegant proofs with Otter. He gives examples from mathematics and logic, and also argues that this work also implications for other fields such as circuit design. (Fleuriot & Paulson, 1998[26ya]) have studied the geometric proofs in Newton’s Principia and investigated ways to prove them automatically with the Isabelle interactive theorem prover (Paulson, 1994[30ya]). To do this, they formalized the Principia in both Euclidean geometry and non-standard analysis. While working through one of the key results (proposition 11 of book 1, the Kepler problem) they discovered an anomaly in the reasoning. Newton was appealing to a cross-multiplication result which wasn’t true for infinitesimals or infinite numbers. Isabelle could therefore not prove the result, but Fleuriot managed to derive an alternative proof of the theorem that the system found acceptable. 13. Colton 2007[17ya]: “For example, Heawood discovered a flaw in Kempe’s 1879[145ya] proof of the four colour theorem,^2 which had been accepted for 11 years.” It would ultimately be proved with a computer in 1976[48ya]—maybe.↩︎ 14. 2014: Theorems that are calculations or enumerations are especially prone to error. Feynman laments, “I don’t notice in the morass of things that something, a little limit or sign, goes wrong… . . 
I have mathematically proven to myself so many things that aren’t true” [Fey00, p. 885]. Elsewhere, Feynman describes two teams of physicists who carried out a two-year calculation of the electron magnetic moment and independently arrived at the same predicted value. When experiment disagreed with prediction, the discrepancy was eventually traced to an arithmetic error made by the physicists, whose calculations were not so independent as originally believed [Fey85, p. 117]. Pontryagin and Rokhlin erred in computing stable homotopy groups of spheres. Little’s tables of knots from 1885[139ya] contains duplicate entries that went undetected until 1974[50ya]. In enumerative geometry, in 1848[176ya], Steiner counted 7776 plane conics tangent to 5 general plane conics, when there are actually only 3264. One of the most persistent blunders in the history of mathematics has been the misclassification (or misdefinition) of convex Archimedean polyhedra. Time and again, the pseudo rhombic cuboctahedron has been overlooked or illogically excluded from the classification (Figure 21) [Grue11]. 15. Stigler is also kind in 2007, emphasizing that while many of the statisticians involved in maximum likelihood incorrectly proved false claims, they were very productive mistakes.↩︎ 16. “Fine Hall in its golden age: Remembrances of Princeton in the early fifties”, Indiscrete Thoughts↩︎ 17. “Ten Lessons I wish I had been Taught”, Gian-Carlo 1996↩︎ 18. There are 2 20^th century mathematicians, born too late to work with Faraday, and the telegraph inventor Samuel Morse who while overlapping with Faraday, has a Wikipedia entry mentioning no work in mathematics; I do not know which Morse may be meant.↩︎ 19. An example of this would be “An Enduring Error”, Branko Grünbaum: Mathematical truths are immutable, but mathematicians do make errors, especially when carrying out non-trivial enumerations. Some of the errors are “innocent”—plain mistakes that get corrected as soon as an independent enumeration is carried out. For example, Daublebsky [14] in 1895[129ya] found that there are precisely 228 types of configurations (123), that is, collections of 12 lines and 12 points, each incident with three of the others. In fact, as found by Gropp [19] in 1990[34ya], the correct number is 229. Another example is provided by the enumeration of the uniform tilings of the 3-dimensional space by Andreini [1] in 1905[119ya]; he claimed that there are precisely 25 types. However, as shown [20] in 1994[30ya], the correct number is 28. Andreini listed some tilings that should not have been included, and missed several others—but again, these are simple errors easily corrected…It is surprising how errors of this type escape detection for a long time, even though there is frequent mention of the results. One example is provided by the enumeration of 4-dimensional simple polytopes with 8 facets, by Brückner [7] in 1909[115ya]. He replaces this enumeration by that of 3-dimensional “diagrams” that he interpreted as Schlegel diagrams of convex 4-polytopes, and claimed that the enumeration of these objects is equivalent to that of the polytopes. However, aside from several “innocent” mistakes in his enumeration, there is a fundamental error: While to all 4-polytopes correspond 3-dimensional diagrams, there is no reason to assume that every diagram arises from a polytope. 
At the time of Brückner’s paper, even the corresponding fact about 3-polyhedra and 2-dimensional diagrams has not yet been established—this followed only from Steinitz’s characterization of complexes that determine convex polyhedra [45], [46]. In fact, in the case considered by Brückner, the assumption is not only unjustified, but actually wrong: One of Brückner’s polytopes does not exist, see [25]. …Polyhedra have been studied since antiquity. It is, therefore, rather surprising that even concerning some of the polyhedra known since that time there is a lot of confusion, regarding both terminology and essence. But even more unexpected is the fact that many expositions of this topic commit serious mathematical and logical errors. Moreover, this happened not once or twice, but many times over the centuries, and continues to this day in many printed and electronic publications; the most recent case is in the second issue for 2008[16ya] of this journal…With our understandings and exclusions, there are fourteen convex polyhedra that satisfy the local criterion and should be called “Archimedean”, but only thirteen that satisfy the global criterion and are appropriately called “uniform” (or “semiregular”). Representatives of the thirteen uniform convex polyhedra are shown in the sources mentioned above, while the fourteenth polyhedron is illustrated in Figure 1. It satisfies the local criterion but not the global one, and therefore is—in our terminology—Archimedean but not uniform. The history of the realization that the local criterion leads to fourteen polyhedra will be discussed in the next section; it is remarkable that this development occurred only in the 20^th century. This implies that prior to the twentieth century all enumerations of the polyhedra satisfying the local criterion were mistaken. Unfortunately, many later enumerations make the same error. 20. Dana Mackinzie, The Universe in Zero Words: The Story of Mathematics as Told through Equations (as quoted by John D. Cook): Fermat repeatedly bragged about the n = 3 and n = 4 cases and posed them as challenges to other mathematicians … But he never mentioned the general case, n = 5 and higher, in any of his letters. Why such restraint? Most likely, Weil argues, because Fermat had realized that his “truly wonderful proof” did not work in those cases…Every mathematician has had days like this. You think you have a great insight, but then you go out for a walk, or you come back to the problem the next day, and you realize that your great idea has a flaw. Sometimes you can go back and fix it. And sometimes you can’t. 21. From MathWorld, “Fermat’s Last Theorem”: Much additional progress was made over the next 150 years, but no completely general result had been obtained. Buoyed by false confidence after his proof that pi is transcendental, the mathematician Lindemann proceeded to publish several proofs of Fermat’s Last Theorem, all of them invalid (Bell 1937[87ya], pp. 464–465). A prize of 100000 German marks, known as the Wolfskehl Prize, was also offered for the first valid proof (Ball and Coxeter 1987[37ya], p. 72; Barner 1997[27ya]; Hoffman 1998[26ya], pp. 193–194 and 199). A recent false alarm for a general proof was raised by Y. Miyaoka (Cipra 1988[36ya]) whose proof, however, turned out to be flawed. Other attempted proofs among both professional and amateur mathematicians are discussed by vos Savant (1993[31ya]), although vos Savant erroneously claims that work on the problem by Wiles (discussed below) is invalid. 22. 
To take a random example (which could be multiplied indefinitely), from Gödel and the Nature of Mathematical Truth: A Talk with Rebecca Goldstein (6.8.2005[19ya]): Einstein told the philosopher of science Hans Reichenbach that he’d known even before the solar eclipse of 1918[106ya] supported his general theory of relativity that the theory must be true because it was so beautiful. And Hermann Weyl, who worked on both relativity theory and quantum mechanics, said “My work always tried to unite the true with the beautiful, but when I had to choose one or the other, I usually chose the beautiful.”…Mathematics seems to be the one place where you don’t have to choose, where truth and beauty are always united. One of my all-time favorite books is A Mathematician’s Apology. G.H. Hardy tries to demonstrate to a general audience that mathematics is intimately about beauty. He gives as examples two proofs, one showing that the square root of 2 is irrational, the other showing that there’s no largest prime number. Simple, easily graspable proofs, that stir the soul with wonder. 23. Nathanson 2009[15ya] claims the opposite: Many mathematicians have the opposite opinion; they do not or cannot distinguish the beauty or importance of a theorem from its proof. A theorem that is first published with a long and difficult proof is highly regarded. Someone who, preferably many years later, finds a short proof is “brilliant.” But if the short proof had been obtained in the beginning, the theorem might have been disparaged as an “easy result.” Erdős was a genius at finding brilliantly simple proofs of deep results, but, until recently, much of his work was ignored by the mathematical establishment. 24. From “Aesthetics as a Liberating Force in Mathematics Education?”, by Nathalie Sinclair (reprinted in The Best Writing on Mathematics 2010, ed. Mircea Pitici); pg208: There is a long tradition in mathematics of describing proofs and theorems in aesthetic terms, often using words such as ‘elegance’ and ‘depth’. Further, mathematicians have also argued that their subject is more akin to an art than it is to a science (see Hardy, 1967; Littlewood, 1986[38ya]; Sullivan 1925[99ya]/1956[68ya]), and, like the arts, ascribe to mathematics aesthetic goals. For example, the mathematician W. Krull (1930/1987) writes: “the primary goals of the mathematician are aesthetic, and not epistemological” (p. 49). This statement seems contradictory with the oft-cited concern of mathematics with finding or discovering truths, but it emphasises the fact that the mathematician’s interest is in expressing truth, and in doing so in clever, simple, succinct ways. While Krull focuses on mathematical expression, the mathematician H. Poincaré (1908/1966) concerns himself with the psychology of mathematical invention, but he too underlines the aesthetic dimension of mathematics, not the logical. In Poincaré’s theory, a large part of a mathematician’s work is done at the subconscious level, where an aesthetic sensibility is responsible for alerting the mathematicians to the most fruitful and interesting of ideas. Other mathematicians have spoken of this special sensibility as well and also in terms of the way it guides mathematicians to choose certain problems.
This choice is essential in mathematics given that there exists no external reality against which mathematicians can decide which problems or which branches of mathematics are important (see von Neumann, 1947[77ya] [“The Mathematician”]): the choice involves human values and preference—and, indeed, these change over time, as exemplified by the dismissal of geometry by some prominent mathematicians in the early 20^th century (see Whiteley, 1999). ☆ Littlewood, 1986[38ya]: “The mathematician’s art of work”; in B. Bollobas (ed.), Littlewood’s miscellany, Cambridge University Press ☆ Sullivan 1925[99ya]/1956[68ya]: “Mathematics as an art”; in J. Newman (ed.), The world of mathematics, vol 3 (pp. 2015–2021) 25. From pg 211–212, Sinclair 2009[15ya]: The survey of mathematicians conducted by Wells (1990[34ya]) provides a more empirically-based challenge to the intrinsic view of the mathematical aesthetic. Wells obtained responses from over 80 mathematicians, who were asked to identify the most beautiful theorem from a given set of 24 theorems. (These theorems were chosen because they were ‘famous’, in the sense that Wells judged them to be well-known by most mathematicians, and of interest to the discipline in general, rather than to a particular subfield.) Wells finds that the mathematicians varied widely in their judgments. More interestingly, in explaining their choices, the mathematicians revealed a wide range of personal responses affecting their aesthetic responses to the theorems. Wells effectively puts to rest the belief that mathematicians have some kind of secret agreement on what counts as beautiful in mathematics…Burton’s (2004[20ya]) work focuses on the practices of mathematicians and their understanding of those practices. Based on extensive interviews with a wide range of mathematicians…She points out that mathematicians range on a continuum from unimportant to crucial in terms of their positionings on the role of the aesthetic, with only 3 of the 43 mathematicians dismissing its importance. For example, one said “Beauty doesn’t matter. I have never seen a beautiful mathematical paper in my life” (p. 65). Another mathematician was initially dismissive about mathematical beauty but later, when speaking about the review process, said: “If it was a very elegant way of doing things, I would be inclined to forgive a lot of faults” (p. 65). 26. The Silver & Veness 2010[14ya] adaptation of MCTS to POMDPs has some nice qualitative properties for describing human insight. It handles the uncertainty of hidden information by simply sampling a possible value from the prior, and then treating it as a known fact & searching/planning normally. So for example, it appears to be able to explain the many reports of ‘reversals’, where someone attacks a problem for years, getting nowhere, and then one day, frustrated or on a whim, decides to try assuming the opposite is true and solves it that day: this would correspond to a prior of ~100% on the original framing, sampling from that, searching futilely every time, until finally randomly drawing the negation, searching, and succeeding.↩︎ 27. Tao left a lengthy comment on a previously linked Lipton post: It is possible to gather reasonably convincing support for a conjecture by a variety of means, long before it is actually proven, though many mathematicians are reluctant to argue too strongly based on such support due to the lack of rigour or the risk of embarrassment in hindsight.
Examples of support include: ☆ Numerical evidence; but one has to be careful in situations where the null hypothesis would also give comparable numerical evidence. The first ten trillion zeroes of zeta on the critical line is, in my opinion, only mild evidence in favour of RH (the null hypothesis may be, for instance, that the zeroes go haywire once log log t becomes sufficiently large); but the numerical data on spacings of zeroes is quite convincing evidence for the GUE hypothesis, in my view. (It is a priori conceivable that the spacings are distributed according to GUE plus another correction that dominates when log log t (say) is large, but this begins to strain Occam’s razor.) ☆ Non-trivial special cases. But it depends on how “representative” one believes the special cases to be. For instance, if one can verify low-dimensional cases of a conjecture that is true in high dimensions, this is usually only weak (but not entirely insignificant) evidence, as it is possible that there exist high-dimensional pathologies that sink the conjecture but cannot be folded up into a low-dimensional situation. But if one can do all odd-dimensional cases, and all even-dimensional cases up to dimension 8 (say), then that begins to look more convincing. ☆ Proofs of parallel, analogous, or similar conjectures. Particularly if these proofs were non-trivial and led to new insights and techniques. RH in function fields is a good example here; it raises the hope of some sort of grand unified approach to GRH that somehow handles all number fields (or some other general class) simultaneously. ☆ Converse of the conjecture is provable, and looks “optimal” somehow. One might be able to construct a list of all obvious examples of objects with property X, find significant difficulty extending the list, and then conjecture that this list is complete. This is a common way to make conjectures, but can be dangerous, as one may simply have a lack of imagination. So this is thin evidence by itself (many false conjectures have arisen from this converse-taking method), but it does carry a little bit of weight once combined with other strands of evidence. ☆ Conjecture is ambitious and powerful, and yet is not immediately sunk by the obvious consistency checks. This is vaguely analogous to the concept of a “falsifiable theory” in science. A strong conjecture could have many powerful consequences in a variety of disparate areas of mathematics—so powerful, in fact, that one would not be surprised that they could be disproven with various counterexamples. But surprisingly, when one checks the cases that one does understand quite well, the conjecture holds up. A typical example here might include a very general conjectured identity which, when specialised to various well-understood special cases, become a provable identity—but with the identity in each special case being provable by very different methods, and the connection between all the identities being mysterious other than via the conjecture. The general conjecture that the primes behave pseudorandomly after accounting for small moduli is an example of such a conjecture; we usually can’t control how the primes behave, but when we can, the pseudorandomness heuristic lines up perfectly. ☆ Attempts at disproof run into interesting obstacles.
This one is a bit hard to formalise, but sometimes you can get a sense that attempts to disprove a conjecture are failing not due to one’s own lack of ability, or due to accidental contingencies, but rather due to “enemy activity”; some lurking hidden structure to the problem, corners of which emerge every time one tries to build a counterexample. The question is then whether this “enemy” is stupid enough to be outwitted by a sufficiently clever counterexample, or is powerful enough to block all such attempts. Identifying this enemy precisely is usually the key to resolving the conjecture (or transforming the conjecture into a stronger and better conjecture). ☆ Conjecture generalises to a broader conjecture that enjoys support of the types listed above. The twin prime conjecture, by itself, is difficult to support on its own; but when it comes with an asymptotic that one can then verify numerically to high accuracy and is a consequence of the much more powerful prime tuples conjecture (and more generally, the pseudorandomness heuristic for the primes) which is supported both because of its high falsifiability and also its nature as an optimal-looking converse (the only structure to the primes are the “obvious” structures), it becomes much more convincing. Another textbook example is the Poincaré conjecture, which became much more convincing after being interpreted as a special case of geometrisation (which had a lot of support, eg. the two-dimensional analogue, Haken manifolds, lots of falsifiable predictions, etc.). It can be fun (though a little risky, reputation-wise) to debate how strong various pieces of evidence really are, but one soon reaches a point of diminishing returns, as often we are limited by our own ignorance, lack of imagination, or cognitive biases. But we are at least reasonably able to perform relative comparisons of the strength of evidence of two conjectures in the same topic (I guess complexity theory is full of instances of this…). 28. pg190–191 of Fascinating Mathematical People, edited by Albers 2011[13ya]: Guy: If I do any mathematics at all I think I do it in my sleep. MP: Do you think a lot of mathematicians work that way? Guy: I do. Yes. The human brain is a remarkable thing, and we are a long way from understanding how it works. For most mathematical problems, immediate thought and pencil and paper—the usual things one associates with solving mathematical problems—are just totally inadequate. You need to understand the problem, make a few symbols on paper and look at them. Most of us, as opposed to Erdős who would probably give an answer to a problem almost immediately, would then probably have to go off to bed, and, if we’re lucky, when we wake up in the morning, we would already have some insight into the problem. On those rare occasions when I have such insight, I quite often don’t know that I have it, but when I come to work on the problem again, to put pencil to paper, somehow the ideas just seem to click together, and the thing goes through. It is clear to me that my brain must have gone on, in an almost combinatorial way, checking the cases or doing an enormous number of fairly trivial arithmetical computations. It seems to know the way to go. I first noticed this with chess endgames, which are indeed finite combinatorial problems. The first indication that I was interested in combinatorics—I didn’t know I had the interest, and I didn’t even know there was such a subject as combinatorics—was that I used to compose chess endgames. 
I would sit up late into the night trying to analyze a position. Eventually I would sink into slumber and wake up in the morning to realize that if I had only moved the pawns over one file the whole thing would have gone through clearly. My brain must have been checking over this finite but moderately large number of possibilities during the night. I think a lot of mathematicians must work that way. MP: Have you talked to any other mathematicians about that? Guy: No. But in Jacques Hadamard’s book on invention in the mathematical field, he quotes some examples there where it is fairly clear that people do that kind of thing. There was someone earlier this week who was talking about Jean-Paul Serre. He said that if you ask Serre a question he either gives you the answer immediately, or, if he hesitates, and you push him in any way, he will say, “How can I think about the question when I don’t know the answer?” I thought that was a lovely remark. At a much lower level, one should think, “What shape should the answer be?” Then your mind can start checking whether you’re right and how to find some logical sequence to get you where you want to go. 29. January 14, 1974[50ya], in “Conversations with Gian-Carlo Rota”; as quoted on pg262 of Turing’s Cathedral (2012[12ya]) by George Dyson: Once in my life I had a mathematical dream which proved correct. I was twenty years old. I thought, my God, this is wonderful, I won’t have to work, it will all come in dreams! But it never happened again. Once after I had spent several days trying to prove a topology theorem, I dreamed about it and woke up with as counterexample. In the dream it just constructed itself, and I could see it. I didn’t have a fever then, though. Later one of my teachers, an old Polish woman, explained her experience. She kept a notebook by her bed so she could write down any insights she got in her sleep. She woke up in the night with a wonderful proof, and wrote it down, and in the morning when she looked at it it was all garbage. “You cannot do math in your sleep. You will have to 31. “Higher algebraic K-theory of schemes and of derived categories”, Thomason & Trobaugh 1990[34ya]: The first author must state that his coauthor and close friend, Tom Trobaugh, quite intelligent, singularly original, and inordinately generous, killed himself consequent to endogenous depression. 94 days later, in my dream, Tom’s simulacrum remarked, “The direct limit characterization of perfect complexes shows that they extend, just as one extends a coherent sheaf.” Awaking with a start, I knew this idea had to be wrong, since some perfect complexes have a non-vanishing K[0] obstruction to extension. I had worked on this problem for 3 years, and saw this approach to be hopeless. But Tom’s simulacrum had been so insistent, I knew he wouldn’t let me sleep undisturbed until I had worked out the argument and could point to the gap. This work quickly led to the key results of this paper. To Tom, I could have explained why he must be listed as a coauthor. 32. Jacques Hadamard, An Essay on the Psychology of Invention in the Mathematical Field (1945[79ya]), pg27 Let us come to mathematicians. One of them, Maillet, started a first inquiry as to their methods of work. One famous question, in particular, was already raised by him that of the “mathematical dream”, it having been suggested often that the solution of problems that have defied investigation may appear in dreams. 
Though not asserting the absolute non-existence of “mathematical dreams”, Maillet’s inquiry shows that they cannot be considered as having a serious importance. Only one remarkable observation is reported by the prominent American mathematician, Leonard Eugene Dickson, who can positively assert its accuracy…Except for that very curious case, most of the 69 correspondents who answered Maillet on that question never experienced any mathematical dream (I never did) or, in that line, dreamed of wholly absurd things, or were unable to state precisely the question they happened to dream of. 5 dreamed of quite naive arguments. There is one more positive answer; but it is difficult to take account of it, as its author remains anonymous. 33. From his 1993[31ya] “How to Write a Proof”: Anecdotal evidence suggests that as many as a third of all papers published in mathematical journals contain mistakes—not just minor errors, but incorrect theorems and proofs…My information about mathematicians’ errors and embarrassment comes mainly from George Bergman. 34. 1993 “How to Write a Proof”: Some twenty years ago, I decided to write a proof of the Schroeder-Bernstein theorem for an introductory mathematics class. The simplest proof I could find was in Kelley’s classic general topology text [4, page 28]. Since Kelley was writing for a more sophisticated audience, I had to add a great deal of explanation to his half-page proof. I had written five pages when I realized that Kelley’s proof was wrong. Recently, I wanted to illustrate a lecture on my proof style with a convincing incorrect proof, so I turned to Kelley. I could find nothing wrong with his proof; it seemed obviously correct! Reading and rereading the proof convinced me that either my memory had failed, or else I was very stupid twenty years ago. Still, Kelley’s proof was short and would serve as a nice example, so I started rewriting it as a structured proof. Within minutes, I rediscovered the error. My interest in proofs stems from writing correctness proofs of algorithms. These proofs are seldom deep, but usually have considerable detail. Structured proofs provided a way of coping with this detail. The style was first applied to proofs of ordinary theorems in a paper I wrote with Martin Abadi [2]. He had already written conventional proofs|proofs that were good enough to convince us and, presumably, the referees. Rewriting the proofs in a structured style, we discovered that almost every one had serious mistakes, though the theorems were correct. Any hope that incorrect proofs might not lead to incorrect theorems was destroyed in our next collaboration [1]. Time and again, we would make a conjecture and write a proof sketch on the blackboard—a sketch that could easily have been turned into a convincing conventional proof—only to discover, by trying to write a structured proof, that the conjecture was false. Since then, I have never believed a result without a careful, structured proof. My skepticism has helped avoid numerous errors. “How to Write a 21^st Century Proof”, Lamport 2011[13ya]: My earlier paper on structured proofs described how effective they are at catching errors. It recounted how only by writing such a proof was I able to re-discover an error in a proof of the Schroeder-Bernstein theorem in a well-known topology text [2, page 28]. I recently received email from a mathematician saying that he had tried unsuccessfully to find that error by writing a structured proof. 
I asked him to send me his proof, and he responded: I tried typing up the proof that I’d hand-written, and in the process, I think I’ve found the fundamental error. . . I now really begin to understand what you mean about the power of this method, even if it did take me hours to get to this point! It is instructive that, to find the error, he had to re-write his proof to be read by someone else. Eliminating errors requires care. 35. “How Did Software Get So Reliable Without Proof?”, C.A.R. Hoare 1996[28ya]: Twenty years ago it was reasonable to predict that the size and ambition of software products would be severely limited by the unreliability of their component programs. Crude estimates suggest that professionally written programs delivered to the customer can contain between one and ten independently correctable errors per thousand lines of code; and any software error in principle can have spectacular effect (or worse: a subtly misleading effect) on the behavior of the entire system. Dire warnings have been issued..The arguments were sufficiently persuasive to trigger a significant research effort devoted to the problem of program correctness. A proportion of this research was based on the ideal of certainty achieved by mathematical proof. 36. Hoare 1996[28ya]: Fortunately, the problem of program correctness has turned out to be far less serious than predicted. A recent analysis by Mackenzie has shown that of several thousand deaths so far reliably attributed to dependence on computers, only ten or so can be explained by errors in the software: most of these were due to a couple of instances of incorrect dosage calculations in the treatment of cancer by radiation. Similarly predictions of collapse of software due to size have been falsified by continuous operation of real-time software systems now measured in tens of millions of lines of code, and subjected to thousands of updates per year…And aircraft, both civil and military, are now flying with the aid of software measured in millions of lines—though not all of it is safety-critical. Compilers and operating systems of a similar size now number their satisfied customers in millions. So the questions arise: why have twenty years of pessimistic predictions been falsified? Was it due to successful application of the results of the research which was motivated by the predictions? How could that be, when clearly little software has ever has been subjected to the rigours of formal proof? 37. Hoare 1996[28ya]: Success in the use of mathematics for specification, design and code reviews does not require strict formalisation of all the proofs. Informal reasoning among those who are fluent in the idioms of mathematics is extremely efficient, and remarkably reliable. It is not immune from failure; for example simple misprints can be surprisingly hard to detect by eye. Fortunately, these are exactly the kind of error that can be removed by early tests. More formal calculation can be reserved for the most crucial issues, such as interrupts and recovery procedures, where bugs would be most dangerous, expensive, and most difficult to diagnose by tests…Many more tests should be designed than there will ever be time to conduct; they should be generated as directly as possible from the specification, preferably automatically by computer program. Random selection at the last minute will protect against the danger that under pressure of time the program will be adapted to pass the tests rather than meeting the rest of its specification. 
There is some evidence that early attention to a comprehensive and rigorous test strategy can improve reliability of a delivered product, even when at the last minute there was no time to conduct the tests before delivery! 38. The missing steps may be quite difficult to fully prove, though; Nathanson 2009[15ya]: There is a lovely but probably apocryphal anecdote about Norbert Wiener. Teaching a class at MIT, he wrote something on the blackboard and said it was ‘obvious.’ One student had the temerity to ask for a proof. Weiner started pacing back and forth, staring at what he had written on the board and saying nothing. Finally, he left the room, walked to his office, closed the door, and worked. After a long absence he returned to the classroom. ‘It is obvious’, he told the class, and continued his lecture. 39. What conditions count as full scrutiny by the math community may not be too clear; Nathanson 2009[15ya] trenchantly mocks math talks: Social pressure often hides mistakes in proofs. In a seminar lecture, for example, when a mathematician is proving a theorem, it is technically possible to interrupt the speaker in order to ask for more explanation of the argument. Sometimes the details will be forthcoming. Other times the response will be that it’s “obvious” or “clear” or “follows easily from previous results.” Occasionally speakers respond to a question from the audience with a look that conveys the message that the questioner is an idiot. That’s why most mathematicians sit quietly through seminars, understanding very little after the introductory remarks, and applauding politely at the end of a mostly wasted hour. 40. “Why Did Lagrange ‘Prove’ the Parallel Postulate?”, Grabiner 2009[15ya]: It is true that Lagrange never did publish it, so he must have realized there was something wrong. In another version of the story, told by Jean-Baptiste Biot, who claims to have been there (though the minutes do not list his name), everybody there could see that something was wrong, so Lagrange’s talk was followed by a moment of complete silence [2, p. 84]. Still, Lagrange kept the manuscript with his papers for posterity to read. Why work on it at all? The historical focus on the fifth postulate came because it felt more like the kind of thing that gets proved. It is not self-evident, it requires a diagram even to explain, so it might have seemed more as though it should be a theorem. In any case, there is a tradition of attempted proofs throughout the Greek and then Islamic and then eighteenth-century mathematical worlds. Lagrange follows many eighteenth-century mathematicians in seeing the lack of a proof of the fifth postulate as a serious defect in Euclid’s Elements. But Lagrange’s criticism of the postulate in his manuscript is unusual. He said that the assumptions of geometry should be demonstrable “just by the principle of contradiction”-the same way, he said, that we know the axiom that the whole is greater than the part [32, p. 30R]. The theory of parallels rests on something that is not self-evident, he believed, and he wanted to do something about this. What was the strange and alien to the modern mind approach that Lagrange used? Recall that Lagrange said in this manuscript that axioms should follow from the principle of contradiction. But, he added, besides the principle of contradiction, “There is another principle equally self-evident,” and that is Leibniz’s principle of sufficient reason. 
That is: nothing is true “unless there is a sufficient reason why it should be so and not otherwise” [42, p. 31; italics added]. This, said Lagrange, gives as solid a basis for mathematical proof as does the principle of contradiction [32, p. 30V]. But is it legitimate to use the principle of sufficient reason in mathematics? Lagrange said that we are justified in doing this, because it has already been done. For example, Archimedes used it to establish that equal weights at equal distances from the fulcrum of a lever balance. Lagrange added that we also use it to show that three equal forces acting on the same point along lines separated by a third of the circumference of a circle are in equilibrium [32, pp. 31R-31V]…The modern reader may object that Lagrange’s symmetry arguments are, like the uniqueness of parallels, equivalent to Euclid’s postulate. But the logical correctness, or lack thereof, of Lagrange’s proof is not the point. (In this manuscript, by the way, Lagrange went on to give an analogous proof—also by the principle of sufficient reason-that between two points there is just one straight line, because if there were a second straight line on one side of the first, we could then draw a third straight line on the other side, and so on [32, pp. 34R-34V]. Lagrange, then, clearly liked this sort of argument.) …Why did philosophers conclude that space had to be infinite, homogeneous, and the same in all directions? Effectively, because of the principle of sufficient reason. For instance, Giordano Bruno in 1600[424ya] argued that the universe must be infinite because there is no reason to stop at any point; the existence of an infinity of worlds is no less reasonable than the existence of a finite number of them. Descartes used similar reasoning in his Principles of Philosophy: “We recognize that this world. . . has no limits in its extension. . . . Wherever we imagine such limits, we . . . imagine beyond them some indefinitely extended space” [28, p. 104]. Similar arguments were used by other seventeenth-century authors, including Newton. Descartes identified space and the extension of matter, so geometry was, for him, about real physical space. But geometric space, for Descartes, had to be Euclidean…Descartes, some 50 years before Newton published his first law of motion, was a co-discoverer of what we call linear inertia: that in the absence of external influences a moving body goes in a straight line at a constant speed. Descartes called this the first law of nature, and for him, this law follows from what we now recognize as the principle of sufficient reason. Descartes said, “Nor is there any reason to think that, if [a part of matter] moves. . . and is not impeded by anything, it should ever by itself cease to move with the same force” [30, p. 75]…Leibniz, by contrast, did not believe in absolute space. He not only said that spatial relations were just the relations between bodies, he used the principle of sufficient reason to show this. If there were absolute space, there would have to be a reason to explain why two objects would be related in one way if East is in one direction and West in the opposite direction, and related in another way if East and West were reversed [24, p. 147]. Surely, said Leibniz, the relation between two objects is just one thing! But Leibniz did use arguments about symmetry and sufficient reason-sufficient reason was his principle, after all. 
Thus, although Descartes and Leibniz did not believe in empty absolute space and Newton did, they all agreed that what I am calling the Euclidean properties of space are essential to physics. …In his 1748[276ya] essay “Reflections on Space and Time”, Euler argued that space must be real; it cannot be just the relations between bodies as the Leibnizians claim [10]. This is because of the principles of mechanics—that is, Newton’s first and second laws. These laws are beyond doubt, because of the “marvelous” agreement they have with the observed motions of bodies. The inertia of a single body, Euler said, cannot possibly depend on the behavior of other bodies. The conservation of uniform motion in the same direction makes sense, he said, only if measured with respect to immovable space, not to various other bodies. And space is not in our minds, said Euler; how can physics-real physics-depend on something in our minds?…in his Critique of Pure Reason of 1781[243ya], Kant placed space in the mind nonetheless. We order our perceptions in space, but space itself is in the mind, an intuition of the intellect. Nevertheless, Kant’s space turned out to be Euclidean too. Kant argued that we need the intuition of space to prove theorems in geometry. This is because it is in space that we make the constructions necessary to prove theorems. And what theorem did Kant use as an example? The sum of the angles of a triangle is equal to two right angles, a result whose proof requires the truth of the parallel postulate [26, “Of space,” p. 423]…Lagrange himself is supposed to have said that spherical trigonometry does not need Euclid’s parallel postulate [4, pp. 52–53]. But the surface of a sphere, in the eighteenth-century view, is not non-Euclidean; it exists in 3-dimensional Euclidean space [20, p. 71]. The example of the sphere helps us see that the eighteenth-century discussion of the parallel postulate’s relationship to the other postulates is not really about what is logically possible, but about what is true of real space. The final step: Johann Heinrich Lambert was one of the mathematicians who worked on the problem of Postulate 5. Lambert explicitly recognized that he had not been able to prove it, and considered that it might always have to remain a postulate. He even briefly suggested a possible geometry on a sphere with an imaginary radius. But Lambert also observed that the parallel postulate is related to the law of the lever [20, p. 75]. He said that a lever with weightless arms and with equal weights at equal distances is balanced by a force in the opposite direction at the center equal to the sum of the weights, and that all these forces are parallel. So either we are using the parallel postulate, or perhaps, Lambert thought, some day we could use this physical result to prove the parallel postulate…These men did not want to do mechanics, as, say, Newton had done. They wanted to show not only that the world was this way, but that it necessarily had to be. A modern philosophical critic, Helmut Pulte, has said that Lagrange’s attempt to “reduce” mechanics to analysis strikes us today as “a misplaced endeavour to mathematize. . . an empirical science, and thus to endow it with infallibility” [39, p. 220]. Lagrange would have responded, “Right! That’s just exactly what we are all doing.” Much of CS theory would disappear. In my own research some of Ken’s and my “best” results would survive, but many would be destroyed. The Karp-Lipton Theorem is gone in this world. 
Ditto all “dichotomy” results between P and NP-complete, and for P = #P, Jin-Yi’s similar work. Many barrier results, such as oracle theorems and natural proofs, lose their main motivation, while much fine structure in hardness-versus-randomness tradeoffs would be blown up. The PCP Theorem and all the related work is gone. Modern cryptography could survive if the algorithm were galactic, but otherwise would be in trouble. I am currently teaching Complexity Theory at Tech using the textbook by Sanjeev Arora and Boaz Barak…Most of the 573 pages of Arora-Barak would be gone: ☆ Delete all of chapter 3 on NP. ☆ Delete all of chapter 5 on the polynomial hierarchy. ☆ Delete most of chapter 6 on circuits. ☆ Delete all of chapter 7 on probabilistic computation. ☆ Mark as dangerous chapter 9 on cryptography. ☆ Delete most of chapter 10 on quantum computation—who would care about Shor’s algorithm then? ☆ Delete all of chapter 11 on the PCP theorem. I will stop here. Most of the initial part of the book is gone. The same for much of Homer-Selman, and basically all of the “Reducibility and Completeness” CRC chapter.
Measuring Flexible Prices, Flexible Output and Marginal Costs using Survey Data - ESCoE

By Michael J. Mahony

The aim of our new ESCoE paper “Measuring Flexible Prices, Flexible Output and Marginal Costs using Survey Data” is to exploit survey microdata so as to provide new measures of flexible prices, flexible output and marginal costs for the UK. The methods we provide in this paper can straightforwardly be applied to other countries, given the right data. We are grateful to the Confederation of British Industry (CBI) who kindly provided their survey microdata for the Industrial Trends Survey (ITS), Service Sector Survey (SSS) and Distributive Trades Survey (DTS). These surveys cover the manufacturing, services and distributive trades industrial sectors – which constitute more than 90% of UK private sector activity. For our purposes, the key strength of these surveys is that they directly ask firms if their selling price, level of output or average costs have changed over the previous quarter.

In terms of prices, from the survey microdata we are able to observe the proportion of firms which change their selling price each quarter (and the corresponding proportion of firms that don’t). This measure is a key variable in the workhorse of modern monetary economics, the New Keynesian Model, and is often assumed fixed (i.e. doesn’t change from quarter to quarter). Our work shows this assumption is not justified – in fact there is substantial variation from quarter to quarter across the three industrial sectors we examine (as well as the disaggregated primary and secondary manufacturing sectors). As an example, see Figure 1, which graphs the proportion of firms in the manufacturing sector that changed their prices in the previous quarter (denoted λ^p[t]) alongside the standard assumed fixed value of 0.25 (denoted λ^p). In each sector examined in the paper, the proportion of firms adjusting their price each quarter typically exceeds 0.25. This is noticeably true in the distributive trades sector.

Figure 1: The Proportion of Firms Adjusting Prices in the Previous Quarter (Manufacturing Sector)
Note: Using notation from the paper, λ^p[t] is the proportion of firms adjusting prices in the previous quarter and λ^p is an assumed fixed proportion of firms adjusting prices (set at 0.25).

While this insight is informative in its own right, its main significance is in allowing an accurate construction of a flexible price level for each industrial sector. In the paper, we derive our measure of flexible prices via a straightforward decomposition (using first principles) of the aggregate price level. Given sticky prices, in any period the aggregate price level is a weighted average of the flexible price (chosen by firms changing their prices) and the previous period’s price level (for those firms which do not change prices) – where the weights are determined by the proportion of firms adjusting and not adjusting their price each period (respectively). Given that we can directly and accurately measure the proportion of firms changing and not changing their prices, we can thus compute the flexible price index for each industrial sector. It is also worth noting that our straightforward decomposition is consistent with the microfoundations of price-setting firms in a monopolistically competitive market.

In each industrial sector our derived flexible price indices are more volatile than the corresponding actual price index. In fact, the flexible price index amplifies the underlying volatility in the price level.
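To make the decomposition concrete, here is a minimal sketch in the notation used above, where p[t] is the aggregate price level, q[t] the flexible price and λ^p[t] the measured proportion of adjusting firms; it illustrates the idea rather than reproducing the paper’s exact index construction:

p[t] = λ^p[t] × q[t] + (1 − λ^p[t]) × p[t−1]

so that, rearranging for the flexible price,

q[t] = ( p[t] − (1 − λ^p[t]) × p[t−1] ) / λ^p[t]

Because the measured λ^p[t] is below one, dividing by it scales up quarter-to-quarter movements in the observed price level, which is why the derived flexible price index is the more volatile of the two series.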
This amplification can be seen in Figure 2, which for the manufacturing sector graphs the flexible price index (denoted q[t]) and the actual price level (denoted p[t]). In the paper we also present and discuss some alternative flexible price iterations – such as allowing firms to index prices in the straightforward decomposition, using the Atlanta Fed methodology and assuming a fixed proportion (25%) of firms adjust prices each period.

Figure 2: The Flexible Price Index (Manufacturing Sector)
Note: Using notation from the paper, q[t] is the flexible price index and p[t] is the observed price level.

The decomposition methodology applied to prices is also applied to output in the paper. Again, the ability to directly measure the proportion of firms which change their output each quarter is used to derive a flexible output index. Once more, the flexible output index amplifies the volatility of the output series. However, there is an additional benefit to being able to directly measure the proportion of firms changing their output – given data on changes in average costs (which the CBI dataset has), we can construct a direct measure of marginal costs. Making no assumption regarding functional form, average costs equal the sum of fixed costs and variable costs divided by output. We show that, focusing on firms where output remains unchanged, the change in average costs is proportionate to the change in variable costs – which, given that output remains unchanged, represents marginal costs. Intuitively, if firms are facing higher (lower) average costs without changing their output quantities, then they must be facing increased (decreased) marginal costs.

These new direct measures of marginal cost closely track inflation, with strong positive correlations being observed. As an example, see Figure 3, which (for the manufacturing sector) depicts marginal costs (denoted φ[t]) and inflation (denoted π[t]).

Figure 3: Marginal Costs and Inflation (Manufacturing Sector)
Note: Using notation from the paper, φ[t] is marginal cost and π[t] is inflation.

Ultimately the key contribution of this paper is that important economic variables which have typically gone unmeasured have now been provided for a large section of the UK economy. Read the full ESCoE discussion paper here.

ESCoE blogs are published to further debate. Any views expressed are solely those of the author(s) and so cannot be taken to represent those of the ESCoE, its partner institutions or the Office for National Statistics.
How to calculate future-value interest factor of an annuity

• Compound Interest: The future value (FV) of an investment of present value …
• Future Value (FV) of an Annuity Components: Let … where R = payment, r = rate of interest, …
• Compound Interest's Factors; Compound Interest & Effective Rate; Mortgage example, with your own case-information, and then click on the Calculate …
• Future Value Factor for an Ordinary Annuity (Interest rate = r, Number of periods = n): a table of factors with rows indexed by n and columns by r, from 1% to 14% …
• HP 10b Calculator - Calculating the Present and Future Values of an Annuity that Increases at a Constant Rate at Equal … Calculates a factor interest rate.
• Normal annuity is no different, because all we have to do is calculate PV of FV for each of … These are called Present Value Interest Factors Annuity, or PVIFA.
• Present Value: value today of a future cash flow. Discount Rate: interest rate used to compute present values of future cash flows. Discount Factor: present value …
• The first column (n) refers to the number of recurring identical payments (or periods) in an annuity. The other columns contain the factors for the interest rate (i) …
• This is also called discounting. The present value of a future cash-flow represents the amount of money today, which, if invested at a particular interest rate, will …
• What is Future Value of An Annuity? Using the above example, if you were to invest each of the $100 annual payments at a compounding interest rate (earning …
• Enter the interest rate, the number of periods and a single cash flow value. Press the "Calculate" button to calculate the Future Value Annuity Factor (FVAF).
• Present value (also known as discounting) determines the current worth of cash … value of an ordinary annuity table provides the necessary factor to determine …
• FVIFA is the abbreviation of the future value interest factor of an annuity. It is a factor that can be used to calculate the future value of a series of annuities.
• Calculate Future Value Annuity Factor (FVAF): Enter the interest rate, the number of periods and a single cash flow value. Press the "Calculate" button to calculate the Future Value Annuity Factor.
• The future value interest factor for an annuity is used in this calculation: … FVIFA_i, FV, PMT … Amortizing a loan into equal annual payments involves …
• Note that, all other factors being equal, the future value of an annuity due is … This equation is valid for a perpetuity with level payments, positive interest rate r.
Following is the formula for finding the future value of an ordinary annuity:

FVA = P * ((1 + i)^n - 1) / i

where:
FVA = Future value
P = Periodic payment amount
n = Number of payments
i = Periodic interest rate per payment period

See a periodic interest calculator for conversion of nominal annual rates to periodic rates. Future Value of Annuity Calculator is an online investment returns assessment tool to determine the time value of money. Annuity value, interest rate and time period are the key factors to figure out the future value of an annuity. The term future value of annuity is used in investment plans to describe an amount that will not exist until the … This future value of annuity calculator estimates the value (FV) of a series of fixed future annuity payments at a specific interest rate and for a number of periods over which the interest is compounded (either an ordinary annuity or an annuity due). Future Value Annuity Calculator to Calculate Future Value of Ordinary Annuity or Annuity Due: this online Future Value Annuity Calculator will calculate how much a series of equal cash flows will be worth after a specified number of years, at a specified compounding interest rate. Calculate the future value of an annuity due, ordinary annuity and growing annuities with optional compounding and payment frequency. Annuity formulas and derivations for future value based on FV = (PMT/i) * [(1+i)^n - 1] * (1+iT), including continuous compounding.

• PVAF - Find Corresponding Interest Rate For a Given Time Period And PVAF Value. Enter the time period value and the PVAF Value below. Press the "Calculate" button to find the corresponding interest rate associated with this Present Value Annuity Factor (PVAF).
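As a quick, self-contained illustration of the formulas quoted above (a generic sketch in plain Python, not tied to any particular calculator referenced on this page), the following computes the future value interest factor of an annuity (FVIFA) and the future value of an ordinary annuity or an annuity due:

def fvifa(i, n):
    # Future value interest factor of an annuity: ((1 + i)^n - 1) / i
    return ((1.0 + i) ** n - 1.0) / i

def future_value_annuity(pmt, i, n, due=False):
    # Future value of n payments of pmt at periodic rate i (annuity due if due=True)
    fv = pmt * fvifa(i, n)
    return fv * (1.0 + i) if due else fv

# Example: 100 per period for 10 periods at 5% per period
print(fvifa(0.05, 10))                                # about 12.5779
print(future_value_annuity(100, 0.05, 10))            # about 1257.79 (ordinary annuity)
print(future_value_annuity(100, 0.05, 10, due=True))  # about 1320.68 (annuity due)

Note that the rate i here is the periodic rate, so a nominal annual rate must first be converted to a periodic rate (for example, divided by the number of compounding periods per year) before being passed in, as the page's own snippets point out.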
Geometrical Concepts and Properties MCQs: Quiz Questions and Answers (Test 1)

Class 6 Math MCQs - Chapter 8: Geometrical Concepts and Properties Multiple Choice Questions (MCQs)

The Geometrical Concepts and Properties Multiple Choice Questions (MCQs) with Answers cover Chapter 8 of the Grade 6 Math course: supplementary angles, types of angles, lines, rays and segments, and the Cartesian plane. Sample MCQ with the answer listed first: if two angles are said to be supplementary angles and one of the angles is 122°, then the other angle is 58°; 35°; 60°; 32°. The Geometrical Concepts and Properties MCQs App is also available for download on the App Store and Play Store.

Geometrical Concepts and Properties MCQs with Answers: Quiz 1

MCQ 1: If two angles are said to be supplementary angles and one of the angles is 122°, then the other angle is
1. 35°
2. 58°
3. 60°
4. 32°

MCQ 2: The angle which is less than 360° and larger than 180° is classified as
1. acute angle
2. obtuse angle
3. right angle
4. reflex angle

MCQ 3: The angle which is equal to 90° is classified as
1. acute angle
2. obtuse angle
3. right angle
4. reflex angle

MCQ 4: If the line segment is extended in two directions indefinitely from each of the two points then it is classified as
1. intersecting line
2. plane
3. line
4. ray

MCQ 5: The flat surface in which two points are joined by using a straight line is classified as
1. line
2. ray
3. intersecting line
4. plane
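As a quick check of the first question above (a worked line added here for clarity, not part of the original quiz): supplementary angles sum to 180°, so the missing angle is

180° − 122° = 58°

which is why 58° is the listed answer.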
What’s new?

This page gives some of the key improvements in each galpy version. See the HISTORY.txt file in the galpy source for full details on what is new and different in each version.

Version 1.10 is a largely incremental update to version 1.9, with a few new features, bug fixes, and small improvements to the documentation. The major new additions are:

• Added KuzminLikeWrapperPotential, a potential wrapper that allows one to make a Kuzmin-like or Miyamoto-Nagai-like potential out of any spherical or axisymmetric potential (evaluated in the plane, i.e., treated as a spherical potential). Kuzmin-like potentials are obtained by replacing the spherical radius r with \(\sqrt{R^2 + (a + |z|)^2}\), while Miyamoto-Nagai-like potentials are obtained by replacing the spherical radius with \(\sqrt{R^2 + (a + \sqrt{z^2 + b^2})^2}\). The standard KuzminDiskPotential and MiyamotoNagaiPotential are obtained by applying this procedure to a point-mass potential, and the Kuzmin/Miyamoto-Nagai-like potentials generalize this to any spherical potential.

• Allow vector inputs of solar parameters to Orbit initialization: ro, zo, vo, and solarmotion. Useful when sampling over the uncertainty in the solar parameters.

• Increased support for using OpenMP with clang and, in particular, added OpenMP support in the released Mac wheels.

• Converted all docstrings to numpy-style format with the help of GitHub Copilot.

Version 1.9 contains two major new additions since version 1.8, a bunch of small updates and bug fixes, and a few deprecations. The major new additions are:

• Specialized support for calculating and displaying surfaces of section for orbits. Surfaces of section for 3D and 2D potentials can now be computed using a dedicated integration technique Orbit.SOS or using a brute-force technique Orbit.bruteSOS (for orbits for which the dedicated technique fails, e.g., orbits in a rotating frame). There is also support for directly plotting surfaces of section in Orbit.plotSOS and Orbit.plotBruteSOS. See Surfaces of section for more info.

• There is now general support for action-angle and reverse action-angle transformations for 1D potentials using the galpy.actionAngle.actionAngleVertical and galpy.actionAngle.actionAngleVerticalInverse classes. The inverse transformation is computed using a robust variation of the torus-mapper.

Important deprecations are:

• galpy.util.bovy_coords, galpy.util.bovy_plot, and galpy.util.bovy_conversion have now been removed in favor of their counterparts without the bovy_ prefix.

• phiforce and associated Potential methods and functions have been removed in favor of phitorque and associated methods and functions.

Other user-facing improvements and additions are:

• Potential classes, methods, and functions:

□ Made the potential input explicitly positional-only for all galpy.potential functions to avoid errors when specifying it as a keyword argument.

□ Added general support for DissipativeForce instances in 2D.

□ Implemented NonInertialFrameForce in 2D.

□ Allow potentials’ density and surface density to be plotted on physical axes.

• New and improved Orbit methods:

□ Added Orbit.animate3d to display a 3D animation of an integrated orbit with an optional Milky-Way representation at the origin when plotting x,y,z.

□ Improved the performance of Orbit.animate by using webgl and made some UI tweaks. Also fixed using Orbit.animate in jupyterlab and retrolab.
• Improvements to spherical distribution functions:
□ Made it possible to use an interpSphericalPotential as the potential in spherical distribution functions.
□ Added a method, dMdE, to calculate the differential energy distribution of spherical distribution functions.

Version 1.8 contains two big new features and a variety of smaller improvements described below. In addition to this, version 1.8 is also the first version to fully drop Python 2.7 support (and, thus, all Python 2 support; note that Python 2 was already almost not supported before). Version 1.8 also represents the start of a new release cycle, in which we will attempt to release a new major version 1.x every year around July 1 and have two minor version releases at roughly four-month intervals in between (so around November 1 and March 1). Major releases will include this overview of what's new since the last major version release.

Major new features:
• galpy now allows for a very general set of fictitious forces that arise when working in a non-inertial reference frame through the new potential class NonInertialFrameForce. The main driver for this new addition is to include the effect of the Milky Way's barycenter acceleration due to the effect of the Large Magellanic Cloud on the orbits of stars, satellite galaxies, and star clusters in the Milky Way. How this can be done exactly is explained in the Example: Including the Milky Way center's barycentric acceleration due to the Large Magellanic Cloud in orbit integrations section. But a much more general set of non-inertial reference frames is supported: any combination of barycenter acceleration and arbitrary rotations. See Orbit integration in non-inertial frames for some more info.
• A particle-spray technique for generating mock stellar streams has been added as galpy.df.streamspraydf. This roughly follows the Fardal et al. (2015) implementation, with some notable additions (e.g., the ability to generate a stream around the center of an orbiting satellite). The full galpy implementation is described in Qian et al. (2022). A minimal usage sketch is included below.

Other user-facing improvements and additions are:
• Potential classes, methods, and functions:
□ Renamed phiforce -> phitorque everywhere (including potential.evaluatephiforces and potential.evaluateplanarphiforces), such that the method's name actually reflects what it returns (a torque, not a force). phiforce will be fully removed in version 1.9 and may later be re-used for the actual phi component of the force, so switch to the new name now.
□ Added SCFPotential.from_density to directly initialize an SCFPotential based on a density function. Allows for fully correct and consistent handling of Quantity inputs and outputs.
□ Added TimeDependentAmplitudeWrapperPotential for adding arbitrary time-dependence to the amplitude of any Potential/Force.
□ Added NullPotential, a Potential with a constant value (useful, e.g., to adjust the zero point of a potential, or for testing code in the absence of forces).
□ Added Potential methods/functions rE and LcE to compute the radius and angular momentum of an orbit with energy E. Also added these as Orbit methods for efficient calculation for collections of orbits.
□ Added the offset= keyword to RotateAndTiltWrapperPotential, which allows a Potential/Force instance to also be offset from (0,0,0) in addition to being rotated or tilted.
• New and improved Orbit methods:
□ Added a progress bar when integrating multiple objects in a single orbit instance (requires tqdm).
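A minimal sketch of the particle-spray interface mentioned above. The keyword names (progenitor=, pot=, tdisrupt=) follow the documented galpy.df.streamspraydf API, but the progenitor orbit and mass used here are purely illustrative and should be replaced with values appropriate to the cluster being modeled.

from astropy import units as u
from galpy.df import streamspraydf
from galpy.orbit import Orbit
from galpy.potential import MWPotential2014

prog = Orbit([1.6, 0.1, 0.9, 0.2, 0.1, 0.0])   # illustrative progenitor orbit (internal units)
spdf = streamspraydf(2e4 * u.Msun,             # progenitor mass
                     progenitor=prog,
                     pot=MWPotential2014,
                     tdisrupt=4.5 * u.Gyr)
stream = spdf.sample(n=300)                    # returns an Orbit instance with 300 stream stars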
□ Added rE and LcE for the efficient computation of the radius and angular momentum of an orbit with energy E (this is efficient for many orbits in a single Orbit instance; see above).
□ Updated existing and added new phase-space positions for MW satellite galaxies from Pace et al. (2022).
□ Updated existing and added new phase-space positions for MW globular clusters from Baumgardt et al. (2019), Vasiliev & Baumgardt (2021), and Baumgardt & Vasiliev (2021).
□ Allow actions to be computed for Orbit instances with actionAngle methods that don't compute frequencies.
• Updated spherical distribution functions:
□ Added the necessary derivatives to allow spherical DFs to be constructed using PowerSphericalPotentialwCutoff and PlummerPotential.

Finally, galpy can now also be compiled to WebAssembly using the emscripten compiler, as part of the pyodide project. This allows for galpy use in the browser without installation at near-C speeds. See Using galpy in web applications for more info. This, for example, powers the new "Try galpy" interactive session on this documentation's home page.

Version 1.7 adds many new features, mainly in the galpy.potential and galpy.df modules. The biggest new additions are:
• A general framework for spherical distribution functions defined using \(f(E,L)\) models. Specifically, general solutions for (a) isotropic \(f(E)\) models, (b) \(f(E,L)\) models with constant anisotropy \(\beta\), and (c) \(f(E,L)\) models with Osipkov-Merritt-type anisotropy are implemented for any potential/density pair (not necessarily self-consistent). These distribution functions can be evaluated, sampled exactly, and any moment of the distribution function can be calculated. Documentation of this is currently available at Spherical distribution functions. Distribution functions with constant anisotropy require JAX.
• In addition to the general solution, explicit distribution functions for a few well-known models were added, including (a) Hernquist distribution functions that are isotropic, have constant anisotropy, or have Osipkov-Merritt-type anisotropy; (b) an isotropic Plummer profile; (c) the isotropic NFW profile (either using the approximation from Widrow 2000 or using an improved approximation) and the Osipkov-Merritt NFW profile (a new approximate form); (d) the King model (also added as a potential as KingPotential).

Other new additions include:
• New or improved potentials and potential wrappers:
• Other changes to Potential classes, methods, and functions:
□ Functions to compute the SCF density/potential expansion coefficients based on an N-body representation of the density (scf_compute_coeffs_spherical_nbody, scf_compute_coeffs_axi_nbody, and scf_compute_coeffs_nbody).
□ An NFWPotential can now be initialized using rmax/vmax, the radius and value of the maximum circular velocity.
□ Potential functions and methods to compute the zero-velocity curve: zvc and zvc_range. The latter computes the range in R over which the zero-velocity curve is defined; the former gives the positive z position on the zero-velocity curve for a given radius in this range.
□ rhalf Potential function/method for computing the half-mass radius.
□ tdyn Potential function/method for computing the dynamical time using the average density.
□ Potential.mass now always returns the mass within a spherical shell if only one argument is given. Implemented faster versions of many mass implementations using Gauss' theorem (including SCFPotential and DiskSCFPotential). A short sketch exercising several of these additions is given below.
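The following hedged sketch exercises a few of the v1.7 additions listed above (rmax/vmax initialization, Potential.mass, tdyn, and rhalf). The numerical values are arbitrary, and the exact call signatures should be checked against the API documentation for your galpy version.

from astropy import units as u
from galpy.potential import NFWPotential, HernquistPotential, rhalf, tdyn

halo = NFWPotential(rmax=16. * u.kpc, vmax=175. * u.km / u.s)  # initialize from rmax/vmax
print(halo.mass(20. * u.kpc))    # mass within a spherical radius of 20 kpc
print(tdyn(halo, 20. * u.kpc))   # dynamical time from the mean density within 20 kpc

bulge = HernquistPotential(amp=2. * 5e9 * u.Msun, a=0.5 * u.kpc)
print(rhalf(bulge))              # half-mass radius (only meaningful for finite-mass profiles)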
□ Mixed azimuthal-vertical second derivatives for all non-axisymmetric potentials in the function evaluatephizderivs and the method phizderiv. Now all second derivatives in cylindrical coordinates are implemented.
□ Function/method plotSurfaceDensities/plotSurfaceDensity for plotting, you'll never guess, the surface density of a potential.
□ Re-implementation of DoubleExponentialDiskPotential using the double-exponential formula for integrating Bessel functions, resulting in a simpler, more accurate, and more stable implementation. This potential is now accurate to ~machine precision.
□ Potentials are now, as much as possible, numerically stable at r=0 and r=inf, meaning that they can be evaluated there.

Other additions and changes include:
□ Added the inverse action-angle transformations for the isochrone potential (in actionAngleIsochroneInverse) and for the one-dimensional harmonic oscillator (in actionAngleHarmonicInverse). Also added the action-angle calculation for the harmonic oscillator in actionAngleHarmonic. Why yes, I have been playing around with the TorusMapper a bit!
□ Renamed galpy.util.bovy_coords to galpy.util.coords, galpy.util.bovy_conversion to galpy.util.conversion, and galpy.util.bovy_plot to galpy.util.plot (but the old from galpy.util import bovy_X will keep working for now). Also renamed some other internal utility modules in the same way (bovy_symplecticode, bovy_quadpack, and bovy_ars; these are not kept backwards-compatible). Trying to make the code a bit less egotistical!
□ Support for Python 3.9.

This version mainly consists of changes to the internal functioning of galpy; some of the new outward-facing features are:
• ChandrasekharDynamicalFrictionForce is now implemented in C, leading to 100x to 1000x speed-ups for orbit integrations using dynamical friction compared to the prior pure-Python version.
• New potentials:
• Some notable internal changes:
□ Fixed a bug in how DiskSCFPotential instances are passed to C for orbit integration that in particular affected the McMillan17 Milky-Way potential (any hole in the surface density was effectively ignored in the C code in v1.5).
□ The performance of orbit animations is significantly improved.
□ All main galpy C extensions are now compiled into a single shared-object library, libgalpy.
□ Binary wheels are now automatically built for Windows, Mac, and most major Linux distributions upon every push to the master (now main) branch, and these are automatically uploaded to PyPI upon release. See the Installation Instructions for more info. Binary wheels on Windows are also built for every push on AppVeyor; see the Windows installation instructions.

This version will be the last to support Python 2.7, as this version of Python reaches end-of-life on January 1, 2020.
• This version's highlight is a fully re-written implementation of galpy.orbit.Orbit such that it can now contain and manipulate multiple objects at once. galpy.orbit.Orbit can be initialized with an arbitrary shape of input objects in a variety of ways, manipulated in a manner similar to Numpy arrays, and all Orbit methods work efficiently on Orbit instances containing multiple objects. Some methods, such as orbit integration and those for fast orbital characterization, are parallelized on multi-core machines. Orbit instances can now contain and manipulate millions of objects simultaneously.
• Added the galpy.potential.mwpotentials module with various Milky-Way-like potentials.
Currently included are MWPotential2014, McMillan17 for the potential from McMillan (2017), models 1 through 4 from Dehnen & Binney (1998), and the three models from Irrgang et al. (2013). See this section of the API documentation for details.
• Added a (JSON) list with the phase-space coordinates of known objects (mainly Milky Way globular clusters and dwarf galaxies) for easy Orbit.from_name initialization. For ease of use, Orbit.from_name also supports tab completion for known objects in this list in IPython/Jupyter.
• Added galpy.potential.to_amuse to create an AMUSE representation of any galpy potential, allowing galpy potentials to be used as external gravitational fields in AMUSE N-body simulations.
• New or improved potentials and potential wrappers:
□ MovingObjectPotential: Re-wrote potential.MovingObjectPotential to allow general mass distributions for the moving object, now implemented as standard galpy potentials. Also added a C implementation of this potential for fast orbit integration.
□ IsothermalDiskPotential: The one-dimensional potential of an isothermal self-gravitating disk (sech^2 profile).
□ NumericalPotentialDerivativesMixin: a Mixin class to add numerically-computed forces and second derivatives to any Potential class, allowing new potentials to be implemented quickly by only implementing the potential itself and obtaining all forces and second derivatives numerically.
□ DehnenSmoothWrapperPotential: Can now decay rather than grow a potential by setting decay=True.
□ Added support to combine Potential instances or lists thereof through the addition operator. E.g., pot= pot1+pot2+pot3 creates the combined potential of the three component potentials (pot1,pot2,pot3). Each of these components can be a combined potential itself. As before, combined potentials are simply lists of potentials, so this is simply an alternative (and perhaps more intuitive) way to create these lists.
□ Added support to adjust the amplitude of a Potential instance through multiplication of the instance by a number or through division by a number. E.g., pot= 2.*pot1 returns a Potential instance that is the same as pot1, except that the amplitude is twice as large. Similarly, pot= pot1/2. decreases the amplitude by a factor of two. This is useful, for example, to quickly change the mass of a potential. Only works for Potential instances, not for lists of Potential instances. A short sketch illustrating addition and rescaling appears at the end of these release notes.
• New or improved galpy.orbit.Orbit functionality and methods:
□ Added support for 1D orbit integration in C.
□ Added support to plot arbitrary combinations of the basic Orbit attributes by giving them as an expression (e.g., orb.plot(d2='vR*R/r+vz*z/r')); requires the numexpr package.
□ Switched the default Sun's vertical height zo parameter for Orbit initialization to the value of 20.8 pc from Bennett & Bovy (2019).
□ Added a Python and C implementation of the Dormand-Prince 8(5,3) integrator.
• Added dynamical friction as the ChandrasekharDynamicalFrictionForce class, an implementation of dynamical friction based on the classical Chandrasekhar formula (with recent tweaks from the literature to better represent the results from N-body simulations).
• A general EllipsoidalPotential superclass for implementing potentials with densities that are constant on ellipsoids (functions of \(m^2 = x^2 + y^2/b^2 + z^2/c^2\)). Also implemented in C.
Implementing new types of ellipsoidal potentials now only requires three simple functions to be defined: the density as a function of m, its derivative with respect to m, and its integral with respect to m^2. This makes implementing any ellipsoidal potential a breeze. See examples in the new-potentials section below.
• New or improved potentials and potential wrappers:
□ CorotatingRotationWrapperPotential: wrapper to make a pattern (e.g., a SpiralArmsPotential) wind up over time such that it is always corotating (see Hunt et al. (2018) for an example of this).
□ GaussianAmplitudeWrapperPotential: wrapper to modulate the amplitude of a (list of) Potential(s) with a Gaussian.
□ PerfectEllipsoidPotential: Potential of a perfect triaxial ellipsoid (de Zeeuw 1985).
□ SphericalShellPotential: Potential of a thin, spherical shell.
□ RingPotential: Potential of a circular ring.
□ Re-implemented TwoPowerTriaxialPotential, TriaxialHernquistPotential, TriaxialJaffePotential, and TriaxialNFWPotential using the general EllipsoidalPotential class.
• New Potential methods and functions:
• New or improved galpy.orbit.Orbit functionality and methods:
□ Orbit.from_name to initialize an Orbit instance from an object's name. E.g., orb= Orbit.from_name('LMC').
□ Orbit initialization without arguments is now the orbit of the Sun.
□ Orbits can be initialized with a SkyCoord.
□ The default solarmotion= parameter is now 'schoenrich' for the Solar motion of Schoenrich et al. (2010).
□ rguiding: Guiding-center radius.
□ Lz: vertical component of the angular momentum.
□ If the astropy version is > 3, the Orbit.SkyCoord method returns a SkyCoord object that includes the velocity information and the Galactocentric frame used by the Orbit instance.
• galpy.df.jeans module with tools for Jeans modeling. Currently only contains the functions sigmar and sigmalos to calculate the velocity dispersion in the radial or line-of-sight direction using the spherical Jeans equation in a given potential, density profile, and anisotropy profile (the anisotropy can be radially varying).
• Support for compilation on Windows with MSVC.
• A fast and precise method for approximating an orbit's eccentricity, peri- and apocenter radii, and maximum height above the midplane using the Staeckel approximation (see Mackereth & Bovy 2018). Can determine these parameters to better than a few percent accuracy in as little as 10 \(\mu\mathrm{s}\) per object, more than 1,000 times faster than through direct orbit integration. See this section of the documentation for more info.
• A general method for modifying Potential classes through potential wrappers: simple classes that wrap existing potentials to modify their behavior. See this section of the documentation for examples and this section for information on how to easily define new wrappers. Example wrappers include SolidBodyRotationWrapperPotential to allow any potential to rotate as a solid body and DehnenSmoothWrapperPotential to smoothly grow any potential. See this section of the galpy.potential API page for an up-to-date list of wrappers.
• New or improved potentials:
• New or improved galpy.orbit.Orbit methods:
□ Method to display an animation of an integrated orbit in jupyter notebooks: Orbit.animate. See this section of the documentation.
□ Improved the default method for fast calculation of eccentricity, zmax, rperi, rap, actions, frequencies, and angles by switching to the Staeckel approximation with automatically-estimated approximation parameters (a minimal usage sketch is given below).
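A minimal sketch of the fast, integration-free orbit characterization referred to above. The analytic= and type= keywords follow the documented Orbit API, and the initial conditions are arbitrary example values.

from galpy.orbit import Orbit
from galpy.potential import MWPotential2014

o = Orbit([1.0, 0.1, 1.1, 0.0, 0.25, 0.0])
# Staeckel approximation with an automatically estimated focal length delta;
# no orbit integration is required.
print(o.e(analytic=True, type='staeckel', pot=MWPotential2014))      # eccentricity
print(o.zmax(analytic=True, type='staeckel', pot=MWPotential2014))   # maximum height above the plane
print(o.rperi(analytic=True, type='staeckel', pot=MWPotential2014))  # pericenter radius
print(o.rap(analytic=True, type='staeckel', pot=MWPotential2014))    # apocenter radius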
□ Improved plotting functions: plotting of spherical radius and of arbitrary user-supplied functions of time in Orbit.plot, Orbit.plot3d, and Orbit.animate.
• actionAngleStaeckel upgrades:
□ actionAngleStaeckel methods now allow for different focal lengths delta for different phase-space points and for the order of the Gauss-Legendre integration to be specified (default: 10, which is good enough when using actionAngleStaeckel to compute approximate actions etc. for an axisymmetric potential).
□ Added an option to the estimateDeltaStaeckel function to return an estimated delta parameter at every phase-space point passed, rather than a single median estimate.
• galpy.df.schwarzschilddf: the simple Schwarzschild distribution function for a razor-thin disk (useful for teaching).
• Full support for providing inputs to all initializations, methods, and functions as astropy Quantities with units, and for providing outputs as astropy Quantities.
• galpy.potential.TwoPowerTriaxialPotential, a set of triaxial potentials with iso-density contours that are arbitrary, similar, coaxial ellipsoids whose 'radial' density is a (different) power law at small and large radii: \(1/[m^{\alpha}\,(1+m)^{\beta-\alpha}]\) (the triaxial generalization of TwoPowerSphericalPotential, with flattening in the density rather than in the potential; includes triaxial Hernquist and NFW potentials).
• galpy.potential.SCFPotential, a class that implements general density/potential pairs through the basis-expansion approach to solving the Poisson equation of Hernquist & Ostriker (1992). Also implemented functions to compute the coefficients for a given density function. See more explanation here.
• galpy.actionAngle.actionAngleTorus: an experimental interface to Binney & McMillan's TorusMapper code for computing positions and velocities for given actions and angles. See the installation instructions for how to properly install this. See this section and the galpy.actionAngle API page for documentation.
• galpy.actionAngle.actionAngleIsochroneApprox (Bovy 2014) now implemented for the general case of a time-independent potential.
• galpy.df.streamgapdf, a module for modeling the effect of a dark-matter subhalo on a tidal stream. See Sanders et al. (2016). Also includes the fast methods for computing the density along the stream and the stream track for a perturbed stream from Bovy et al. (2016).
• Orbit.flip can now flip the velocities of an orbit in-place by specifying inplace=True. This allows correct velocities to be easily obtained for backwards-integrated orbits.
• galpy.potential.PseudoIsothermalPotential, a standard pseudo-isothermal-sphere potential.
• galpy.potential.KuzminDiskPotential, a razor-thin disk potential.
• Internal transformations between equatorial and Galactic coordinates are now performed by default using astropy's coordinates module. Transformation of (ra,dec) to Galactic coordinates for general epochs (a one-line example is given below).
• Full support for Python 3.
• galpy.potential.SnapshotRZPotential, a potential class that can be used to get a frozen snapshot of the potential of an N-body simulation.
• Various other potentials: PlummerPotential, a standard Plummer potential; MN3ExponentialDiskPotential, an approximation to an exponential disk using three Miyamoto-Nagai potentials (Smith et al. 2015); KuzminKutuzovStaeckelPotential, a Staeckel potential that can be used to approximate the potential of a disk galaxy (Batsleer & Dejonghe 1994).
• Support for converting potential parameters to NEMO format and units.
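The one-line example of the equatorial-to-Galactic transformation referred to above; the radec_to_lb function and its degree=/epoch= keywords are part of galpy.util.coords (formerly bovy_coords), and the input coordinates here are arbitrary.

from galpy.util import coords

# (ra, dec) in degrees -> array([l, b]) in degrees, for epoch J2000
lb = coords.radec_to_lb(13.23, -72.5, degree=True, epoch=2000.0)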
• Orbit fitting in custom sky coordinates.
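Finally, the sketch promised in the v1.5 notes above: combining Potential instances with the addition operator and rescaling them through multiplication or division. The component classes are standard galpy potentials, but the parameter values below are arbitrary illustrative choices rather than a fit to any galaxy.

from galpy.potential import (HernquistPotential, MiyamotoNagaiPotential,
                             NFWPotential)

bulge = HernquistPotential(a=0.075, normalize=0.05)
disk = MiyamotoNagaiPotential(a=0.5, b=0.0375, normalize=0.6)
halo = NFWPotential(a=4.5, normalize=0.35)

mw_like = bulge + disk + halo   # combined potential: simply a list of the components
heavier_halo = 2. * halo        # same profile, twice the amplitude
lighter_disk = disk / 2.        # half the amplitude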
{"url":"https://docs.galpy.org/en/v1.10.0/whatsnew.html","timestamp":"2024-11-07T20:07:10Z","content_type":"text/html","content_length":"61585","record_id":"<urn:uuid:247db880-16a4-493b-b4b3-a95185c1d91d>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00116.warc.gz"}
Thomae's Theorem -- from Wolfram MathWorld

Thomae's theorem, also called Thomae's transformation, is an identity for the generalized hypergeometric function \({}_3F_2\) at unit argument: it expresses one such \({}_3F_2\) in terms of another, with a prefactor built from gamma functions, and is closely related to Dixon's theorem (Slater 1966, p. 52); the standard form of the identity is reproduced below. An equivalent, symmetric formulation is given by Hardy (1999, p. 104). The symmetry of this form was used by Ramanujan in his proof of the identity, which is essentially the same as Thomae's. Interestingly, this is one of the few cases in which Ramanujan gives an explicit proof of one of his propositions (Hardy 1999, p. 104). A special case of the theorem was noted by J. Sondow (pers. comm., May 25, 2003).
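The displayed identity did not survive extraction from the page; in the form in which it is usually quoted in the literature (e.g., Slater 1966), with \(s = d + e - a - b - c\), Thomae's transformation reads:

\[
{}_{3}F_{2}(a,\,b,\,c;\;d,\,e;\,1)
  = \frac{\Gamma(d)\,\Gamma(e)\,\Gamma(s)}{\Gamma(a)\,\Gamma(s+b)\,\Gamma(s+c)}\;
    {}_{3}F_{2}(d-a,\,e-a,\,s;\;s+b,\,s+c;\,1),
\qquad s = d + e - a - b - c,
\]

where \(\Gamma\) is the gamma function and \({}_3F_2\) is the generalized hypergeometric function.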
{"url":"https://mathworld.wolfram.com/ThomaesTheorem.html","timestamp":"2024-11-05T21:26:44Z","content_type":"text/html","content_length":"57887","record_id":"<urn:uuid:5e6e1e2c-aacc-472f-b3e3-ff80dca250af>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00714.warc.gz"}
Fostering students' conceptions about the quantum world – results of an interview study
... In addition, there are other research studies in the education literature with the same sample size, dedicated to different areas such as quantum physics [35], mathematics [36], technology [37], or physical education [38]. ...
{"url":"https://www.researchgate.net/publication/351330251_Fostering_students'_conceptions_about_the_quantum_world_-_results_of_an_interview_study","timestamp":"2024-11-14T21:16:48Z","content_type":"text/html","content_length":"776360","record_id":"<urn:uuid:f7da0400-394a-44d9-9463-77922ed8dd6d>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00661.warc.gz"}